<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.2900.2604" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff>
<DIV>Regarding stickiness: have you looked at ktcpvs? SIP is an
"HTTP-like" protocol, and I'm fairly sure you can use the HTTP-based regex
hashing to match on the Call-ID. If it doesn't work right out of the box,
the modifications should be minimal.</DIV>
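<DIV>If the ktcpvs route doesn't pan out, plain LVS can at least approximate stickiness at the transport level. A minimal ipvsadm sketch (all addresses are made up; source-hash scheduling plus persistence pins a given UA to the same ser box, though it hashes on source IP, not Call-ID):</DIV>
<DIV><PRE>
# VIP 10.0.0.100 answers SIP on UDP 5060; two ser boxes behind it
ipvsadm -A -u 10.0.0.100:5060 -s sh -p 3600            # source-hash + 1h persistence
ipvsadm -a -u 10.0.0.100:5060 -r 192.168.1.11:5060 -i  # -i = IP-in-IP tunneling
ipvsadm -a -u 10.0.0.100:5060 -r 192.168.1.12:5060 -i
</PRE></DIV>
<DIV>With tunneling, each real server also needs the VIP on a dummy/loopback interface so replies leave with the right source address, as discussed below.</DIV>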
<DIV>The user location problem: with a cluster back-end, I
too see save_memory() as the only option.</DIV>
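<DIV>For reference, the replication path I have in mind looks roughly like this in ser.cfg (a sketch only - the peer addresses and 0.9-style syntax are my assumptions, not tested config):</DIV>
<DIV><PRE>
# On the primary: store normally, then mirror the REGISTER to the peer
if (method == "REGISTER") {
    save("location");                   # writes usrloc + sends the 200 OK
    forward_tcp("192.168.1.12", 5060);  # replicate to the other ser
};

# On the peer, for replicated REGISTERs: memory-only, no reply, no DB write
if (method == "REGISTER" &amp;&amp; src_ip == 192.168.1.11) {
    save_memory("location");
};
</PRE></DIV>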
<DIV>g-)</DIV>
<DIV> </DIV>
<DIV>> "Greger V. Teigre" <greger@teigre.com> wrote:<BR>>>
Greger, thanks a lot.<BR>>> The problem with load balancer is that replies
goes to the wrong<BR>>> server due to rewriting outgoing a.b.c.d . BTW, as
Paul pointed, if<BR>>> you define some dummy interface with Virtual IP
(VIP), there is no<BR>>> need to rewrite outgoing messages (I tested this
a little).<BR>> <BR>> <BR>> Yes, if you use LVS with direct routing or
tunneling, that is what<BR>> you experience. <BR>> ===Of course. That why
I implemented small "session" stickness.<BR>> However, it causes additional
internal traffic. <BR>> <BR>> What I described was a "generic"
SIP-aware load balancer where SIP<BR>> messages would be rewritten and
stickiness implemented based on e.g.<BR>> the UA IP address (or the Call-ID, like
Vovida's load balancer). <BR>> ====Sure, it's a better solution; I think
we'll go this way soon (in<BR>> our next version). <BR>> <BR>>> Why is the
DNS approach bad (except for restricted NAT - let's say I am<BR>>> solving
this)?<BR>> <BR>> Well, IMO, DNS SRV in itself is not bad. It's just that
many user<BR>> clients do not support DNS SRV yet. Apart from that, I like
the concept<BR>> and it will give you geographical redundancy and load
balancing. <BR>> ===I am trying to build the following
architecture:<BR>> <BR>
> DNS (returns the domain's public IP)<BR>
> &nbsp;&nbsp;-> LVS + tunneling (Virtual IP)<BR>
> &nbsp;&nbsp;&nbsp;&nbsp;-> ser clusters (with private IPs)<BR>
> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|<BR>
> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;DB (MySQL 4.1 cluster)<BR>
> <BR>>> I guess Paul utilizes the
load-balancer scenario you have described.<BR>>> I believe there are only
proprietary solutions for<BR>>> "the replies problem". We tried Vovida's
call-id persistence package, but<BR>>> unfortunately it didn't work for
us.<BR>> <BR>> Are you referring to the load balancer proxy? IMHO, the
SIP-aware<BR>> load balancer makes things a bit messy. It sounds to me
like LVS<BR>> + tunneling/direct routing + a virtual IP on a dummy adapter is
a better<BR>> solution. <BR>> <BR>>> In my configuration
I use a shared remote DB cluster (with<BR>>> replication). Each ser sees it
as one public IP (exactly the approach<BR>>> you named for SIP). Maybe
it's a good idea to use local DB clusters,<BR>>> but if you have more than 2
servers your replication algorithm is gonna<BR>>> be complex. An additional
problem - it still doesn't solve usrloc<BR>>> synchronization - you still
have to use t_replicate()...<BR>> <BR>> <BR>> I'm not sure if I
understand.<BR>> ===Oh, I probably didn't express myself well
enough...<BR>> <BR>> So, you have 2 servers at two locations, each location
with a shared<BR>> DB and then replication across an IPsec tunnel??
<BR>> IMHO, MySQL 3.23.x two-way replication is quite
shaky and<BR>> dangerous to rely on. With no locking, you will easily
get<BR>> overwrites, and you have to be very sure that your application
doesn't<BR>> mess up the DB. I haven't looked at MySQL 4.1 clustering,
but from<BR>> the little I have seen, it looks good. Is that what you
use? <BR>> <BR>> ===We have 2 or more servers with a MySQL
4.1 virtual server (clusters<BR>> balanced by LVS). We use MySQL for
maintaining subscribers' accounts,<BR>> not for location. User location is
still in-memory only so far. I am<BR>> afraid I have to switch to ser 0.9 in
order to use save_memory() (thanks<BR>> Paul!) and forward_tcp() for
replication. <BR>> <BR>>> With regard to
t_replicate() - it doesn't work for more than 2<BR>>> servers, so I used
forward_tcp() and save_noreply() instead (you're<BR>>> absolutely right -
this works fine so far); all sers are happy. Of<BR>>> course, this causes
additional traffic. I wonder whether Paul's<BR>>> FIFO patch reduces
traffic between sers?<BR>> <BR>> I believe Paul uses forward_tcp() and
save_memory() to save the<BR>> location to the replicated server's memory,
while the<BR>> save("location") on the primary server will store to the DB
(which<BR>> then replicates on the DB level). <BR>>
g-)</DIV></BODY></HTML>