Good point. So then perhaps t_replicate() [AND] save_memory() should be
used as normal, and SER should then just start up using a lazy-loading
mechanism. Eventually every SER router in the farm would have a
fully populated cache and the problem would be solved.<br>
<br>
In other words, the SER server that physically receives the original REGISTER message would call save() and t_replicate().<br>
<br>
All the peers in the farm that receive a REGISTER via t_replicate() would use only save_memory().<br>
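For concreteness, the split might look roughly like this in ser.cfg. This is only a sketch, not a tested config: the peer address 10.0.0.2 and the source-IP check used to recognize replicated REGISTERs are assumptions I'm making for illustration.

```
# Hypothetical ser.cfg fragment (sketch only).
# Assumes 10.0.0.2 is the peer SER server in the farm.
if (method == "REGISTER") {
    if (src_ip == 10.0.0.2) {
        # REGISTER arrived via t_replicate() from a peer:
        # update the in-memory cache only; no DB write,
        # and do not replicate it again
        save_memory("location");
    } else {
        # Original REGISTER from the UA: write through to
        # the DB (db_mode=writeback) and replicate to the peer
        save("location");
        t_replicate("sip:10.0.0.2:5060");
    };
};
```

With this arrangement only the server that handled the original REGISTER touches MySQL, so DB replication stays the single path by which registrations reach the other nodes' databases.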
<br>
MySQL replication still occurs, and if a SER server is restarted it
doesn't attempt to load all usrloc info at startup, but rather loads it
over a period of time. In the meantime, if a usrloc record is looked up
and it is not in the cache, SER would query MySQL for the correct
ucontact record.<br>
<br>
Thanks for the question --- I hadn't thought about that before.<br>
<br>
Regards,<br>
Paul<br><br><div><span class="gmail_quote">On 5/30/05, <b class="gmail_sendername">Karl H. Putz</b> <<a href="mailto:kputz@columbus.rr.com">kputz@columbus.rr.com</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
>-----Original Message-----On Behalf Of Jiri Kuthan<br>>Sent: Monday, May 30, 2005 5:36 AM<br>>At 03:40 AM 5/30/2005, Java Rockx wrote:<br>>>Currently, usrloc is replicated via t_replicate() using db_mode=writeback.
<br>>><br>>>However, our lazy-load patch would obsolete the need for<br>>t_replicate() because we have multiple MySQL servers that are<br>>active-active so __all__ replication really occurs at the database
<br>>layer rather than the SIP layer.<br>><br>>So this is the point which I am still struggling with. I mean<br>>generally there is a problem<br>>of read-write intensive UsrLoc operations. We can move it from
<br>>SIP to DB. However, whichever<br>>layer we choose to solve the problem, it takes careful<br>>dimensioning. Otherwise the<br>>replication mechanism may cause performance problems.<br>><br>>What Mysql setup are you exactly using? Cluster? Master/slave replication?
<br>><br>>Otherwise I think that the cache policy "load-on-demand" makes sense.<br><br>If pure DB replication is used, what would happen in the following scenario:<br><br>A given user receives multiple calls such that more than 1 physical SER
<br>server has usrloc<br>cache populated.<br><br>The user then physically moves or changes return contact registration<br>information and re-registers.<br><br>It seems that the specific SER server that handled the registration would
<br>update cache and the<br>backend DB would be updated. But any attempt to contact the user through a<br>SER server that has<br>not yet expired the old cache info would fail.<br><br><br>Karl<br><br>><br>>-jiri<br>
><br>>_______________________________________________<br>>Serusers mailing list<br>><a href="mailto:serusers@lists.iptel.org">Serusers@iptel.org</a><br>><a href="http://lists.iptel.org/mailman/listinfo/serusers">http://mail.iptel.org/mailman/listinfo/serusers
</a><br>><br></blockquote></div><br>