[SR-Users] htable size

Alex Balashov abalashov at evaristesys.com
Mon Apr 26 13:51:23 CEST 2010


On 04/26/2010 07:48 AM, Henning Westerholt wrote:
> On Monday 26 April 2010, Alex Balashov wrote:
>>> I think this is a static setting, as the memory is allocated at startup
>>> in shared memory. Have you run into performance problems or memory size
>>> constraints with the htable module?
>>
>> None whatsoever, I was just curious from a theoretical perspective.
>>
>> The answer lets me know a priori whether the htable is optimal for
>> applications involving data sets that rapidly grow or shrink by
>> several factors.
>
> There is probably a point beyond which further increasing the htable size
> no longer makes much sense. The module only supports from 256 to 16384
> buckets in the htable, so depending on the distribution of the hash
> function, the htable will start to degenerate, up to a worst-case lookup
> complexity of O(n). The hash function used is core_case_hash from
> hash_func.c.
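The degeneration Henning describes can be sketched numerically. The snippet below is only a rough illustration (plain Python, with the built-in hash() standing in for core_case_hash, and made-up "key-%d" keys): with a fixed bucket count, the mean chain length is n/buckets, so once entries far outnumber buckets, each lookup trends toward an O(n) scan of a chain.

```python
def chain_lengths(n_entries, n_buckets):
    """Count how many entries land in each bucket of a chained hash table."""
    buckets = [0] * n_buckets
    for i in range(n_entries):
        # Python's hash() stands in for core_case_hash here.
        buckets[hash("key-%d" % i) % n_buckets] += 1
    return buckets

small = chain_lengths(100_000, 256)    # minimum supported bucket count
large = chain_lengths(100_000, 16384)  # maximum supported bucket count (2^14)

# With 256 buckets the average chain holds ~390 entries; with 16384
# buckets it holds ~6, so the scan per lookup is roughly 64x shorter.
print(max(small), max(large))
```

The same 100,000 entries land in chains two orders of magnitude longer at the minimum size, which is where the worst-case O(n) behaviour comes from.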

Indeed, I asked this question after seeing your git commit to the
documentation, which specifies that the maximum dimension is 2^14.

Is there a particular reason for the cap to be at 2^14?

-- 
Alex Balashov - Principal
Evariste Systems LLC
1170 Peachtree Street
12th Floor, Suite 1200
Atlanta, GA 30309
Tel: +1-678-954-0670
Fax: +1-404-961-1892
Web: http://www.evaristesys.com/
