
Hi Jeremy,

Your implementation is great. I have one problem though: the number of buckets is way larger than it has to be. In many applications I know the size of the map I'm going to construct. Furthermore, I find that N elements of type T* or unsigned int can be put in a hash map/set with N (or very close to N) buckets without getting a max load factor above 1, so the distribution of elements is perfect. This is probably because my T* objects point to consecutive memory locations, or because my integers fall in a narrow range like [0,N]. Nevertheless, in such situations the container will allocate a huge number of buckets --- far more than needed, wasting too much memory.

I tried expanding your list of primes so that it contains more numbers in the small range, and in particular so that it contains primes closely above powers of 2. It happens frequently that powers of 2 come up. Here's my new list:

    static const std::size_t prime_list[] = {
        5ul, 11ul, 17ul, 29ul, 37ul, 53ul, 67ul, 79ul, 97ul, 131ul,
        193ul, 257ul, 389ul, 521ul, 769ul, 1031ul, 1543ul, 2053ul,
        3079ul, 6151ul, 12289ul, 24593ul, 49157ul, 98317ul, 196613ul,
        393241ul, 786433ul, 1572869ul, 3145739ul, 6291469ul,
        12582917ul, 25165843ul, 50331653ul, 100663319ul, 201326611ul,
        402653189ul, 805306457ul, 1610612741ul, 3221225473ul,
        4294967291ul
    };

In the tests I conducted, not once did I get a max load factor above 1 for containers of size 4, 5, 7, 9, and so on up to about 512.

Another alternative would be to allow users to control the initial bucket count explicitly in the constructor. I think I might like this better. Something like:

    boost::unordered_set<T*> the_set( boost::unordered::bucket_count(32u) );
    assert( the_set.bucket_count() == 32u );

Comments?

-Thorsten