
Daniel James wrote:
Tobias Schwinger wrote:
I am missing a way to parametrize how hash values are combined. This is especially desirable because the skeleton could then be exploited to implement double hashing for compound data (a common technique to avoid collisions by varying the hash function), and it can be useful in other situations (e.g. a small hash table in memory vs. a large one on disk).
I think if you want to combine hash values in a different manner, it's best to just write your own version of hash_combine. It could save a little code if we supplied a version of hash_range that works with different combining functions, but I don't think that would be very useful (and it would require that your hash_combine function have the same signature as ours; you might want to do it differently).
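For what it's worth, a minimal sketch of what that might look like; my_hash_combine and my_hash_range are made-up names, and the mixing step is deliberately arbitrary, not the one Boost uses:

    #include <cstddef>
    #include <boost/functional/hash.hpp>

    // Same signature as boost::hash_combine, different (illustrative) mixing.
    template <class T>
    inline void my_hash_combine(std::size_t& seed, const T& v)
    {
        boost::hash<T> hasher;
        seed = (seed * 131u) ^ hasher(v); // any mixing strategy you like
    }

    // A hash_range equivalent built on top of the custom combiner.
    template <class It>
    inline std::size_t my_hash_range(It first, It last)
    {
        std::size_t seed = 0;
        for (; first != last; ++first)
            my_hash_combine(seed, *first);
        return seed;
    }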
Well, this was not exactly the point I was trying to make. Example: let's say we have a disk-stored hash table and an in-memory hash table for caching. The same objects are stored in both, but it is desirable to have two different hashing functions. Unfortunately, boost::hash can be used for at most one of them.

If I wanted to implement double hashing, I'd have to duplicate the whole facility (which may mean writing a lot of "hash_value2" functions in a project that has grown large), because there can be only one 'hash_value' per type, and it can only use one algorithm for combining the hash values; the current design provides no way to get parametrization in there. Looking at the source reinforces this impression: hash_combine looks like a forgotten, hard-wired strategy inside something that actually wants to be a facade.
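To make the point concrete, here is a rough sketch of the kind of parametrization I mean; the policy names, the constants, and the basic_hasher facade are all hypothetical, not a proposal for the actual interface:

    #include <cstddef>
    #include <iterator>
    #include <boost/functional/hash.hpp>

    // Two combining policies; the constants are illustrative only.
    struct combine_a
    {
        static void apply(std::size_t& seed, std::size_t h)
        {
            seed ^= h + 0x9e3779b9 + (seed << 6) + (seed >> 2);
        }
    };

    struct combine_b
    {
        static void apply(std::size_t& seed, std::size_t h)
        {
            seed = (seed * 1000003u) ^ h;
        }
    };

    // One hashing facade, parametrized on the combining strategy.
    template <class Combine>
    struct basic_hasher
    {
        template <class It>
        std::size_t operator()(It first, It last) const
        {
            boost::hash<typename std::iterator_traits<It>::value_type> hasher;
            std::size_t seed = 0;
            for (; first != last; ++first)
                Combine::apply(seed, hasher(*first));
            return seed;
        }
    };

    // Two independent hash functions over the same element type, e.g.
    // one for the in-memory table and one for the table on disk.
    typedef basic_hasher<combine_a> memory_hasher;
    typedef basic_hasher<combine_b> disk_hasher;

With something along these lines, the per-type 'hash_value' overloads would be written once, yet the two tables could still hash differently.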
The for loop in hash_float_value is a good candidate for unrolling via metaprogramming.
Yes. For anyone who hasn't looked at the code: it loops a fixed number of times, based on the accuracy of the float type. Compilers could unroll that loop themselves, but that's probably too much to expect.
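A generic compile-time unroller along these lines would do it; 'unroll' and 'Body' are illustrative names, and packaging hash_float_value's loop body as such a Body is left as the obvious exercise:

    // Recursion over N yields N inlined calls to body.round(),
    // so there is no runtime loop left to unroll.
    template <int N>
    struct unroll
    {
        template <class Body>
        static void run(Body& body)
        {
            body.round();
            unroll<N - 1>::run(body);
        }
    };

    template <>
    struct unroll<0>
    {
        template <class Body>
        static void run(Body&) {}
    };

Since the iteration count derives from std::numeric_limits<T>::digits, a compile-time constant, N is known at instantiation time.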
Especially since it is very questionable whether these global 'unroll loops' compiler options do any good in the big picture.

Regards,
Tobias