
Peter Dimov wrote:
Boris Mansencal wrote:
return ((size_t)p.x()<<1)*P_width + (size_t)(p.y()<<1);
What type does p.x() have in your case, and why do you shift it left by one? In principle, this discards one bit of entropy, if the bucket count is even (but not when it's prime).
Why do you multiply p.x() by the width? Isn't x() supposed to be within 0..width and y() within 0..height? It seems that you need to either multiply y() by width or x() by height.
Indeed, return (size_t)(p.x()*P_height + p.y()) seems to give slightly better results... (but I still do not really grasp why.)
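For what it's worth, the reason: with 0 <= x() < width and 0 <= y() < height, x()*height + y() gives every grid point a distinct value in [0, width*height), so collisions can only come from the later reduction modulo the bucket count, whereas x()*P_width + y() can map two different in-range points to the same value (e.g. (0, width) and (1, 0) whenever width < height). A minimal sketch of that arithmetic, with the height passed in explicitly just for illustration (the version above reads a constant P_height):

    #include <cstddef>

    // With 0 <= x < width and 0 <= y < height, x * height + y takes a
    // distinct value in [0, width * height) for every grid point, so
    // collisions can only come from the bucket-count reduction.
    std::size_t grid_hash(std::size_t x, std::size_t y, std::size_t height)
    {
        return x * height + y;
    }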
Is there no solution to my question?
You can't have a context-dependent hash_value, but you can pass a 'Hasher' function object to unordered_map that stores your context.
Exactly. This is the solution I was looking for.
I can do:
class ptHasher
: public std::unary_function
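For completeness, a minimal self-contained sketch of such a stateful hasher; the point type Pt and its accessors are hypothetical stand-ins for the real class, and deriving from std::unary_function is not actually required (it was deprecated in C++11 and removed in C++17), since unordered_map only needs operator():

    #include <cstddef>
    #include <boost/unordered_map.hpp>

    // Hypothetical point type standing in for the real one.
    struct Pt
    {
        int x_, y_;
        int x() const { return x_; }
        int y() const { return y_; }
        bool operator==(Pt const& o) const { return x_ == o.x_ && y_ == o.y_; }
    };

    // Hasher that carries the grid height as context instead of
    // reading a global P_height.
    class ptHasher
    {
    public:
        explicit ptHasher(std::size_t height) : height_(height) {}

        std::size_t operator()(Pt const& p) const
        {
            return static_cast<std::size_t>(p.x()) * height_
                 + static_cast<std::size_t>(p.y());
        }

    private:
        std::size_t height_;
    };

    int main()
    {
        // The hasher instance (and with it its context) is passed to the
        // constructor; std::unordered_map accepts a hasher the same way.
        boost::unordered_map<Pt, int, ptHasher> m(16, ptHasher(480));
        Pt p = { 10, 20 };
        m[p] = 1;
    }

Two maps built over images of different sizes can then each carry their own height, without any global state.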
Depending on the typical values of your width, it may also be possible to just use a fixed width of, say, 65536.
Yes.
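With a fixed factor like 65536, the multiplication also turns into a shift; a sketch assuming the other coordinate always stays below 65536:

    #include <cstddef>

    // With a fixed factor of 65536, x and y occupy disjoint bit ranges,
    // so the result is collision-free as long as 0 <= y < 65536.
    std::size_t grid_hash_fixed(std::size_t x, std::size_t y)
    {
        return (x << 16) | y;   // same as x * 65536 + y when y < 65536
    }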
Unfortunately, there are no "context free" rules regarding hash functions; you have to find one that works best in your particular situation, even if it doesn't make much sense. :-) One problem with this approach is that it can tie you to a particular unordered_map implementation. We've tried to make hash_combine work adequately for as many cases as possible, but this makes it a bit slower to compute.
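For comparison, the generic hash_combine approach mentioned here would look roughly like this (Pt again being a hypothetical stand-in for the real point type):

    #include <cstddef>
    #include <boost/functional/hash.hpp>

    // Hypothetical point type.
    struct Pt
    {
        int x_, y_;
        int x() const { return x_; }
        int y() const { return y_; }
    };

    // Found via argument-dependent lookup when boost::hash<Pt> is used,
    // e.g. as the default hasher of boost::unordered_map<Pt, ...>.
    std::size_t hash_value(Pt const& p)
    {
        std::size_t seed = 0;
        boost::hash_combine(seed, p.x());
        boost::hash_combine(seed, p.y());
        return seed;
    }

Unlike the width-based formula, this needs no context at all, at the cost of being a bit slower to compute.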
Thanks a lot, Boris.