
Thorsten Ottosen wrote:
| so basically you're saying that any collision is much worse than a slower
| hash function, because we must use == to go through the bucket?

I would say that if you are designing your own hash function for your own
use, then you can do whatever you find most convenient, and try to optimise.
In general you never want to risk having too many hash collisions (or, even
worse, having everything hash to the same value), especially for just a
constant-factor speed increase, as suddenly all your operations can go from
being mostly O(1) (on average) to mostly O(n) -- and you know someone is
going to trigger that. (In particular, I've built hash tables of strings
that mostly don't differ at the start, while doing genetic evolution.)

Chris