Re: [boost] [multi_index][hash][rfc] Hashed indices preview +Boost.Hash

Hi John, thanks for your comments!

----- Original message -----
From: John Maddock <john@johnmaddock.co.uk>
Date: Saturday, February 19, 2005 6:02 pm
Subject: Re: [boost] [multi_index][hash][rfc] Hashed indices preview +Boost.Hash [...]
1.4. Usual implementations use one pointer per bucket, whereas Boost.MultiIndex has two pointers per bucket. This addresses potential performance problems of insert() and erase() under low load conditions, as discussed in http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1756.pdf issue 6.14. Is this convenient?
Well it's not unreasonable, you need what you need.
BTW, have you read the committee's resolution to this issue? It doesn't make any sense to me (Bill Wade's concern, OTOH, seems very reasonable.)
6. Yet, this hasn't undergone any formal review, so some of you might (with reason) object to its being committed to the CVS. From our point of view, we have three valid alternatives:
* Boost members agree to have it in CVS without more ado.
* As this is used by Boost.MultiIndex, Boost.Hash is suitable for fast-track review.
* This is intolerable and the library should be push_back()'ed to the review queue. Meanwhile, Boost.Hash should live as an impl detail of Boost.MultiIndex.
I don't see how you can place it in a detail namespace if users are going to be expected to (possibly) provide their own specialisations of the hash function object.
I've no objection to either a fast track review, or just adding to cvs,
I think Daniel is going to ask for a fast-track review, and Thorsten volunteered to manage it. In the meantime, I'll commit my update and Boost.Hash code only, and wait for the outcome of Daniel's review --it'll be his responsibility to upload docs, tests, etc. if the lib is accepted.

Best,

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo

On Feb 20, 2005, at 10:21 AM, JOAQUIN LOPEZ MUÑOZ wrote:
1.4. Usual implementations use one pointer per bucket, whereas Boost.MultiIndex has two pointers per bucket. This addresses potential performance problems of insert() and erase() under low load conditions, as discussed in http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1756.pdf issue 6.14. Is this convenient?
Well it's not unreasonable, you need what you need.
BTW, have you read the committee's resolution to this issue? It doesn't make any sense to me (Bill Wade's concern, OTOH, seems very reasonable.)
Imho the committee's resolution is quite logical. The hash containers exist for one reason: as an optimized solution, in both memory consumption and speed, compared to the tree-based containers. The speed department especially concerns lookup, insert and erase (in that order), but not iteration (there are other containers which optimize iteration, vector being the extreme example). The coder who carefully tunes his hash function and loading will be rewarded with significant optimization. The coder who doesn't may be presented with a solution that performs worse than the tree-based containers, and is better off sticking with the O(log N) solution.

Imho, the committee should do absolutely nothing to penalize the coder who has carefully tuned his hash function and loading, in either memory consumption or speed. Of course, if hash container writers would like to add safeguards which may also add such overhead, I see little rationale for the committee to object to that either.

Doubling the per-element storage overhead for a perfect distribution with a loading of 1 is a serious hit in the memory consumption department, and may well lead to a performance hit due to more L2 cache misses. Adding bidirectional iteration, again at the cost of doubling the per-element overhead in a well-tuned application, makes little sense to me. The container is unordered. When you iterate in the forward direction, you access the elements in a generally unpredictable, unordered order. What motivation is there (motivation high enough to double per-element overhead) to iterate in the reverse of that generally unpredictable, unordered order?

Again, I have no wish for the committee to outlaw such hash containers. But I strongly feel that neither should they outlaw the more traditional hash container design, which favors uncompromising performance when everything is tuned just right.

-Howard
participants (2)
- Howard Hinnant
- JOAQUIN LOPEZ MUÑOZ