
std::vector + std::push|pop_heap    2.35  3.36  2.71
std::priority_queue<vector>         2.58  3.48  2.84
Hmm... I know that these aren't significantly different times, but they should be much closer together since, AFAIK, std::priority_queue's default implementation is a std::vector managed with push_heap/pop_heap. There should be virtually no difference between the two (unless I'm misreading something).
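For concreteness, here is a minimal sketch of the two approaches being compared; the loop structure and element type of the actual benchmark are assumptions, not the original harness:

    // std::vector managed manually with the heap algorithms
    #include <algorithm>
    #include <queue>
    #include <vector>

    int main() {
        std::vector<int> v;
        v.push_back(42);
        std::push_heap(v.begin(), v.end());
        std::pop_heap(v.begin(), v.end());
        int top_v = v.back();
        v.pop_back();

        // std::priority_queue adapts a std::vector by default and uses the
        // same push_heap/pop_heap algorithms internally
        std::priority_queue<int> pq;
        pq.push(42);
        int top_q = pq.top();
        pq.pop();

        return top_v == top_q ? 0 : 1;
    }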
I was unable to try the proposed d_ary_heap because I couldn't find a working syntax for the arity template parameter - any ideas anyone?
I think you're supposed to use d_ary_heap<T, arity<N>>. I filed a defect about this a couple of days ago :)
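Something along these lines is what I'd expect to compile - untested against the current review version, so treat the exact header path and option spelling as assumptions:

    #include <boost/heap/d_ary_heap.hpp>

    int main() {
        // a 4-ary max-heap over ints; arity is passed as a template "option"
        boost::heap::d_ary_heap<int, boost::heap::arity<4>> heap;
        heap.push(3);
        heap.push(7);
        heap.push(1);
        int top = heap.top();   // 7
        heap.pop();
        return top == 7 ? 0 : 1;
    }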
I was expecting the b_heap to be faster, but it is not; it is substantially slower. What is going on there? Does it all come down to data size, cache size, etc.? Ideally, an "improved" implementation like this would be able to fall back to the naive implementation in cases where it does not perform any better. Or maybe I'm using it incorrectly.
That's a good question. The data structure is supposed to be cache-friendly. What's the sizeof(work item)? It looks like it could be "pretty big". Maybe large enough that it breaks the cache friendliness.
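If the work item really is large, one hypothetical workaround (not from the original benchmark) is to keep only a small key/index pair in the heap and the payload in a side array, so the heap nodes stay small and densely packed:

    #include <cstdint>
    #include <queue>
    #include <vector>

    struct WorkItem {            // stand-in for the benchmark's "pretty big" item
        double priority;
        char payload[120];
    };

    struct HeapEntry {           // 16 bytes instead of sizeof(WorkItem)
        double priority;
        std::uint32_t index;     // index into the side array below
        bool operator<(const HeapEntry& other) const { return priority < other.priority; }
    };

    int main() {
        std::vector<WorkItem> items(3);
        std::priority_queue<HeapEntry> pq;
        for (std::uint32_t i = 0; i < items.size(); ++i) {
            items[i].priority = static_cast<double>(i);
            pq.push(HeapEntry{items[i].priority, i});
        }
        const WorkItem& next = items[pq.top().index];  // fetch payload only when needed
        (void)next;
        return 0;
    }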
My understanding is that the Fibonacci, binomial and other such heaps only improve things when additional operations, e.g. merging, are needed - is that correct?
That's also my understanding. Their node-based nature makes them less performant, unless you're merging them.

Andrew
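A rough sketch of the merge operation that motivates those node-based heaps, assuming Boost.Heap's fibonacci_heap interface:

    #include <boost/heap/fibonacci_heap.hpp>

    int main() {
        boost::heap::fibonacci_heap<int> a, b;
        a.push(1);
        a.push(5);
        b.push(3);
        a.merge(b);              // splices b's nodes into a; b is left empty
        return (a.size() == 3 && b.empty()) ? 0 : 1;
    }

If you never merge, the array-based heaps above are usually the better fit.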