
Tim Blechmann wrote:
Before there is a `formal' review announcement, I want to invite people to check out the code, examples, and documentation ...
One quick note: your benchmarks operate on just 16 items
Not quite: they operate on differently sized heaps (roughly 1 to 100000 elements), but each operation is performed 16 times (to get some kind of averaging).
OK, but the issue is that I don't trust that you can accurately measure execution times on the order of microseconds. The overhead of, for example, the system call to read the clock will be significant. If you believe that you can accurately measure such short periods, you should justify your methodology. Otherwise, repeat a few thousand times.

Also, it might be interesting to look at non-random data. Howard Hinnant (IIRC) did some great work on the libc++ sort implementation to benchmark things like sorting already-sorted, reverse-sorted, and other non-random inputs. (Sorry, I can't find a link right now.) I could imagine that non-random inputs might be quite important for heaps too.

Otherwise, this looks like useful work!

Regards, Phil.