
Luke,
> That said, I feel you owe me an apology for publishing misleading benchmark results
I sent you the results off-list, long before publishing them, including a report showing where things went wrong on your side. You had months to improve things before the figures were published. Our benchmark compares 6 libraries, not only 2, and 7 algorithms, not only 1. If you found the results misleading, you could easily have objected before publication. It's surprising to read this now, and unnecessary after the acceptance of your library.

We did our comparisons in a careful and honest way. We didn't frantically hack things in to find bug X. Our website says: "We are aware of the weaknesses of performance tests and that it is hard to design objective benchmarks. So, this comparison is not considered to be objective in terms of intentions. However, we did what we can to achieve most comparable results." and "Therefore, everyone can review it and provide his critique as well as point bugs and weaknesses". So yes, we know that things can go wrong. Maybe I didn't discover option Y in library Z, which speeds things up.

We're happy to read that you'll replace your algorithm with ours. I would moderate your tone, then; otherwise you might prevent acceptance. I don't think it wise for me to react to more of your attempts to avoid that.

Regards,
Barend