
Phil wrote:
Having said all that, personally I have little interest in an algorithm that has no benefit for fewer than millions of items. What do other people think about that aspect?
I forgot to mention: anyone who cares about the runtime of an application that spends a lot of time sorting large data volumes is going to run it on an up-to-date machine, which means 4, 8, 16, or 32 cores today and many more very soon. There is more than one source for a parallel_sort that should beat both std::sort and introsort by an order of magnitude on the machines I use for large data sets, and if I were really interested in sorting faster I'd be using the threaded algorithms.

I think std::sort is a straw man to compare yourself to, not because it isn't good at what it does, but because the target use case in question, very large data volumes, is better suited to a parallel algorithm. Parallelize spreadsort, compare it to the good parallel sorting algorithms, and suddenly the whole issue will become a lot more relevant and applicable to what is currently going on in the industry.

Luke