On Fri Nov 14 2014 at 10:43:48 PM Steven Ross wrote:
On Thu Nov 13 2014 at 2:07:53 AM Vladimir Prus wrote:
What input data distribution is this? Could you run the tests with the
worst-case data distribution for your algorithm?
The input data distribution was evenly distributed 32-bit random data. The absolute worst-case distribution I've been able to design for spreadsort is binaryalrbreaker with an ALR_THRESHOLD of 3. With 32-bit integers, it has 512 elements and takes 15 microseconds for integer_sort versus 14 microseconds for std::sort. With 64-bit integers, it has 131072 elements and takes 2.8ms for integer_sort versus 1.6ms for std::sort.

If you add or subtract any elements, it is no longer the worst-case distribution: if I simply double the number of elements, 64-bit integer_sort takes 5.35ms and std::sort takes 4.56ms. I'm not sure what kind of graph would make sense for this type of situation. I'd also like to note that this distribution is specifically designed to make integer_sort slow, and cannot be scaled up without improving the relative speed of integer_sort.
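For reference, each measurement follows roughly the pattern below; the worst-case generator itself lives in example/binaryalrbreaker.cpp, so this sketch only shows the timing pattern with evenly distributed 64-bit random data, and the header path and namespace are my assumptions about the proposed library:

#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>
// Assumed header/namespace for the proposed Boost.Sort spreadsort
// library; adjust to match the sources under review.
#include <boost/sort/spreadsort/spreadsort.hpp>

int main() {
    // Evenly distributed 64-bit random data; the actual worst case is
    // produced by example/binaryalrbreaker.cpp instead.
    std::mt19937_64 gen(42);
    std::vector<std::int64_t> data(1 << 20);
    for (auto &x : data)
        x = static_cast<std::int64_t>(gen());
    std::vector<std::int64_t> copy = data;

    auto t0 = std::chrono::steady_clock::now();
    boost::sort::spreadsort::integer_sort(data.begin(), data.end());
    auto t1 = std::chrono::steady_clock::now();
    std::sort(copy.begin(), copy.end());
    auto t2 = std::chrono::steady_clock::now();

    std::printf("integer_sort elapsed time %f\n",
                std::chrono::duration<double>(t1 - t0).count());
    std::printf("std::sort elapsed time %f\n",
                std::chrono::duration<double>(t2 - t1).count());
}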
In analyzing this data to see why it was slower with integer_sort than
with std::sort, I realized that the generator was creating the input data
in reverse-sorted order. That is a case where std::sort is substantially
faster than usual (branch prediction becomes very accurate), so it was not
a worst-case comparison. I changed the code to swap 10% of the inputs with
randomly chosen other inputs to break up the reverse-sorted order, and both
integer_sort and std::sort slowed down to comparable speeds (a sketch of
this perturbation step follows the results below). I also regenerated the
data (the least significant bits are randomized) and reran the sort 10
times in a loop, on new data each time, to reduce noise. The net result is:
std::sort elapsed time 0.049344
integer_sort elapsed time 0.046429
(results in seconds)
integer_sort is 6% faster on 64-bit integers, on my Ubuntu system, for
integer_sort's worst-case distribution.
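For concreteness, the 10% perturbation step described above amounts to something like the following; the function and variable names are illustrative, not the exact code in example/binaryalrbreaker.cpp:

#include <cstddef>
#include <cstdint>
#include <random>
#include <utility>
#include <vector>

// Swap roughly 10% of the elements with randomly chosen partners so that
// std::sort no longer sees near-perfectly reverse-sorted input (which
// makes its branch prediction unusually accurate).
void perturb(std::vector<std::int64_t> &data, std::mt19937_64 &gen) {
    std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
    const std::size_t swaps = data.size() / 10;
    for (std::size_t i = 0; i < swaps; ++i)
        std::swap(data[pick(gen)], data[pick(gen)]);
}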
The code is checked into the develop branch
in example/binaryalrbreaker.cpp. From the