
True worst-case performance has to be determined by identifying the worst possible input and then feeding it to the algorithm. I believe I know what the real worst-case input to this algorithm is; like the Quicksort worst case, it would take some effort to construct. On that input the algorithm takes O(n(bit_range/C - 1 + C)) time, where C is the smaller of MAX_SPLITS and (LOG_MEAN_BIN_SIZE + LOG_MIN_SPLIT_COUNT). For 32-bit integers and the default constants I provide, C is 8, so the per-element cost works out to (32/8 - 1) = 3 slow bucketing iterations plus 8 std::sort iterations. Depending on the value of LOG_CONST and the distribution, it can also give up and use std::sort, but only after n has been cut into small pieces, so that the log(n) factor is small.

What I am referring to is that I ran tests with randomness ranging from 0 to 32 bits, and I compared the worst, average, and best runtimes among those. Graphs are included in the libs/algorithm/sorting directory, linked to from the index.html. You are welcome to test it on whatever distribution or test you like. I believe the float_sort results for randomness > 23 bits are skewed by NaNs; this change is obvious on the graph.
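To make that arithmetic concrete, here is a small self-contained sketch of the calculation. The tuning values below are placeholders chosen only so that C comes out to 8 as described above; they are illustrative, not necessarily the library's actual defaults.

    #include <algorithm>
    #include <iostream>

    int main()
    {
        const int bit_range = 32;           // key width for 32-bit integers
        const int MAX_SPLITS = 8;           // placeholder tuning value
        const int LOG_MEAN_BIN_SIZE = 3;    // placeholder tuning value
        const int LOG_MIN_SPLIT_COUNT = 5;  // placeholder tuning value

        // C is the smaller of MAX_SPLITS and
        // (LOG_MEAN_BIN_SIZE + LOG_MIN_SPLIT_COUNT); here it comes out to 8.
        const int C = std::min(MAX_SPLITS,
                               LOG_MEAN_BIN_SIZE + LOG_MIN_SPLIT_COUNT);

        // Worst case is roughly O(n * (bit_range/C - 1 + C)).
        const int slow_iterations = bit_range / C - 1;  // bucketing passes: 3
        const int fallback_iterations = C;              // std::sort passes: 8
        std::cout << "per-element iteration bound: "
                  << slow_iterations + fallback_iterations << "\n";  // prints 11
    }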
On Mon, Apr 27, 2009 at 9:54 AM, David Abrahams <dave@boostpro.com> wrote:

> on Mon Apr 27 2009, Steven Ross <spreadsort-AT-gmail.com> wrote:
>> All 3 have superior worst-case and average-case performance to std::sort, based upon my testing.
> How can you determine worst-case performance by testing? Or have I misunderstood your statement?