Vladimir Prus wrote:
If I see that function whatever_sort implements an algorithm from paper N, with 200 citations, then I can assume enough people reviewed the algorithm. If that function implements an adaptation, then I'm not really sure what complexity I can expect.
...
So the question is, who is qualified to review your algorithm as an algorithm - you know, formal proof of correctness, formal complexity estimates, tests and their statistical validity, and quality relative to other existing algorithms. I would not think it's a job for Boost.
Demanding that the library be backed by a paper containing formal proof of correctness, formal complexity estimates, and 200 citations strikes me as grossly unfair. We've never done this to any library under consideration; had we done so, none of them would've passed. What we're interested in is:

- is the library useful?
- is the library high quality?
- is the author going to stick around and support it?

We do not, generally, require formal proofs for any of the above, regardless of how much innovation the library contains. Raising the acceptance criteria to absurd levels for innovative libraries treats innovation as a pollutant. We determine whether the library works by testing it, and we determine its performance by timing it. We're empiricists.

Of course, if Steven wants to provide formal proofs of correctness and complexity, that would be very nice of him; I just don't think it's fair to demand it.
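To make the empiricist point concrete, here is a minimal sketch of what "test it and time it" means in practice. It borrows the hypothetical whatever_sort from the quoted message and stubs it with std::sort so the sketch compiles on its own; a real review would plug in the candidate implementation and vary the input distributions.

    // empirical_check.cpp - correctness by comparison, performance by timing.
    // whatever_sort is the hypothetical function from the quote above;
    // stubbed here with std::sort so this sketch is self-contained.
    #include <algorithm>
    #include <chrono>
    #include <cstdlib>
    #include <iostream>
    #include <random>
    #include <vector>

    template <class Iter>
    void whatever_sort(Iter first, Iter last) { std::sort(first, last); }

    int main() {
        std::mt19937 gen(42);
        std::uniform_int_distribution<int> dist(0, 1000000);

        // Correctness: agree with std::sort on many random inputs.
        for (int trial = 0; trial < 100; ++trial) {
            std::vector<int> a(1000);
            for (int& x : a) x = dist(gen);
            std::vector<int> b = a;
            whatever_sort(a.begin(), a.end());
            std::sort(b.begin(), b.end());
            if (a != b) {
                std::cerr << "mismatch on trial " << trial << '\n';
                return EXIT_FAILURE;
            }
        }

        // Performance: wall-clock time on one large input.
        std::vector<int> big(1000000);
        for (int& x : big) x = dist(gen);
        auto t0 = std::chrono::steady_clock::now();
        whatever_sort(big.begin(), big.end());
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "sorted 1e6 ints in "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms\n";
        return EXIT_SUCCESS;
    }

No formal proof, but repeated runs over varied inputs and sizes give exactly the kind of evidence a Boost review has always relied on.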