
Hi Gennadiy,

While I mostly agree with your points about the drawbacks of the current system, I don't quite agree with your proposal.

On 02/28/2010 04:24 PM, Gennadiy Rozental wrote:
> That said, here's how a better procedure might look, IMO. This will require some initial investment in writing scripts for process automation, but in the long run we should be very well compensated.
> 1. Any library author interested in submitting a new library should come to the "Candidate" page and register. Once registered, the candidate gets:
>    a) an svn repository for the library;
>    b) a standardized page on the boost website (something like boost.org/candidate/<candidate name>);
>    c) an announcement post sent automatically (with the abstract and a link to the above page) to the mailing list.
Good. Having a central place for potential Boost libraries to evolve may simplify development, although I'm not sure we have the resources to maintain this kind of hosting.
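To make the "scripts for process automation" part concrete, here's a rough sketch of what the registration step could automate. All the paths, the SMTP host and the addresses below are assumptions of mine for illustration, not anything that exists today:

import smtplib
import subprocess
from email.mime.text import MIMEText

# Hypothetical locations; none of these are real Boost infrastructure.
SVN_ROOT = "/var/svn/candidates"
SITE_ROOT = "/var/www/boost.org/candidate"
LIST_ADDR = "boost@lists.boost.org"

def register_candidate(name, author_email, abstract):
    # a) create an svn repository for the library
    subprocess.run(["svnadmin", "create", f"{SVN_ROOT}/{name}"], check=True)

    # b) generate the standardized page under boost.org/candidate/<name>
    page = (f"<h1>Candidate: {name}</h1>\n"
            f"<p>Author: {author_email}</p>\n"
            f"<p>{abstract}</p>\n")
    with open(f"{SITE_ROOT}/{name}.html", "w") as f:
        f.write(page)

    # c) send the automatic announcement with the abstract and a link
    msg = MIMEText(f"{abstract}\n\nhttp://boost.org/candidate/{name}")
    msg["Subject"] = f"[candidate] New library submission: {name}"
    msg["From"] = author_email
    msg["To"] = LIST_ADDR
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)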
> 2. The candidate page should contain the abstract and links to the sources and docs. It should also include some kind of "voting" mechanism where people can express their interest, preferably with authentication tied to mailing list membership. To qualify for review, a candidate should exceed some predefined minimum number of "supporters". These people are expected to post a review later on for the library to have a chance of being accepted.
Voting is good; I appreciated that feature on SourceForge. However, I don't think the right to vote should be tied to posting a review later. I consider voting a feedback mechanism, nothing more.

Regarding the candidate page, do you mean that the library docs should be hosted somewhere outside the Boost web site? If so, I don't like that idea. IMO, if we pursue the idea of central hosting for candidate libraries (with SVN, web access, etc.), it should include online documentation hosting, too.
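The threshold check itself, by the way, would be trivial: something along these lines, with votes keyed on an authenticated list identity so that each member counts once. The threshold value is an arbitrary example, not a proposed policy:

MIN_SUPPORTERS = 10  # arbitrary example value

def qualifies_for_review(votes, threshold=MIN_SUPPORTERS):
    # votes: dict mapping a list member's identity to True/False interest
    supporters = sum(1 for interested in votes.values() if interested)
    return supporters >= threshold

# Example: two supporters against a threshold of two.
votes = {"alice@example.com": True,
         "bob@example.com": True,
         "carol@example.com": False}
print(qualifies_for_review(votes, threshold=2))  # prints True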
> 3. Once a candidate has the required number of supporters and has passed all other formal requirements (docs, tests, directory structure), all validated against the repository, the candidate's author can schedule a review with the review schedulers (whatever the proper name is). Once a review manager is assigned, the candidate page is transformed into a "candidate review" page.
It's not clear how the page is transformed and what the result would look like. Regarding the review scheduling, it's pretty much how it happens nowadays.
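As for "validated against the repository", at least the directory structure part could be checked automatically with something as simple as the sketch below; the checklist is my guess, not an established rule:

import os

# Guessed checklist of required entries; not an official requirement list.
REQUIRED = ["doc", "test", "include", "README"]

def missing_entries(checkout_dir):
    # Return the required entries absent from a checkout of the candidate repo.
    return [entry for entry in REQUIRED
            if not os.path.exists(os.path.join(checkout_dir, entry))]

missing = missing_entries("/tmp/candidate-checkout")  # hypothetical path
if missing:
    print("Not ready for review; missing: " + ", ".join(missing))
else:
    print("Formal requirements satisfied; a review can be scheduled.")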
> 4. Review process. A candidate review can be started at any time by the review manager (no queue) and should take at least 2-4 months. There can be any number of reviews running concurrently. The "candidate review" page should include the abstract, the review package, and some kind of review submission mechanism (maybe a boolean yes/no plus the actual review). Reviews should be per person, and each reviewer should be able to modify their review. The review discussion mechanism can be web-based, rely on the mailing list, or be some mixture of the two.
I disagree on several points.

* 2-4 months is a very long period. You can't expect the review manager and the library author to stay focused on the review for that long. Also, for simple tools, such as Boost.Move, which is in the queue now, there's nothing to review for all that time. On the other hand, I agree that a few weeks may not be enough for some larger-scale libraries. Which leads me to the conclusion that the review duration should be decided individually by the author, the review manager and the review wizards, taking other reviews into account.

* Concurrent reviews are wrong. We don't have enough reviewers and wizards even for sequential reviews; allowing parallel reviews won't make it better, and the review quality will drop as well.

* The review mechanism should be convenient for both the reviewers and the author/review manager, and it should allow an easy conversation between them. The mailing list is good enough, I think.
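For what it's worth, whatever medium is chosen, the per-reviewer record you describe (a yes/no verdict plus an editable review text) is simple to model; the field names below are my own invention, just to illustrate:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Review:
    reviewer: str   # authenticated identity of the reviewer
    accept: bool    # the boolean yes/no part
    text: str       # the actual review
    updated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def revise(self, accept, text):
        # Reviewers can modify their review while the review is open.
        self.accept = accept
        self.text = text
        self.updated = datetime.now(timezone.utc)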
> 5. The review manager has the right to stop a review at any time and make a decision if there is overwhelming evidence that the library is going to be accepted/rejected.
Ok.
> 6. If there are not enough reviews within the first 2-4 months, the library is rejected due to lack of support.
Hmm, arguable, at the least. If the library made it to a review, there surely is interest in it.
> 7. If no review manager is found within a year, the library is rejected due to lack of support.
I think there are several useful libraries in the queue that fit that criterion. My Boost.Log has surely been without a review manager for longer than a year, and I can't say there's no interest in it.

For both 6 and 7, bouncing candidates away won't help the situation. And my most important objection is that your proposal doesn't change anything to solve the root problem: there are not enough people (or not enough of their free time) to manage and write full reviews. It actually makes this a bit worse.