
Zachary Turner wrote:
On Tue, Nov 17, 2009 at 10:45 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
Zachary Turner wrote:
On Mon, Nov 16, 2009 at 4:38 PM, Andreas Huber <
...
... ...
It's difficult for me to do that because I wasn't involved the last time such a review happened and it didn't go well, so I'm not in touch with all the issues. That said, certainly each person needs to review the other person's library. Did that happen the first time a parallel review was attempted?
The archives have the review process; go take a look to get a better understanding. I also got a few off-list mails about it, but they expressed the same problems you'll see in the archives.

The first problem is that the review effort scales at least combinatorially in the number of libraries, and more likely in the number of submodules in the libraries. This is because the reviewer has to do five things: understand how library 1 does a job, consider the strengths and weaknesses of that method, understand how library 2 does the same job, consider the strengths and weaknesses of that method, and compare the relative strengths and weaknesses. If the libraries do several things, the reviewer has to iterate this process for each, and finally produce some sort of overall ranking, along with reasons and, hopefully, suggestions for improvements. As a result, I don't recall a single person giving a full comparative review of both libraries. Most only had time to give a partial review of one library and didn't talk about the other at all, so the review manager couldn't see what any one person thought about both libraries.

Another problem is that the discussion threads get confusing. There are frequent switches between the two libraries, and it becomes hard to follow which comments are about which library, or in response to which previous statements. This could be solved by disciplined conversational patterns, but probably at the cost of lively discussion.

There were more problems, but this should get you started.
Maybe it is just my naivete in not being familiar with the issues from last time, but I'm having a hard time understanding why a parallel review isn't hands down the obvious choice. Or rather, I'm having a hard time understanding why knowingly going into a review of a library, with another very similar library only slightly further along in the review queue, is even an option. I can't see any possible benefit to doing reviews this way, aside from "it's logistically easier than doing it the other way," and I also can't see any downsides to a parallel review, other than "it has some issues that need to be ironed out." On the other hand, there are serious (and, more importantly, long-lasting) problems with not doing a parallel review.
Honestly, those of us who participated in the discussion that led to the joint Futures review had the same opinion at the time. It seemed like a good idea, but when we actually did it we found it did not work well at all. So far, I have no good ideas for how to make it work better, but maybe someone else does.

John
If Andrey's review is going to be first, and that's just the way it is, I can accept that -- but then delete the other library from the queue and just say there's no room for it at this time (assuming Andrey's gets accepted). How is the community served by having two virtually identical libraries? What if both of these libraries get accepted, and then 6 months later, I decide to submit YALL (yet another logging library) for review? Is it possible to have 3 logging libraries in boost? Where do we draw the line for "maximum number of virtually identical libraries allowed in boost"?
Zach
_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost