
Jose wrote:
On Wed, Nov 18, 2009 at 5:35 AM, John Phillips <phillips@mps.ohio-state.edu> wrote:
...
To summarize, I am not disputing the quality of the algorithms in Polygon; the author and reviewer are both experts.
The author, the reviewers, and the review manager were also all quite conscious of the existence of GGL. This existence was not considered a show stopper in any of the reviews posted, which is where the Boost community expresses such concerns.
The community objective is to get a generic library where multiple authors can eventually contribute their algorithms, like Boost BGL or the competing CGAL. This situation is one of the cases where cooperating is justified and worthwhile for everybody.
I don't recall a single reviewer stating the ability of multiple authors to contribute algorithms as an objective for them. I may be forgetting something, so please point me to it in the archives if so. If not, then I do not consider this a community objective. Historically, I see no evidence for it as a standard Boost concern, either. Again, please correct me if I'm missing something.
In this case both authors are really involved, wrote Boostcon09 papers, and they were both committed towards a COMMON GOAL. If I look at the end of the abstract of the GTL paper presented to Boostcon I think it clearly shows what the community was aiming for:
"This paper discusses the specific needs of generic geometry programming and how these needs are met by the concepts-based type system that makes the generic API possible"
Does the community want a high quality generic geometry library? I think the answer to that is well established as yes. Is this considered by the community to be equivalent to a library where many people can contribute algorithms? I see no evidence presented that it is. So, this line of argument suggests that the discussion should be technical in nature. Is Polygon a high quality generic geometry library? This is why I keep trying to redirect you to technical matters.
Since you blame the policy and schedule, please propose a change in the policy that would prevent this from happening. When you propose it, keep in mind the factual details I have provided about when the information needed to base a decision on became available, since any so-called solution that ignores those details is useless.
The idea is:
"In cases where the Boost community is aiming for a broad library useful in multiple application domains, accepting a new library that doesn't meet the generic objectives should be driven by consensus from the different application domains represented in the review" (the actual wording should be improved, and how consensus is measured should be clarified; to me, consensus is measured by votes, but there should also be a minimum number of votes)
How are we supposed to determine whether the correct way to solve a broad problem is a single library that tries to satisfy everyone, or a few different libraries that are more focused on specific tasks? Are we going to impose some "pre-review" process where we decide whether the community considers this to be a broad library case, followed by a second step where we decide whether this case is best served by a single library or by multiple smaller libraries? How are cases of split votes decided? Who has the final say? What if the involved authors (who have actually implemented something and so know things the rest of us don't) strongly disagree with the conclusion? If consensus is measured by votes, what fraction of the votes counts as a consensus? If there is a minimum, what number meets that minimum?

In short, the questions you are asking are things every individual reviewer should already be considering. Does this library meet the standards we want for Boost? That already covers your concerns.

In the Polygon review, 4 people said no and gave their reasons for saying so. 6 people said yes, and also gave their reasons. The review manager, who is well versed in the technical issues of the library, weighed the strength of the different arguments and found the yes arguments not only more numerous, but also more persuasive than the no arguments. He proceeded to address the no arguments in the review result and explain why he did not find them persuasive. So, every member of the Boost community had the opportunity to raise the issues you have raised and support them. I personally and publicly encouraged Barend to participate fully and not to be concerned that his writing a different but related library made his opinions somehow tainted. In the course of the discussion, several comparisons between the libraries were drawn. This review wasn't conducted in a cave, but with a full understanding of what else was available at the time of the review.
I do not think it can be faulted for not knowing what would become available a month later, since even Barend didn't have (or, at least didn't share) all the details for that during the review.
The existence of another library is not a persuasive technical argument in this case, nor is the name change for the Polygon library. I have explained why above, as well as in other responses.
Exactly, I am not trying to make a technical argument! If what I wrote above is not clear, I don't have anything further to add!
Thank you for getting interested in the issues I pointed out. I don't want to go on an endless debate about this, so take what's useful (if anything) and ignore the rest of what I said. You make good technical arguments that I will not answer, because they are not the issue I'm pointing out.
Regards jose
As you have probably noticed, I am reluctant to add more formal process to the reviews. This is largely a personal philosophical point. Process should only be added when you can clearly see how it will improve what you are doing. If you can't clearly see the improvements it will bring, then adding process becomes an action for its own sake.

I have had the misfortune to sit in many University committee meetings where process for its own sake succeeded in choking off the ability to accomplish anything. (A single meeting where someone used a procedural point as a means to complain for 2 hours about who uses what parking spaces is a good, though not isolated, example. We accomplished none of the work on the agenda that meeting.)

Adding layers to the review process adds delay, produces extra work for authors (who already have plenty) and for review managers (who are already hard to recruit), and builds in places where someone intent on obstruction can do so. To accept that cost, I believe we need to see a very clear and large advantage for the review process. So far, what I see is formalizing steps to consider what all reviewers and managers should already be considering.

John