Updating the Boost Review Process (Was: [Boost] [GGL] Boost.Polygon (GTL) vs GGL - rationale)

I propose these 3 changes (each starting with the word ADD: below) to the current review policy at http://www.boost.org/community/reviews.html

I realize that there is no mention of the word "vote" in the review policy, just "review comments". See the message below for the rationale on why these may be good changes.

-----------------------------------
INTRODUCTION
...
The final "accept" or "reject" decision is made by the Review Manager, based on the review comments received from Boost mailing list members.

ADD: Consensus should be objectively measured with the votes; otherwise, the Review Manager's sole right to accept could be revoked.

...
RESULTS
At the conclusion of the comment period, the Review Manager will post a message to the mailing list saying whether the library has been accepted or rejected. A rationale is also helpful, but its extent is up to the Review Manager. If there are suggestions, or conditions that must be met before final inclusion, they should be stated.

ADD: In the case of conflicting reviews or little consensus, the Review Manager should justify why the library was or was not accepted.

NOTES FOR REVIEW MANAGERS
...
ADD: The Review Manager must not have, or have had, business ties with the library author/organization.
...

---------- Forwarded message ----------
From: Jose <jmalv04@gmail.com>
Date: Mon, Nov 16, 2009 at 12:09 PM
Subject: Re: [boost] [Boost] [GGL] Boost.Polygon (GTL) vs GGL - rationale
To: boost@lists.boost.org

Hi,

I am not an expert in geometry, but what matters is that Boost continues to provide generic libraries (if possible!) like BGL, GIL, etc. I think no blame falls on the library authors, who make a huge effort. What was broken is the review process, and some of the reasons why it was broken were:

1) The result of the review was 6 positive to 4 negative, when Boost normally aims for consensus. This is the most objective point.
Also, the issues the review manager was proposing to be fixed would not have changed the votes, as the library was considered not appropriate even for 2D geo applications.

2) The scope of GTL was changed before the review (see snippet 2 at the end), but the review docs still mention a wider application focus (see snippet 1 at the end). This confusion appeared while GTL was being reviewed.

3) The review manager didn't have time for the review (see the apology at the beginning of the review summary email).

4) The GGL library proposal, which had been iterating its design with input from Boost, in my opinion received unfair treatment in the way the schedule was managed.

5) Boost failed to set the scope for a geometry library (or libraries) and created tension among candidate library authors.

There are more points, but I don't like long emails. I propose to change the current review process: http://www.boost.org/community/reviews.html

-----------------
NOTES FOR REVIEW MANAGERS
Decides if there is consensus to accept the library, and if there are any conditions attached.
-----------------

AMENDMENT 1: Consensus should be objectively measured (with the votes); otherwise, the Review Manager's sole right to accept could be revoked.

Clearly there was no consensus on this library, and no clear discussion of whether one single library was possible (I understand your points that multiple libraries may be preferable in this case, but that is still not incompatible with a complete design discussion).

AMENDMENT 2: The Review Manager should have, or have had, no business ties with the library author/organization.

Amendment 2 is not related to this review, but it would support the current transparency.
I feel that in some cases there might be a vague prejudice in favor of big/important/known organizations getting their libraries accepted.

regards
jose

---------------------------------------------
Snippet 1, from the Boost.Polygon docs
http://svn.boost.org/svn/boost/sandbox/gtl/doc/index.htm

"These so-called Boolean algorithms are of significant interest in GIS (Geospatial Information Systems), VLSI CAD as well as other fields of CAD, and many more application areas, and providing them is the primary focus of this library."

Snippet 2, 7/19/09:

"I am changing the name of my library from GTL (Geometry Template Library) to boost::polygon and narrowing the scope from "computational geometry" to "polygon manipulation". This scope precisely describes the current scope of what is implemented. It also clarifies the position of the library relative to similar proposals."
------------------------------------------------

2009/11/16 Jose <jmalv04@gmail.com>:
I realize that there is no mention of the word "vote" in the review policy, just "review comments". See the message below for the rationale on why these may be good changes.
I strongly dislike the idea of "voting" and a correspondingly purely objective acceptance criterion, since then you have to define whether someone is permitted to vote, which is necessarily exclusive. I'd be far more interested in a 2-stage process where there's a limited review of whether something is usable, at which point it gets into the official distribution, but under a "proposed" subdirectory. These might not be the optimal implementations or interfaces, nor would they need to be at all orthogonal. A later review like the current one would then decide whether it's the correct form for the functionality, at which point it would be promoted.

On Mon, Nov 16, 2009 at 7:56 AM, Scott McMurray <me22.ca+boost@gmail.com>wrote:
2009/11/16 Jose <jmalv04@gmail.com>:
I realize that there is no mention of the word "vote" in the review policy, just "review comments". See the message below for the rationale on why these may be good changes.
I strongly dislike the idea of "voting" and a correspondingly purely objective acceptance criterion, since then you have to define whether someone is permitted to vote, which is necessarily exclusive.
I'd be far more interested in a 2-stage process where there's a limited review of whether something is usable, at which point it gets into the official distribution, but under a "proposed" subdirectory. These might not be the optimal implementations or interfaces, nor would they need to be at all orthogonal. A later review like the current one would then decide whether it's the correct form for the functionality, at which point it would be promoted.
I actually like the idea of only reviewing new libraries for inclusion in Boost at a small number of fixed dates each year. For example, Boost could adopt a policy such as: "New libraries go up for review every February and October". This would have eliminated problems like what occurred with GGL going up for review 1 week after the other library was accepted. Now we're in the unfortunate situation of either a) having 2 libraries that have massive overlap but each provide something unique, b) withdrawing a library that has already been accepted (although in reality this won't happen), or c) rejecting a library which, if compared directly against the other library, might have been preferable had users initially been asked to choose only one.

In general I'm against having too much overlap in libraries. There's the discussion of [msm] and Boost.Statechart going on right now that I feel is largely similar, although that situation is slightly different since Statechart has been around for some time. Should there be a way to deprecate old libraries when something better comes along, even if the owner of said library is still actively maintaining it? (Note that this says nothing about any 2 libraries in particular, including the one just mentioned in the previous sentence; I only used that as an example to lead into the situation of competing libraries.) Yeah, this sucks for the original library author, I completely agree, but the person we should be caring for is the end user, not the library maintainer.

Or maybe what would really be best for the community is if people knowledgeable about the subject domain could easily and freely make changes to existing libraries without having to worry about the "library ownership" model that goes on. Or maybe, if it's not their own library that they've put their blood, sweat, and tears into, they will have little to no motivation to extend / enhance it.
Either way, 10 years from now I don't want to look at Boost and see 3-5 versions of every library just because people with well-written libraries came along and wanted to offer 90% of the same functionality as what's already there, but in their own format. Perhaps one possible idea is to say that once a library for solving problems in domain X has been accepted, *that is the library, there is no other* for a set amount of time, say 3 years. At that time, there will be a process by which other competing libraries can be submitted for review, and either 1 or 0 of them will be selected to replace the existing library. The key here is organization and fairness. None of the following are fair to say to someone:

- [To a library author] Hey, your library got rejected even though it's amazing; if you had only been a week earlier... :(
- [To a user of Boost] Since we now have a library for doing X, this will be the library forever, even if significantly better libraries come along.
- [To a user of Boost] You have to choose between these 5 libraries, all of which solve the same problem completely differently. Likely you'll need to spend 3-5 hours reading the documentation of each one to figure out which one suits you best.

As long as there is communication, set expectations, and deadlines, I think the system can be completely fair to everyone. For example, had the two competing geometry library authors known a year ago that reviews for geometry libraries would happen the first week of November 2009, they could have both submitted at exactly the same time and been reviewed in parallel. This way only one (or 0) could have been chosen, and it would clearly have been the best. Then, if there were a library replacement procedure, the author who got rejected could have (at his choosing) opted to resubmit his library some time later, say 2-3 years.
But as long as this expectation is known by all parties, the original author would have the same opportunity to improve his library so that when it is compared again later it is on fair ground. Does this make sense? If I had to summarize, I'd say that the problems are:

- lack of organization of the review process
- lack of communication between library authors
- a somewhat arbitrary review process
- no fixed timelines

2009/11/16 Zachary Turner <divisortheory@gmail.com>:
In general I'm against having too much overlap in libraries.
Absolutely, in the official libraries. Having overlap in a sort of official staging area, though, would be great, as then what becomes the official library would hopefully be a synthesis of the good parts of both. I'm picturing something like unstable/testing/stable in Debian.

On Mon, Nov 16, 2009 at 4:20 PM, Zachary Turner <divisortheory@gmail.com> wrote:
This would have eliminated problems like what occurred with GGL going up for review 1 week after the other library was accepted. Now we're in the unfortunate situation of either a) having 2 libraries that have massive overlap but each provide something unique, b) withdrawing a library that has already been accepted (although in reality this won't happen), or c) rejecting a library which, if compared directly against the other library, might have been preferable had users initially been asked to choose only one. ...
Does this make sense? If I had to summarize, I'd say that the problems are: lack of organization of the review process, lack of communication between library authors, a somewhat arbitrary review process, and no fixed timelines.
The process needs to make it very easy for the authors proposing libraries, as they are doing the hard work, BUT it also has to be updated to prevent the current situation. It should also be easy on the review manager, as it is hard to find the right expert who is willing to be the review manager. Under the current policy it's up to the review manager to do whatever he wants! There is no way to question the review manager's decision. I am saying that in this case (probably for the first time) it seems the decision should be questioned.

On Mon, Nov 16, 2009 at 2:56 PM, Scott McMurray <me22.ca+boost@gmail.com> wrote:
2009/11/16 Jose <jmalv04@gmail.com>:
I realize that there is no mention of the word "vote" in the review policy, just "review comments". See the message below for the rationale on why these may be good changes.
I strongly dislike the idea of "voting" and a correspondingly purely objective acceptance criterion, since then you have to define whether someone is permitted to vote, which is necessarily exclusive.
In my specific proposal I do not say that voting should be the criterion, but the current review process asks reviewers to say:

  YES
  NO
  NO, but if x and y are provided then YES

So it's inconsistent to ask this and then completely ignore it. If the votes are clearly divided, the acceptance needs to be justified; otherwise the process is ignoring what people think (as in the Boost.Polygon review!).

Jose wrote:
I propose these 3 changes (each starting with the word ADD: below) to the current review policy at http://www.boost.org/community/reviews.html
I realize that there is no mention of the word "vote" in the review policy, just "review comments". See the message below for the rationale on why these may be good changes. ...
regards jose
I'm going to reply to this thread in two different ways, so as to try to avoid confusion between my role as a Review Wizard and my personal opinions. This reply is as a Wizard.

The review policy is always open to revision, based on the needs of the community and our understanding of what has worked well and poorly in the past. I strongly encourage a discussion of this sort, even if I don't agree with some of the suggestions. My voice is in no way final in such a discussion, but any substantive changes should include input from the Moderators and the Review Wizards, along with input from other Boost members.

The decision not to base acceptance purely on votes predates my involvement with Boost (which dates to ~2000), so I cannot provide the rationale that was given when the decision was made. However, I think it is a good idea not to turn the review manager into a vote counter. From my experience managing reviews, more important than the count of votes is the reasoning given in the reviews. The manager looks at the reasons given and tries to determine how deeply they affect the library. Even when some persuasive negative reasons are given, the manager may decide that the changes to the library needed to address the complaints are not so central that they preclude acceptance. Such a decision has to come from the manager's understanding of the library, the submitter, and the needed changes, and I do not think a formal rule for how to make such a decision would be a good idea. The manager is selected in part because the Wizards think such decisions will be made well. If we made a mistake in selecting a manager, then we will have to step in and adjust the decision, but I am not convinced we made such a mistake.

There has been talk of parallel reviews of competing libraries. This sounds like a decent idea on the surface, but it has been tried and it did not go well. In the review of the two Thread Pool libraries, I do not think anyone involved was happy with how things ran.
We discussed it as a community in advance and decided to review them together, but the reality was just not satisfying.

A better example of how competing libraries can interact well can be found in the Lambda and Phoenix libraries. Lambda is a well-constructed library that has some problems that were only well understood after broad use in the community. Phoenix addressed some of the problems in the Lambda library, but not everything. Both are available in Boost. The two development groups then started working together to incorporate all of the lessons learned from both libraries into a merger that is noticeably better than either original. However, without the intense use in the community given to both libraries, such a merger would not have been possible.

In general, many of the comments I have seen seem oriented toward stronger central control and planning for Boost. I was not part of the founding group of developers, but my understanding has been that such centralized design was deliberately not chosen. In fact, given the number of different backgrounds and specialties represented in Boost, I am not sure who could provide such a planning and control service.

John Phillips
Review Wizard

On Mon, Nov 16, 2009 at 8:30 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
I'm going to reply to this thread in two different ways, so as to try and avoid confusion between my role as a Review Wizard and my personal opinions. This reply is as a Wizard.
Thanks for getting involved!
The decision not to base acceptance purely on votes predates my involvement with Boost (which dates to ~2000), so I can not provide the rationale for it when it was made. However, I think it is a good idea not to turn the review manager into a vote counter. From my experience managing reviews, more
Just to make it clear, my proposal is not based on using votes. The idea is that if there are clearly two different overall opinions, and the NOs are not going to be reversed by the changes anyway, then the reasoning for the acceptance has to be well justified, OR the judgement of the review manager should be questioned. Otherwise, you're ignoring the NO group!

With broad libraries covering different application domains, it seems obvious that the above might happen, and it's not a question of one vs. the other but of broadening the purpose of the library (if technically possible!). The earlier Boost libraries were more fundamental and broadly useful, but some of the newer libraries are specific to application domains, not to all Boost/C++ users.
important than the count of votes is the reasoning given in the reviews. The manager looks at the reasons given and tries to determine how deeply they affect the library. Even when some persuasive negative reasons are given, the manager may decide that the changes to the library needed to address the complaints are not so central that they preclude acceptance. Such a decision has to come from the manager's understanding of the library, the submitter, and the needed changes and I do not think a formal rule for how to make such a decision would be a good idea. The manager is selected in part because the Wizards think such decisions will be made well. If we make a mistake in selecting a manager, then we will have to step in and adjust the decision but I am not convinced we made such a mistake.
Well, maybe you made a mistake! If not, then please take action and FIX the situation. Also, the review manager has to acknowledge that he has time to give a timely decision and engage all reviewers. This takes a lot of time!! If you look at this specific case, the review manager is very experienced and has given lots of advice to one author, as acknowledged in one BoostCon paper, but the other application domain has been mostly ignored. From a generic-library viewpoint you need to try to reconcile views from different application domains; you cannot just ignore half of the potential users!
There has been talk of parallel reviews of competing libraries. This sounds like a decent idea on the surface but it has been tried and it did not go well. In the review of the two Thread Pool libraries, I do not think anyone involved was happy with how things ran. We discussed it as a community in advance and decided to review them together, but the reality was just not satisfying.
Nobody is suggesting more competition. I think there are times when a lone library proposal doesn't cut the mustard, and the submitter realizes this and abandons the proposal. A different case, the current one, is when different proposals overlap, competition has been fostered, and you actually want someone experienced to guide the cooperation toward a single library.
A better example of how competing libraries interact well can be found in the Lambda and Phoenix libraries. Lambda is a well constructed library that has some problems that were only understood well after broad use in the community. Phoenix addressed some of the problems in the Lambda library, but not everything. Both are available in Boost. The two development groups then started working together to incorporate all of the lessons learned from both libraries into a merger that is noticeably better than either original. However, without the intense use in the community given to both libraries such a merger would not be possible.
As stated above, some libraries tackle fundamental problems, and multiple approaches make sense there. But imagine multiple BGL-related libraries, multiple Asio-related libraries, multiple GIL-related libraries, multiple geometry libraries... I think that would be a huge mess, and as a user I would be discouraged!
In general, many of the comments I have seen seem oriented toward a stronger central control and planning for Boost. I was not part of the founding group of developers, but my understanding has been that such centralized design was not chosen intentionally. In fact, given the number of different backgrounds and specialties represented in Boost I am not sure who could provide such a planning and control service.
I don't have a clear opinion on this. It looks like the Wizard has the most control over the process and maybe should be empowered to take some decisions on behalf of the community. If nobody owns the problem, nobody comes up with a solution.

My main point is that Boost has to make an effort to attract more libraries and make it easier for new authors to get their libraries in (if the proposal makes sense!). Sometimes I find a great C++ library and encourage the author to consider submitting it to Boost, but they see the cost (docs, examples, ...) and don't see the benefits as clearly (and the benefits are there, in terms of peer-review-driven quality, ...).

Also, it would be useful to know what libraries people think are needed in Boost. This could guide what needs to get in, rather than reviewing everything that is submitted. I think many are interested in geometry-related domains, so a FIX to the current situation is important.

regards

Jose wrote:
On Mon, Nov 16, 2009 at 8:30 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
..
Just to make it clear, my proposal is not based on using votes. The idea is that if there are clearly two different overall opinions, and the NOs are not going to be reversed by the changes anyway, then the reasoning for the acceptance has to be well justified, OR the judgement of the review manager should be questioned. Otherwise, you're ignoring the NO group!
Look at the Announcement that Polygon was accepted. Fernando addresses the specific complaints of the "No" group one at a time. That is not ignoring them, in my opinion.
With broad libraries covering different application domains, it seems obvious that the above might happen, and it's not a question of one vs the other but of broadening the purpose of the library (if technically possible!)
The problem with broadening the purpose is that sometimes it is far harder to do than is obvious before trying. (Think about how many bad cross platform GUI libraries have been written.) There are at least two valid ways to accomplish the goal. The first is to work from the top down - design for the broad purpose from the beginning and fit the pieces in. The second is to work from the bottom up - make libraries that are well suited to the pieces while only worrying about having compatible concepts, then when the pieces work well look to refactor and combine. In the Polygon review, and now in the GGL review there has been animated discussion of what those compatible concepts should be, so they are following the second route.
The earlier boost libraries were more fundamental and broadly useful but some of the newer libraries are specific to application domains, not to all boost/c++ users.
Take a look at the Review Schedule page. Libraries like uBLAS and Special Functions that are tied closely to some specific domains have been around for many years. Now look at the queue. Libraries like the Logging proposals could be useful in many different domains. It has always been a mix.
..
Well, maybe you made a mistake! If not, then please take action and FIX the situation. Also, the reviewer has to acknowledge that he has time to give a timely decision and engage all reviewers. This takes a lot of time !!
I have not excluded the possibility that we made a mistake, but I have seen no proof that we did. If no mistakes were made, then there is nothing to fix (whether or not the word is shouted). I am quite familiar with the time requirements of managing a Boost review; I've done it a few times. Though this is Fernando's first time managing a review, he has successfully submitted a couple of libraries and knows the review process as a developer well. However, he lives in the real world, where work requirements come up while you are doing other things. As a volunteer organization, we just have to accept that this happens.
If you look at the specific case, the reviewer is very experienced and has given lots of advice to one author, as acknowledged in one boostcon paper but the other application domain has been mostly ignored. From a generic library viewpoint you need to try to reconcile views that are from different applications domains, you can not just ignore half of the potential users!
A frequent piece of design advice for generic libraries is to understand a more limited case first, then expand from there. Polygon and GGL are both limited in some ways, but both are broad enough to be useful in real world code. (Both have established user bases, after all.) I do not know the technical details of the problem domain well enough to know how hard merging all the different computational geometry sub-domains into one generic library will be, but I would expect it to be very hard. In such a case, it is not wise to let the perfect become the enemy of the good.
There has been talk of parallel reviews of competing libraries. This sounds like a decent idea on the surface but it has been tried and it did not go well. In the review of the two Thread Pool libraries, I do not think anyone involved was happy with how things ran. We discussed it as a community in advance and decided to review them together, but the reality was just not satisfying.
My mistake, here. It was the competing Futures libraries, not Thread Pool. Not central, but still wrong.
Nobody is suggesting more competition. I think there are times when a lone library proposal doesn't cut the mustard, and the submitter realizes this and abandons the proposal.
A different case, the current one, is when different proposals overlap, competition has been fostered, and you actually want someone experienced to guide the cooperation toward a single library.
The Futures review was not a case of anyone abandoning anything. Anthony submitted an implementation of the proposal for addition to the standard library. He did not want to change or merge it because then it would not implement the proposal. Oliver submitted what he believed was a better library than the standard library proposal, and so also didn't want to lose what he considered important extra features. One stated goal of the joint review was to guide cooperation and a possible merger. However, we found that the review process does not do this job well.
...
As stated above, some libraries tackle fundamental problems, and multiple approaches make sense there. But imagine multiple BGL-related libraries, multiple Asio-related libraries, multiple GIL-related libraries, multiple geometry libraries... I think that would be a huge mess, and as a user I would be discouraged!
And, if they don't each offer something important and useful that is not available elsewhere, I would expect the Boost community to reject them. However, for different text-parsing tasks we have a few different libraries in Boost. They coexist exactly because they provide different things for different use cases. The implication is that there is no one correct way to parse text in all circumstances; instead there are ways that are well suited to different tasks. I do not know if an analogous situation holds for computational geometry, but I am not willing to exclude the possibility out of hand. Instead, I want to look at libraries and see what they have to offer in the context of what is already available.
...
I don't have a clear opinion on this. It looks like the Wizard has the most control over the process and maybe should be empowered to take some decisions on behalf of the community. If nobody owns the problem, nobody comes up with a solution.
The Wizards do make some decisions on behalf of the community, as do the people in the other named roles (such as the release team, the moderators, and others). However, in borderline cases the only way anyone can say that the wrong choice was made in a review is by having a deep grasp of the technical details. To say that this is the job of the Wizards is equivalent to saying that the Wizards must be people who have a deep grasp of the technical details for everything that appears in or is proposed for Boost. No such people exist, and I sure am not one. What is the other possibility? The Wizards monitor the review process to catch any egregious problems and try to solve them. This is already done, though we try to be as low profile about it as possible. When there is a subtle problem, the Wizards look at the presented technical details when they exist and request advice from experts if needed. In the absence of technical information, the Wizards engage in the conversation but have no basis for taking any action.
My main point is that Boost has to make an effort to attract more libraries and make it easier for new authors to get their libraries in (if the proposal makes sense!). Sometimes I find a great C++ library and encourage the author to consider submitting it to Boost, but they see the cost (docs, examples, ...) and don't see the benefits as clearly (and the benefits are there, in terms of peer-review-driven quality, ...).
I agree that there are many libraries that would be good additions to Boost if their authors were willing to meet our quality standards. However, I don't see how to make that easier. Producing high-quality code that is extensively tested, well documented, and provides instructive examples is just a darn lot of work. I would be against any proposal that tried to lower these standards to make it easier for new authors.

We also have the problem that the flow of incoming libraries is already outpacing the flow of reviews. This is my candidate for the biggest problem in the review process. It is happening for a few reasons. One, we don't have enough qualified review managers volunteering. Two, since producing a review takes time and work, we can't schedule many reviews close together or the reviewer response drops a lot. This also affects scheduling parallel reviews for libraries that address the same domain, since the scaling for producing a good review is worse than linear in the number of libraries. Not only are there the issues of the individual review for each, but there are also comparisons and compatibility questions.
Also, it would be useful to know what libraries people think are needed in Boost. This could guide what needs to get in, rather than reviewing everything that is submitted. I think many are interested in geometry-related domains, so a FIX to the current situation is important.
regards
The old wiki had a section for new library requests. I don't know if the new one does, but you can check just as easily as I can. This might provide a nudge to a developer who was considering producing a Boost library (if the evidence of interest were already available), but it's not like we can assign someone to make a specific library. That just isn't the way Boost is organized.

John

PS - If your goal in shouting FIX is to convince me of how important it is, you are failing. I am much more persuaded by reasoned arguments and technical details than by shouting.

Hi Jose,
The idea is that if there are clearly two different overall opinions, and the NOs are not going to be reversed by the changes anyway, then the reasoning for the acceptance has to be well justified OR the judgement of the review manager should be questioned. Otherwise, you're ignoring the NO group!
Absolutely. And this is the very reason why I took SO long in posting the review results: I had to objectively justify my decision considering each NO vote in turn. I did consider the objections VERY carefully; it's just that I didn't have the time to write down the justifications in the result because the GGL review started and it would have been a huge mess if the results from Boost.Polygon were still unknown, so I had to rush into posting the results, entirely unlike the way I planned it. But of course I should have realized that the decision itself ended up looking subjective and unjustified. Apologies for that, but believe me it was not at all like that. Naturally, you and/or each of those who voted NO are more than welcome to challenge my reasons for accepting it.
With broad libraries covering different application domains, it seems obvious that the above might happen, and it's not a question of one vs the other but of broadening the purpose of the library (if technically possible!)
The earlier Boost libraries were more fundamental and broadly useful, but some of the newer libraries are specific to particular application domains, not to all Boost/C++ users.
More important than the count of votes is the reasoning given in the reviews. The manager looks at the reasons given and tries to determine how deeply they affect the library. Even when some persuasive negative reasons are given, the manager may decide that the changes to the library needed to address the complaints are not so central that they preclude acceptance. Such a decision has to come from the manager's understanding of the library, the submitter, and the needed changes, and I do not think a formal rule for how to make such a decision would be a good idea. The manager is selected in part because the Wizards think such decisions will be made well. If we make a mistake in selecting a manager, then we will have to step in and adjust the decision, but I am not convinced we made such a mistake.
Well, maybe you made a mistake! If not, then please take action and FIX the situation. Also, the review manager has to acknowledge that he has time to give a timely decision and engage all reviewers. This takes a lot of time!
Again, I had time to do the review as carefully as it had to be done. I stopped having time after the review was finished, when I was working on explaining my reasons.
If you look at the specific case, the review manager is very experienced and has given lots of advice to one author, as acknowledged in one BoostCon paper, but the other application domain has been mostly ignored.
I was not ignoring GIS at all. If you dig out the discussions from many years back (and some not so old) you will find that I have given lots of advice to the other author as well, and very well to the point of the GIS application domain. Best -- Fernando Cacciola SciSoft Consulting, Founder http://www.scisoft-consulting.com

On Thu, Nov 19, 2009 at 12:04 AM, Fernando Cacciola <fernando.cacciola@gmail.com> wrote:
Naturally, you and/or each of those who voted NO are more than welcome to challenge my reasons for accepting it.
The reason is that in this case the broad interest of the community is a library that can satisfy multiple application domains, not a single one, and a good base is necessary. Boost.Polygon doesn't provide that base, although it may have algorithms brilliantly implemented! I am asking you to reconsider your decision and also provide your expert opinion so that GGL can be acceptable as a base into which the Polygon algorithms can be included. I know there might be a lot of work involved; just start with the minimum necessary and fold the Polygon algorithms in as a significant part of it. I think Boost wants a library that attracts further contributions, not a situation that distracts users and alienates future contributors. In Spanish they say "Rectificar es de sabios" (roughly, "to correct oneself is the mark of the wise"). regards

Jose wrote:
I propose these 3 changes (starting with the word ADD: below) to the current review policy in http://www.boost.org/community/reviews.html
I realize that there is no mention to the word "vote" in the review policy, just "review comments". See message below for rationale on why these may be good changes.
...
regards jose
As promised, a reply with my personal opinions.

I think the discretion of the review manager is an important component in the process. In some of the reviews I have managed or participated in there are examples of objections that I consider invalid. Part of the job of the manager is to understand that and weigh the review with it in mind. This is part of why the policy refers to the comments instead of to votes. If the only issue were counting votes, there would be no need for a manager at all. Anyone who wants to can look at the discussion thread and count the votes, after all.

I also don't think Boost is a good place for centralized design decisions. Our goal is to promote creative and innovative solutions from across the community and to assure certain quality standards are met by them. If Boost had decided as a group that we needed a recursive descent parser and, because there are many possibilities already in existence, that we needed to decide the allowed design before allowing a review, then it is unlikely that the design of Spirit would have been chosen. Even if it was, the later work to create Spirit 2 would then have needed further community discussion. I personally liked Spirit, and think Spirit 2 is a substantial improvement, so I think a process that makes its development and inclusion unlikely is a mistake.

As for whether the Polygon review decision was a mistake, I do not think so. One of the things I do as a Wizard is monitor review discussions. I think Fernando's result was a reasonable response to that discussion when placed in the context of the expectations for the library. It is not a complete solution to every geometry issue, but it is a strong solution to some such problems and it is an established solution for a respected collection of groups who use it. I'm not sure if I would have made the same decision if I were running the review (I'm really not sure. 
I have not spent the time thinking about the details to form a strong opinion.), but I also lack Fernando's expertise in the problem domain.

I also do not think that the inclusion of Polygon in Boost should then exclude GGL. They overlap, but do not have the same set of useful cases. However, if GGL is also included I do have a preferred future for the libraries. I would like the developers to work together to answer the question of whether a combination of the solutions that applies across both domains (and hopefully even more) is feasible. This will include looking at compile time efficiency, run time efficiency, correctness, robustness, and quality of the abstractions. My hope is that such a combination can learn important lessons from both designs and from use by the Boost community to produce something better than just the sum of the parts, similar to what has been happening with Lambda and Phoenix.

I can imagine situations where the Wizards have to step in and set aside a decision, but this is not one. (To my knowledge, this has not happened to date.) Thanks again for starting this discussion and for everyone's participation in it. John

On Mon, Nov 16, 2009 at 9:02 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
As promised, a reply with my personal opinions.
All my opinions are as a Boost library user.
I think the discretion of the review manager is an important component in the process. In some of the reviews I have managed or participated in there are examples of objections that I consider invalid. Part of the job of the manager is to understand that and weigh the review with it in mind. This is part of why the policy refers to the comments instead of to votes. If the only issue were counting votes, there would be no need for a manager at all. Anyone who wants to can look at the discussion thread and count the votes, after all.
Yes, this explains it clearly. But fuzzy criteria also lead to conflict in new situations. It would be great if you could step in and try to find a solution, since one of the problems seems to be the way the reviews were scheduled (a different approach is to ignore the problem and make it bigger than it is).
I also don't think Boost is a good place for centralized design decisions.
If I understand it correctly, in a community there are no centralized decisions, but there are roles and yours seems the most important in this situation.
Our goal is to promote creative and innovative solutions from across the community and to assure certain quality standards are met by them. If Boost decided as a group that we needed a recursive descent parser and because there are many possibilities already in existence that we needed to decide the allowed design before allowing a review, then it is unlikely that the design of Spirit would have been chosen. Even if it was, the later work to create Spirit 2 would then have needed further community discussion. I personally liked Spirit, and think Spirit 2 is a substantial improvement so I think a process that makes its development and inclusion unlikely is a mistake.
100% in agreement. But the confrontation between different libraries is something that discourages new authors. As I said before, make it easier for the one who proposes a library, as he is doing the hard work.
As for whether the Polygon review decision was a mistake, I do not think so. One of the things I do as a Wizard is monitor review discussions. I think Fernando's result was a reasonable response to that discussion when placed in the context of the expectations for the library. It is not a complete solution to every geometry issue, but it is a strong solution to some such problems and it is an established solution for a respected collection of groups who use it. I'm not sure if I would have made the same decision if I were running the review (I'm really not sure. I have not spent the time thinking about the details to form a strong opinion.), but I also lack Fernando's expertise in the problem domain.
Ok, It's a solution, maybe not the best one but I lack the in-depth expertise judge.
I also do not think that the inclusion of Polygon in Boost should then exclude GGL. They overlap, but do not have the same set of useful cases. However, if GGL is also included I do have a preferred future for the libraries. I would like the developers to work together to answer the questions of whether a combination of the solutions that applies across both domains (and hopefully even more) is feasible. This will include looking at compile time efficiency, run time efficiency, correctness, robustness, and quality of the abstractions. My hope is that such a combination can learn important lessons from both designs and from use by the Boost community to produce something better than just the sum of the parts similar to what has been happening with Lambda and Phoenix.
I can imagine situations where the Wizards have to step in and set aside a decision, but this is not one. (To my knowledge, this has not happened to date.)
Just read the reviews, and you'll see people mentioning that the confrontation is not good, how did we get into this mess, how did this happen... I know, the easiest thing is to say there is no problem so there is no need for a solution. The schedule was bad and that could have been fixed, the review could have been cancelled, ..! I understand that Boost has a close-knit community at its core and many users like me are spectators, but improving the review process is good for everybody, and other suggestions have come up, not just mine! regards

Jose wrote:
On Mon, Nov 16, 2009 at 9:02 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
...
Yes, this explains it clearly. But fuzzy criteria also leads to conflict on new situations.
Rigid criteria also lead to conflict in some situations, but they provide less flexibility to try and fix such conflict.
It would be great if you could step in above and try to find a solution since one of problems seems to be the way the reviews were scheduled (a different approach is to ignore the problem and make it bigger than it is).
Since I'm involved in this conversation, it should be obvious that I am not ignoring what is happening. However, taking action is not synonymous with doing what any one person wants.
I also don't think Boost is a good place for centralized design decisions.
If I understand it correctly, in a community there are no centralized decisions, but there are roles and yours seems the most important in this situation.
In some communities there certainly are centralized decisions - see political science for numerous examples. However, I do not agree that the role intended by the Boost community for the Review Wizards is that of central planners for what should be and not be in Boost.
... 100% in agreement. But the confrontation between different libraries is something that discourages new authors. As I said before, make it easier for the one who proposes a library, as he is doing the hard work.
I see no way to completely remove such confrontation, and the only effective ways to reduce it are more dependent on the library developers than on the review process. The list can and does encourage developers who are working on common problems to work together and develop a joint vision. Such threads are not uncommon if you look at list history and the geometry libraries have been encouraged to do this as well. However, not all developers take such well intentioned advice for any number of reasons. Everyone certainly agrees that the work of a Boost developer is hard and that reasonable steps should be taken to keep it as easy as is possible, but I am not convinced that adding layers of extra work to the review process is the way to accomplish that.
..
Ok, It's a solution, maybe not the best one but I lack the in-depth expertise judge.
I find this comment a little confusing and possibly frustrating. Maybe I'm misunderstanding it so correct me if needed. However, this reads as you saying that you lack the in depth knowledge to know if Fernando made a good decision in accepting the Polygon library. If so, why in the world have you said several times that his decision should be overturned? I would think such a statement can only be made if the person making it has clear technical reasons to back up the assertion.
...
Just read the reviews, and you'll see people mentioning that the confrontation is not good, how did we get into this mess, how did this happen... I know, the easiest thing is to say there is no problem so there is no need for a solution. The schedule was bad and that could have been fixed, the review could have been cancelled, ..!
First, as I pointed out elsewhere, I did read the reviews and there were some strong opinions both for and against the library. The basis for making a good decision in such a case is an understanding of the technical details and the use cases, combined with careful consideration. If you wish to argue that the decision was wrong, then use these as the basis of your discussion. If you make a good argument of this sort, then you might even get what you want. However, in my own experience as a review manager I can tell you that there are sometimes very strongly held opinions in reviews that are simply technically wrong. So, just having a strong opinion against the library is not a good argument to overturn the review. (This should not be read to imply that the opinions against Polygon were technically wrong. I have not put the work into the technical details to have an opinion on that.) You have stated many times that the Wizards (Ron and I) could have canceled the Polygon review. No, we could not unless we can travel backward in time. The review for Polygon ran from late August to early September. At the time of the review, it was the only geometry library that had been submitted for review. Barend had posted many times on the list about the library he was working on and it produced many lively discussions, but the library had not been submitted for review. The first contact requesting a review was an email he sent in early October. That is more than a month after the Polygon review ended. The same day, I wrote Fernando to request that he hurry with producing the review result so people could know the outcome for Polygon before the GGL review began. As he stated, producing the results was delayed by his work obligations. As a pure volunteer organization we have to understand that this will happen sometimes. 
It is already hard to get qualified review managers; imagine how much harder it would be if we required that managers never respond to changes in their work conditions and allowed them no leeway for delay. There is such a thing as too much delay, and if you check the review history you will find that I have stepped in in the past and taken over managing in such a case. It is quite possible that I will be doing so again, soon, unfortunately. This is a major time sink for me, but it is also part of my role. So, the real choices would be to not allow a review result after the review was completed and the work done, or to refuse to schedule the Polygon review because there was the possibility that GGL would someday be submitted for review. Both of these strike me as far worse choices than what has happened so far.
I understand that Boost has a close-knit community at its core and many users like me are spectators, but improving the review process is good for everybody, and other suggestions have come up, not just mine!
regards
I entirely agree that the process can be improved and that looking for improvements should be a constant goal. However, the review is central to what Boost is and how it works, so all ideas for changing it should be subjected to very careful (almost ruthless) scrutiny. There is nothing personal in this, it is just something I think everyone in the community should do in such a case. As for changing the process - I do not think that Ron and I have the authority to do so unilaterally. It should require a broad consensus across the Boost community. This should especially include input from developers who have been through the review process as submitters and as review managers. John

On Tue, Nov 17, 2009 at 5:11 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
Ok, It's a solution, maybe not the best one but I lack the in-depth expertise judge.
I find this comment a little confusing and possibly frustrating. Maybe I'm misunderstanding it so correct me if needed. However, this reads as you saying that you lack the in depth knowledge to know if Fernando made a good decision in accepting the Polygon library. If so, why in the world have you said several times that his decision should be overturned? I would think such a statement can only be made if the person making it has clear technical reasons to back up the assertion.
I missed the "to" in "expertise TO judge". I mean the in-depth technical experience, which is fundamentally important. On the other side, I gave 5 reasons, and could give more, why the review was flawed (and some people who voted yes to GTL added further comments). The reasons are in the separate thread GTL vs GGL - rationale. I questioned the whole planning of the review and the fact that a combined library should be possible (and the GGL authors actually wanted to make it possible). Also, if you check my replies to Luke, the scope of GTL and the name were changed just before the review, which is ok in general but not ok given that a broader library, with great overlap, would be reviewed soon after the first review. Zachary Turner's answer at the beginning of this thread summarizes it nicely: ----------------------------------------------------------------------------------------- Now we are in the unfortunate situation of either a) having 2 libraries that have massive overlap but each providing something unique, b) withdrawing a library that has already been accepted (although in reality this won't happen), or c) rejecting a library which, if compared directly against the other library, may have been preferable if users had initially been asked to choose only one. ------------------------------------------------------------------------------------------ And I agree with him that "in reality this won't happen", but I think it should happen, because it sets a really bad precedent, and I only blame the review policy and the schedule.
First, as I pointed out elsewhere, I did read the reviews and there were some strong opinions both for and against the library. The basis for making a good decision in such a case is an understanding of the technical details and the use cases, combined with careful consideration. If you wish to argue that the decision was wrong, then use these as the basis of your discussion. If you make a good argument of this sort, then you might even get what you want.
Well, I think the points are above, and there are technical issues, but fundamentally it boils down to a process flaw. Both library authors and Fernando are really experienced in their domains and I am not questioning that. I am saying that this is a broad field like Graphs, Networking or Graphics and does require some coordination, especially when multiple authors want to contribute, but it didn't happen!
However, in my own experience as a review manager I can tell you that there are sometimes very strongly held opinions in reviews that are simply technically wrong. So, just having a strong opinion against the library is not a good argument to overturn the review. (This should not be read to imply that the opinions against Polygon were technically wrong. I have not put the work into the technical details to have an opinion on that.)
Sure, but I don't think this is a list where people are fooled by technically wrong arguments. One piece of evidence in reviews is benchmarks: you publish them and publish the code to run them. The benchmarks could still be flawed if nobody cares to check and run them, but that's better than mere statements about what a library does. Another key piece of evidence is code examples, so you can understand the application domain, what the library does and how it does it. In my own review of GGL I pointed out that the library's major design weakness was its handling of the robustness issues.
You have stated many times that the Wizards (Ron and I) could have canceled the Polygon review. No, we could not unless we can travel backward in time. The review for Polygon ran from late August to early September. At the time of the review, it was the only geometry library that had been submitted for review. Barend had posted many times on the list about the library he was working on and it produced many lively discussions, but the library had not been submitted for review.
I don't want to be unfair with my comments; they are not specific to the Wizards but to a process flaw. My argument is that Boost aims for well designed generic libraries (among other things), and there are at least two competent/expert authors in their respective application domains who want to propose a library, and for several years they have been advancing/iterating with more or less input from the community but still as separate libraries (everything is ok at this point, although it probably would have been better to cooperate - in this case), and then: the generic library completely changes its scope (reduces to specific algorithms), has a non-consensus review and is accepted (and this would also be ok if there weren't an involvement by the community to achieve a generic library that covers 2D geometry and that can probably incorporate all the algorithms). This is what's not logical or good!
I see three types of libraries:
1) Technically superior or high quality solutions that provide a specific benefit - this includes early Boost libraries that even end up contributing to the standard library
2) Libraries where multiple approaches make sense - this applies to some language paradigms (Lambda-Phoenix, Spirit-Spirit 2, ...)
3) Generic libraries useful across multiple application domains (the current case):
Graphs - BGL
Networking - asio
Images - GIL
Geometry - GGL
Goals: generality, performance, flexibility, extensibility to multiple application domains, compatibility
The important point for most libraries in the third group is to actually have a setup where people can contribute algorithms and the library can evolve. It's also key to look at competing libraries! CGAL, which is focused on computational geometry, and which Fernando knows well, ends its philosophy page with this text, which is interesting. 
http://www.cgal.org/philosophy.html -------------------------------------------------- Beyond Robustness Let us conclude by pointing out that guaranteed robustness is not the only (but probably the most important) aspect in which CGAL makes a difference. Another major feature of CGAL is its flexibility. CGAL closely follows the generic programming approach of the C++ Standard Template Library. This for example means that you can feed most CGAL algorithms with your own data: instead of converting them to some CGAL format, you can adapt the algorithm to work directly with your data. Last but not least, CGAL's range of functionality is by now very large, and it's still growing. CGAL offers solutions for almost all basic (and a lot of advanced) problems in computational geometry. CGAL is an open source project with a large number of developers (eventually, you might become one of them). Many of us have tight connections to computational geometry research, and to application domains that involve geometric computing. It is this combination of expertise that we believe makes CGAL unique.

Jose wrote:
On Tue, Nov 17, 2009 at 5:11 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
Ok, It's a solution, maybe not the best one but I lack the in-depth expertise judge.
...
I missed the to in "expertise TO judge". I mean in technical in-depth experience which is fundamentally important. On the other side, I gave 5 reasons and could give more why the review was flawed (and some people that voted yes to GTL added further comments). The reasons are in the separate thread GTL vs GGL - rationale. I questioned the whole planning of the review and the fact that a combined library should be possible (and the GGL authors actually wanted to make it possible).
I did not concern myself with the typo. I am concerned that you want the review result for a library overturned when you then claim you don't have the expertise to judge whether that is the best decision. In such a case, I think a more appropriate statement is to express your concern about the process without insisting on overturning a result because the manager did something wrong. For your 5 listed reasons in the other thread - yes, I read them. As I have pointed out several times, I try to read everything that applies to the review process on the list. However, your base concern seems to be that 60% of the votes supporting the library should not be enough to justify acceptance, even when the manager explains the reasons in the review report. (Please notice that the reasons presented discuss real technical issues and include enough detail to follow the ideas. This is a good thing in a technical conversation.) Luke has replied to your post in that thread, as well.
Also, if you check my replies to Luc, the scope of GTL and the name were changed just before the review, which is ok in general but not ok given that a broader library, with great overlap would be reviewed soon after the first review.
This contains several factual inaccuracies. First you claim that the name and scope changed "just before the review." This is not true. According to the gmane archives, Luke sent a message to the list on June 19th informing the list that the name was changed, and that the reason for the name change was that it better reflected the true scope of the library. He had originally hoped to produce a broader library, but the library he actually produced fit this name better. This was 4 days before he requested a review, and 6 days before the start date was selected. The review started more than 2 months after this name change. As I have stated in other replies in this thread, the request for a review for GGL happened more than a month after the review period for Polygon ended. Shy of impressive tarot skills, there was no way for Luke, Fernando, the Wizards, or anyone else to know what the review schedule would be for GGL while the Polygon review was ongoing. So, as far as I can see, all of your arguments about time sequencing are factually incorrect and so not persuasive in the least.
Zachary Turner answer at the beginning of this thread summarizes it nicely
----------------------------------------------------------------------------------------- Now we are in the unfortunate situation of either a) having 2 libraries that have massive overlap but each providing something unique, b) withdrawing a library that has already been accepted (although in reality this won't happen), or c) rejecting a library which, if compared directly against the other library may have been preferable if users had initially been asked to choose only one. ------------------------------------------------------------------------------------------
I agree that it is a less than ideal situation. Ideal would be to have perfect information about past and future, and also to always know the right scales of application for the abstractions we use to design concepts and code. However since we live in the real world, this is not available. We have to base decisions on the information available. During the Polygon review, the now known fact that GGL would be submitted soon was not available. So, it was not possible to plan based on it. Further, at the time of the Polygon review, Barend and team were working on their code and did not have the latest version ready for scrutiny. After the review period was done he told Phil that they hoped to have something to show in October. (In an odd case for software deadlines, they even did.) Prior to that, no dates were given that I can find or recall. So, comparing and choosing only one was not feasible. Comparing to other available facilities (such as CGAL) was done to some extent, and would be quite reasonable to an even greater extent. However, such comparisons take a lot of time and no reviewer felt driven to do one in depth. Now that the Polygon review is done, and the GGL review begun some comparisons between them are possible. This is an obvious part of the GGL review. So reviewers should be asking whether GGL adds enough to Boost to justify having it, as well. I have already outlined my personal hope for the longer term future if both libraries are in Boost, but I want to add a little to it. If the two libraries are incompatible in some ways, then the Boost user base will help determine which concepts and methods are to be preferred. They will do this by using the library that works better and provides more value for less work. 
This is the experienced guidance needed to produce a later joint library, and it has the advantage that the choices made are already known to work for coders in the real world, unlike trying to fully describe a large and complicated domain, full of abstractions and concepts, before producing the library.
And I agree with him on the "(although in reality this won't happen)" and I think it should happen, because it sets a really bad precedent and I only blame the review policy and the schedule.
Since you blame the policy and schedule, please provide a proposed change in the policy that would prevent this from happening. When you provide it, remember the factual details I have given about when the information to base a decision on became available, since any so-called solution that ignores these details is useless.
...
Well, I think the points are above and there are technical issues but fundamentally it boils down to a process flaw.
Sorry, I have seen where others talk about technical issues, but very little of it from your posts. Please direct me to where you detail them.
Both library authors and Fernando are really experienced in their domain and I am not questioning that. I am saying that this is a broad field like Graphs, Networking or Graphics and does require some coordination, especially when multiple authors want to contribute, but it didn't happen!
If you believe Boost should have a process to require cooperation between different groups working on some problem domains, then please propose such a process to the list. Then, the other members of Boost can look at the details of a real proposal and decide if that works for them. Especially try to get input from the Moderators and from authors of already reviewed libraries, since they have the most useful information for such questions. If they don't care enough to get involved, then your proposal is unlikely to go anywhere. Think of it as voting by apathy that they are satisfied with the status quo.
However, in my own experience as a review manager I can tell you that there are sometimes very strongly held opinions in reviews that are simply technically wrong. So, just having a strong opinion against the library is not a good argument to overturn the review. (This should not be read to imply that the opinions against Polygon were technically wrong. I have not put the work into the technical details to have an opinion on that.)
Sure, but I don't think this is a list where people are fooled by technically wrong arguments. One piece of evidence in reviews is benchmarks: you publish them and publish the code to run them. The benchmark could still be flawed if nobody cares to check and run it, but it's better than bare statements about what a library does. Another key piece of evidence is code examples, so you can understand the application domain, what the library does and how it does it.
I come from a scientific computing background, so numerical methods and their pitfalls are very familiar territory for me. I rarely write GUIs, so the issues there are not familiar. I have seen Boost members who are very good at what they do be misled by incorrect arguments about numerics. I'm sure I could be misled by incorrect arguments about tricky subjects I'm not familiar with. The range of Boost is gigantic, and all of us have holes in our understanding, even the very best of us. Anyone can be fooled by technically wrong arguments, so the voices of experts in a domain really should count for more, especially if you don't know the details yourself. Asking for a clear explanation from the experts is a good idea, but all else being equal the smart money bets on the expert.

Then we have questions like benchmarks and other pieces of evidence. What evidence is important varies by domain. In a geometry library intended to process large sets of polygons, benchmarks and scaling along with accuracy are very important. However, in some applications pure speed is so important that users will happily give up accuracy to get it. In other applications, the desired trade-off is exactly the opposite. High speed with inaccurate results could be disastrous in some of Luke's applications, even though as fast as possible is still the goal. So, we need to know about the problem domain to even decide what evidence matters. This is part of why benchmarks are welcome in Boost documentation, but in general are not required.

Code examples are just a part of good documentation, and so are required. However, I think it is naive to believe you can understand the application domain from code examples. I could produce hundreds of code examples of using statistical tests on data, but you still would not know the limitations on proper application of such tests after seeing them. You would know how to add the tests to your own code, but not how to interpret the results or whether you are applying the right test for your situation.
...
I don't want to be unfair with my comments, and they are not specific to the Wizards but to a process flaw. My argument is that Boost aims for well designed generic libraries (among other things). There are at least two competent/expert authors in their respective application domains who want to propose a library, and for several years they have been advancing/iterating with more or less input from the community but still as separate libraries (everything is ok at this point, although it probably would have been better to cooperate in this case), and then:
- the generic library completely changes the scope (reduces to specific algorithms), has a non-consensus review and is accepted (and this would also be ok if there hadn't been community involvement aimed at achieving a generic library that covers 2D geometry and that can probably incorporate all the algorithms). This is what's not logical or good!
You keep saying it was non-consensus. How many yes votes does it take to count as consensus for you? I'm from the US, and 60% is called a supermajority in our politics and is enough to override even the strongest opposition. (I have no idea where you are from, and choose not to assume any location for you.) If the results need to be unanimous, then we should be overturning most Boost reviews. Instead, I prefer to trust in the judgment of the review manager when there is contention. If you wish to show that the result of the review was incorrect and there are show stopper issues with the Polygon library that make it unsuitable, please provide those focused technical arguments so the members of Boost can weigh them on their merits. However, what I see so far is a collection of unstructured emotional appeals that include gaps and factual inconsistencies. I still see no reason to overturn the decision of the manager. The existence of another library is not a persuasive technical argument in this case, nor is the name change for the Polygon library. I have explained why above, as well as in other responses.
I see three types of libraries:
1) Technically superior or high quality solutions that provide a specific benefit - This includes early Boost libraries that even end up contributing to the standard library
Futures is contributing to the standard library, and it is a recent library. Hopefully, we have not run out of possible standard library ideas.
2) Multiple approaches make sense - This makes sense for some language paradigms (Lambda-Phoenix, Spirit-Spirit II, ...)
This seems like an artificial category that exists only so you can say it is different from 3). How do you know that different approaches make sense for Lambda/Phoenix (which, as I pointed out, are merging to become one approach that carries the benefits of both earlier approaches) but not for Polygon/GGL? What is the technical and design based difference that lets you make this distinction? By the way, Spirit 2 is the successor and replacement for Spirit 1, not a separate and parallel approach. Some legacy code is expected to keep using Spirit 1, but I believe the suggestion of the Spirit developers would be to prefer Spirit 2.
3) Generic libraries useful across multiple application domains (the current case)
....
Graphs - BGL
Networking - asio
Images - GIL
Geometry - GGL
....
All of Boost strives to be generic libraries useful across multiple application domains, so this also seems like a poor abstraction.
Goals: generality, performance, flexibility, extensibility to multiple application domains, compatibility
The important point for most libraries in the third group is to actually have a setup where people can contribute algorithms and the library can evolve. It's also key to look at competing libraries!!
Yes, a willingness to accept useful input from others is good. However, Boost has never required this, and some developers have been almost unresponsive when offered outside assistance with providing things like new algorithms. So, I don't think it is an important point for Boost, so far. Looking at other implementations of the same ideas is always a good idea, especially during reviews. In the case of the Polygon review, some of this was done. If you felt more should have been done, you were quite welcome to discuss it during the review. The discussion was lively, and I saw no examples of useful comments being ignored. However, that review completed more than 2 months ago, so we can't go back in time and add new discussions. Therefore other implementations are pertinent as a way to suggest improvements on the accepted library (and I'm sure Luke would be happy to talk to you about ways to make his library better, though that does not mean he will just do whatever you say), or because they clarify a technical point that shows an unacceptable flaw in the library that can't be readily fixed.
CGAL, which is focused on computational geometry and which Fernando knows well, ends its philosophy page with this interesting text.
http://www.cgal.org/philosophy.html
--------------------------------------------------
Beyond Robustness
Let us conclude by pointing out that guaranteed robustness is not the only (but probably the most important) aspect in which CGAL makes a difference. Another major feature of CGAL is its flexibility. CGAL closely follows the generic programming approach of the C++ Standard Template Library. This for example means that you can feed most CGAL algorithms with your own data: instead of converting them to some CGAL format, you can adapt the algorithm to work directly with your data.
Last but not least, CGAL's range of functionality is by now very large, and it's still growing. CGAL offers solutions for almost all basic (and a lot of advanced) problems in computational geometry. CGAL is an open source project with a large number of developers (eventually, you might become one of them). Many of us have tight connections to computational geometry research, and to application domains that involve geometric computing. It is this combination of expertise that we believe makes CGAL unique.
I'm not sure what your goal is, here. Yes, CGAL strives to be a very good geometry library. The team wants it to be generic, broad and robust. However, a review where many people were quite conscious of CGAL came to the conclusion that Polygon was a worthy addition to Boost. How does this philosophy text matter to that? John

On Wed, Nov 18, 2009 at 5:35 AM, John Phillips <phillips@mps.ohio-state.edu> wrote:
I did not concern myself with the typo. I am concerned that you want the review result for a library overturned when you then claim you don't have the expertise to judge whether that is the best decision. In such a case, I think a more appropriate statement is to express your concern about the process without insisting on overturning a result because the manager did something wrong.
As a summary, I don't argue about the quality of the algorithms in Polygon; the author and reviewer are both experts. The community objective is to get a generic library where multiple authors can eventually contribute their algorithms, like Boost BGL or the competing CGAL. This situation is one of the cases where cooperating is justified and worthwhile for everybody. In this case both authors are really involved, wrote Boostcon09 papers, and they were both committed to a COMMON GOAL. If I look at the end of the abstract of the GTL paper presented at Boostcon, I think it clearly shows what the community was aiming for: "This paper discusses the specific needs of generic geometry programming and how these needs are met by the concepts-based type system that makes the generic API possible"
Since you blame the policy and schedule, please provide a proposed change in the policy that would prevent this from happening. When you provide it, remember the factual details I have given about when the information to base a decision on became available, since any so-called solution that ignores these details is useless.
The idea is: "In cases where the Boost community is aiming for a broad library useful in multiple application domains, accepting a new library that doesn't meet the generic objectives should be driven by consensus from the different application domains represented in the review" (the actual wording should be better and how consensus is measured should be clarified, to me consensus is measured by votes but there has to be a minimum number of votes also)
The existence of another library is not a persuasive technical argument in this case, nor is the name change for the Polygon library. I have explained why above, as well as in other responses.
Exactly, I am not trying to make a technical argument! If what I wrote above is not clear, I don't have further to add! Thank you for getting interested in the issues I pointed out. I don't want to go on an endless debate about this, so take what's useful (if anything) and ignore the rest. You make good technical arguments that I will not answer b/c that is not the issue I'm pointing out. Regards jose

Jose wrote:
On Wed, Nov 18, 2009 at 5:35 AM, John Phillips <phillips@mps.ohio-state.edu> wrote:
...
As a summary, I don't argue about the quality of the algorithms in Polygon, the author and reviewer are both experts.
The author, the reviewers, and the review manager were also all quite conscious of the existence of GGL. This existence was not considered a show stopper in any of the reviews posted, which is where the Boost community expresses such concerns.
The community objective is to get a generic library where multiple authors can eventually contribute their algorithms, like Boost BGL or the competing CGAL. This situation is one of the cases where cooperating is justified and worthwhile for everybody.
I don't recall a single reviewer stating the ability of multiple authors to contribute algorithms as an objective for them. I may be forgetting something, so please point me to it in the archives if so. If not, then I do not consider this a community objective. Historically, I see no evidence for it as a standard Boost concern, either. Again, please correct me if I'm missing something.
In this case both authors are really involved, wrote Boostcon09 papers, and they were both committed towards a COMMON GOAL. If I look at the end of the abstract of the GTL paper presented to Boostcon I think it clearly shows what the community was aiming for:
"This paper discusses the specific needs of generic geometry programming and how these needs are met by the concepts-based type system that makes the generic API possible"
Does the community want a high quality generic geometry library? I think the answer to that is well established as yes. Is this considered by the community to be equivalent to a library where many people can contribute algorithms? I see no evidence presented that it is. So, this line of argument suggests that the discussion should be technical in nature. Is Polygon a high quality generic geometry library? This is why I keep trying to redirect you to technical matters.
Since you blame the policy and schedule, please provide a proposed change in the policy that would prevent this from happening. When you provide it, remember the factual details I have given about when the information to base a decision on became available, since any so-called solution that ignores these details is useless.
The idea is:
"In cases where the Boost community is aiming for a broad library useful in multiple application domains, accepting a new library that doesn't meet the generic objectives should be driven by consensus from the different application domains represented in the review" (the actual wording should be better and how consensus is measured should be clarified, to me consensus is measured by votes but there has to be a minimum number of votes also)
How are we supposed to determine whether the correct way to solve a broad problem is a single library that tries to satisfy everyone, or a few different libraries that are more focused on specific tasks? Are we going to impose some "pre-review" process where we decide whether the community considers this to be a broad library case, and then, if we do, a second step where we decide whether this case is best served by a single library or by multiple smaller libraries? How are cases of split votes decided? Who has the final say? What if the involved authors (who have actually implemented something and so know things the rest of us don't) strongly disagree with the conclusion? If consensus is measured by votes, what fraction of the votes counts as a consensus? If there is a minimum, what number meets that minimum?

In short, the questions you are asking are things every individual reviewer should already be considering. Does this library meet the standards we want for Boost? That already covers your concerns.

In the Polygon review, 4 people said no and gave their reasons for saying so. 6 people said yes, and also gave their reasons. The review manager, who is well versed in the technical issues of the library, weighed the strength of the different arguments and found the yes arguments not only more numerous, but also more persuasive than the no arguments. He proceeded to address the no arguments in the review result and explain why he did not find them persuasive. So, every member of the Boost community had the opportunity to raise the issues you have and support them. I personally and publicly encouraged Barend to participate fully and not to be concerned that his writing a different but related library made his opinions somehow tainted. In the course of the discussion, several comparisons between the libraries were drawn. This review wasn't conducted in a cave, but with a full understanding of what else was available at the time of the review. I do not think it can be faulted for not knowing what would become available a month later, since even Barend didn't have (or, at least, didn't share) all the details for that during the review.
The existence of another library is not a persuasive technical argument in this case, nor is the name change for the Polygon library. I have explained why above, as well as in other responses.
Exactly, I am not trying to make a technical argument! If what I wrote above is not clear, I don't have further to add!
Thank you for getting interested in the issues I pointed out. I don't want to go on an endless debate about this, so take what's useful (if anything) and ignore the rest. You make good technical arguments that I will not answer b/c that is not the issue I'm pointing out.
Regards jose
As you have probably noticed, I am reluctant to add more formal process to the reviews. This is largely a personal philosophical point. Process should only be added when you can clearly see how it will improve what you are doing. If you can't clearly see the improvements it will bring, then adding process becomes an action for its own sake. I have had the misfortune to sit in many University committee meetings where process for its own sake succeeded in choking off the ability to accomplish anything. (A single meeting where someone used a procedural point as a means to complain about who uses what parking spaces for 2 hours is a good, though not isolated, example. We accomplished none of the work on the agenda that meeting.)

Adding layers to the review process adds delay, produces extra work for authors (who already have plenty) and for review managers (who are already hard to recruit), and builds in places where someone intent on obstruction can do so. To accept that cost, I believe we need to see a very clear and large advantage for the review process. So far, what I see is formalizing steps to consider what all reviewers and managers should already be considering.

John

On Wed, Nov 18, 2009 at 5:09 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
The author, the reviewers, and the review manager were also all quite conscious of the existence of GGL. This existence was not considered a show stopper in any of the reviews posted, which is where the Boost community expresses such concerns.
Obviously! A paper, a presentation and multiple iterations had been produced and discussion ensued. It is obvious that the GTL author and the reviewer had close ties, as clearly acknowledged in the GTL paper. I assume these were just email discussions, but please confirm there were no business ties, just to be 100% clear!!! In these non-technical issues (some call it politics) some people have a confrontational attitude and others don't; it varies by culture. If you really want candid opinions then you should ask in private. Again, if you don't foresee a problem then you will not ask in the first place. In the same vein, I have received private email supporting what I was doing! Also, it's hard to evaluate big company-sponsored libraries. My Boost experience has been that the authors have it as one of their work objectives, and then when they don't, the library stalls. But this is another topic ...
I don't recall a single reviewer stating the ability of multiple authors to contribute algorithms as an objective for them. I may be forgetting something, so please point me to it in the archives if so. If not, then I do not consider this a community objective. Historically, I see no evidence for it as a standard Boost concern, either. Again, please correct me if I'm missing something.
This one came this morning (it's in the thread started by my GGL review), but if you look carefully there are more. Also check reviews for broad libraries, like GIL, where even the authors had that as an objective!! It's logical that for a broad field you cannot expect a single person to provide all the algorithms.
-------------------------------------------------------
Hi Jon, I have a slightly unconventional spatial index that I would like one day to submit to Boost; I was hoping that we would have a Boost geometry framework that I could port it to, and as you may have noticed I'm unhappy that we instead have two incompatible ones... ....
-------------------------------------------------------
Does the community want a high quality generic geometry library? I think the answer to that is well established as yes. Is this considered by the community to be equivalent to a library where many people can contribute algorithms? I see no evidence presented that it is.
You have some advice above on how to gather this info, and one piece of evidence from an email comment. I apologize if some of my comments were harsh towards the wizard. This is probably not fair. I think this discussion is largely irrelevant now. Long emails make it very hard for people to join and give their opinion. Someone should get the private opinions of different reviewers/authors, reach a conclusion, and act on it. I was reading your comments before and I felt you thought there was not a problem, but I continued reading and I feel you acknowledge we have a problem and we only need to find a solution. If you acknowledge there is a PROBLEM I can give you my comments offline to work towards a SOLUTION.
How are we supposed to determine whether the correct way to solve a broad problem is a single library that tries to satisfy everyone, or a few different libraries that are more focused on specific tasks? Are we going to impose some "pre-review" process where we decide whether the community considers this to be a broad library case, then if we do a second step where we decide whether this case is best served by a single library or by multiple smaller libraries? How are cases of split votes decided? Who has the final say? What if the involved authors (who have actually implemented something and so know things the rest of us don't) strongly disagree with the conclusion? If consensus is measured by votes, what fraction of the votes counts as a consensus? If there is a minimum, what number meets that minimum?
In short, the questions you are asking are things every individual reviewer should already be considering. Does this library meet the standards we want for Boost? That already covers your concerns.
e.g. see comment from one user above !!
In the Polygon review, 4 people said no and gave their reasons for saying so. 6 people said yes, and also gave their reasons. The review manager, who is well versed in the technical issues of the library weighed the strength of the different arguments and found the yes arguments not only more numerous, but also more persuasive than the no arguments. He proceeded to address the no arguments in the review result and explain why he did not find them persuasive. So, every member of the Boost community had the opportunity to raise the issues you have and support them.
Again, yes, the reviewer is an expert in the field but not in the other application domain (GIS) that was of interest to reviewers. Otherwise the feedback would not have been so directed at one of the libraries versus the other.
The existence of another library is not a persuasive technical argument in this case, nor is the name change for the Polygon library. I have explained why above, as well as in other responses.
Check Phil Endecott's comment above and his polygon review, his arguments are quite clear!

Jose wrote:
It is obvious that the GTL author and the reviewer had close ties, as clearly acknowledged in the GTL paper. I assume these were just email discussions but please confirm there were no business ties, just to be 100% clear !!! ... Again, yes, the reviewer is an expert in the field but not in the other application domain (GIS) that was of interest to reviewers. Otherwise the feedback would not have been so directed to one of the libraries vs the other.
To be 100% clear, I find your behavior inappropriate and unprofessional. I acknowledged Fernando in my GTL paper because he helped edit the paper and for the advice he gave on this list about design and implementation. I know of no business ties between Fernando and Intel. My relationship with Fernando is limited to the public email discussions on this list and private email discussions about publishing my work and the review of my library. My understanding is that Fernando has experience in both CAD and GIS. Your accusations do a disservice to both yourself and the people who put in volunteer effort to make boost great. It is very rare for this list to see the kind of thoughtless behavior that is so common on the internet, and which you are now exhibiting. Fernando's reputation is his livelihood. I don't know what your reputation is worth to you, but if you won't think of others, think of how your behavior reflects upon yourself. Regards, Luke

On Wed, Nov 18, 2009 at 7:57 PM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
Again, yes, the reviewer is an expert in the field but not in the other application domain (GIS) that was of interest to reviewers. Otherwise the feedback would not have been so directed to one of the libraries vs the other.
To be 100% clear I find your behavior inappropriate and unprofessional.
Luc, thank you for clarifying and I apologize if you feel that way. I was surprised about all this feedback given to one library and not the other one, so that is the reason why I asked
I acknowledged Fernando in my GTL paper because he helped edit the paper and for the advice he gave on this list about design and implementation. I know of no business ties between Fernando and Intel. My relationship with Fernando is limited to the public email discussions on this list and private email discussions about publishing my work and the review of my library. My understanding is that Fernando has experience in both CAD and GIS. Your accusations do a disservice to both yourself and the people who put in volunteer effort to make boost great.
No intention to make an accusation; as an external observer I saw much of the feedback going to one library and not the other when both were in the same field. I apologize again. I should have been more careful in wording this. My fault! regards

On Wed, Nov 18, 2009 at 8:06 PM, Jose <jmalv04@gmail.com> wrote:
Luc, thank you for clarifying and I apologize if you feel that way. I was surprised about all this feedback given to one library and not the other one, so that is the reason why I asked
Luc, let me add that I have apologized directly to Fernando. I also wish he could have followed these discussions! regards

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Jose Sent: Wednesday, November 18, 2009 5:54 PM To: boost@lists.boost.org Subject: Re: [boost] Updating the Boost Review Process Was: [Boost] [GGL] Bost.Polygon (GTL) vs GGL - rationale
It is obvious that the GTL author and the reviewer had close ties, as clearly acknowledged in the GTL paper. I assume these were just email discussions but please confirm there were no business ties, just to be 100% clear !!!
I believe it is unrealistic to bar a reviewer who has business ties with the submitter - even if they are from the same company. We are short enough of qualified reviewers as it is. What we do need is total transparency, and, of course, confidence that the reviewer is expert enough. Although the decisions were and are more than usually difficult, I have full confidence in the decisions of the wizard, the choice of the reviewer, the review, and the decision. So I don't think there is a problem. Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Hi Jose,
On Wed, Nov 18, 2009 at 5:09 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
The author, the reviewers, and the review manager were also all quite conscious of the existence of GGL. This existence was not considered a show stopper in any of the reviews posted, which is where the Boost community expresses such concerns.
Obviously! A paper, a presentation and multiple iterations had been produced and discussion ensued.
It is obvious that the GTL author and the reviewer had close ties
It might be implied, but it is definitely not obvious, and more importantly, totally incorrect. So, for the record, I have absolutely no close ties, of any nature, with either Intel or Luke.
as clearly acknowledged in the GTL paper. I assume these were just email discussions
You assume right as Luke already clarified.
Again, yes, the reviewer is an expert in the field but not in the other application domain (GIS)
You must have second-guessed that from somewhere, but, just for the record, you are wrong. I have expertise in several domains where geometric computing is applicable: CAD, GIS, Computer Graphics (which is a quite different domain with significantly different requirements) and Computer Generated Imagery (also different).
that was of interest to reviewers. Otherwise the feedback would not have been so directed to one of the libraries vs the other.
Please don't forget that GGL was not readily available when Boost.Polygon was reviewed, and still not when I was drawing the conclusion for the result. So there was certainly no contention between libraries *at all*. Best -- Fernando Cacciola SciSoft Consulting, Founder http://www.scisoft-consulting.com

Hi Fernando, On Thu, Nov 19, 2009 at 1:17 AM, Fernando Cacciola <fernando.cacciola@gmail.com> wrote:
Hi Jose,
It is obvious that the GTL author and the reviewer had close ties
It might be implied but it is definitely not obvious, and more importantly, totally incorrect.
So for the record I have absolutely no close ties, of any nature, with neither Intel nor Luke.
Although my comment was poorly worded, and I apologized for it, I wanted to add that the review policy doesn't say anything against such ties, so they would not necessarily be a negative.
as clearly acknowledged in the GTL paper. I assume these were just email discussions
You assume right as Luke already clarified.
Again, yes, the reviewer is an expert in the field but not in the other application domain (GIS)
Otherwise the feedback would not have been so directed to one of the libraries vs the other.
Please don't forget that GGL was not readily available when Boost.Polygon was reviewed, and still not when I was drawing the conclusion for the result.
So there was certainly no contention between libraries *at all*.
Can you clarify? I don't understand what "contention" means here. Also, thank you for joining the thread. I didn't like that you were not present in the thread, which is why I contacted you. regards jose

John Phillips wrote:
Jose wrote:
The community objective is to get a generic library where multiple authors can eventually contribute their algorithms, like Boost BGL or the competing CGAL. This is one of the cases where cooperating is justified and worthwhile for everybody.
I don't recall a single reviewer stating the ability of multiple authors to contribute algorithms as an objective for them. I may be forgetting something, so please point me to it in the archives if so. If not, then I do not consider this a community objective. Historically, I see no evidence for it as a standard Boost concern, either. Again, please correct me if I'm missing something.
I agree that this openness to contributions is not something that often happens in Boost libraries, but for the record I would personally like the geometry efforts to have that sort of focus so that I can contribute algorithms. I believe that I expressed this long ago during one of the numerous "bike shed" point concept discussions; I may not have mentioned it explicitly in my Polygon and GGL reviews but I am flagging it now.
In the Polygon review, 4 people said no and gave their reasons for saying so. 6 people said yes, and also gave their reasons. The review manager, who is well versed in the technical issues of the library weighed the strength of the different arguments and found the yes arguments not only more numerous, but also more persuasive than the no arguments. He proceeded to address the no arguments in the review result and explain why he did not find them persuasive.
Here is an extract from my "No" review of Boost.Polygon: "As I will explain in detail below, my complaints are mainly things like excessive warnings and odd misfeatures in the interface. These are all things that could be fixed, and they perhaps only indicate that the library has arrived a little prematurely. Based only on these issues my verdict would be that the library could be accepted after some revisions. But we must also look at the bigger picture, i.e. the existence of other competing libraries 'in the wings'. ... In view of all this, I suggest that this library should be rejected for now. This will tell Barend that he still has an opportunity to present his library for review, and that it will be considered on a level playing field. If Barend's library is reviewed and found to be more complete, more performant and at least as usable as this library, then it should be accepted. On the other hand, if Barend's library is found to be deficient in some way (or is not submitted for review), then Luke will have an opportunity to resubmit an updated version of this library, which I anticipate should be accepted." In the review result announcement, Fernando listed many of my minor complaints about the library but did not address this suggestion, or the existence of GGL "in the wings", at all. Regards, Phil.

Phil Endecott wrote:
John Phillips wrote:
Jose wrote:
The community objective is to get a generic library where multiple authors can eventually contribute their algorithms, like Boost BGL or the competing CGAL. This is one of the cases where cooperating is justified and worthwhile for everybody.
I don't recall a single reviewer stating the ability of multiple authors to contribute algorithms as an objective for them.
Well, I certainly do, but I didn't think it was necessary to state this explicitly. Boost is an open source project after all, and, at least IMHO, "multiple authors contributing" is what open source is all about.
I may be forgetting something, so please point me to it in the archives if so. If not, then I do not consider this a community objective. Historically, I see no evidence for it as a standard Boost concern, either. Again, please correct me if I'm missing something.
I agree that this openness to contributions is not something that often happens in Boost libraries,
This is certainly true, but I always thought that this was just an unfortunate side effect of the strong library ownership and the general Boost process, not an explicit goal. I cannot state this strongly enough: I believe that Boost should go out of its way to foster contributions, since this is the heart and soul of great open source projects.
" In view of all this, I suggest that this library should be rejected for now. This will tell Barend that he still has an opportunity to present his library for review, and that it will be considered on a level playing field. If Barend's library is reviewed and found to be more complete, more performant and at least as usable as this library, then it should be accepted. On the other hand, if Barend's library is found to be deficient in some way (or is not submitted for review), then Luke will have an opportunity to resubmit an updated version of this library, which I anticipate should be accepted. "
For me the completeness argument is the most important. I didn't submit a review for Boost.Polygon because I wasn't particularly interested in it: all prior communication had already hinted that it would only do 2D with a focus on VLSI, and that another library with a much more interesting scope was about to be reviewed. I assumed that there would be some discussion of which library would be included, or of how they could be merged, before final acceptance, as there was for threading (wrong thinking on my part, obviously, so no one to blame but myself). Quote from John Phillips:
How are we supposed to determine whether the correct way to solve a broad problem is a single library that tries to satisfy everyone, or a few different libraries that are more focused on specific tasks?
Someone (IMO the Review Manager) should raise this exact question. It could even be a question in the default "Review Questionnaire" (e.g. "Is this library broad enough in scope? Is it too broad?" or something along these lines). Regards Fabio

Hi Phil,
In the review result announcement, Fernando listed many of my minor complaints about the library but did not address this suggestion, or the existence of GGL "in the wings", at all.
You are correct... as I said in another post, I really intended to address all objections, but as the GGL review started I had to compromise. So, FWIW, I totally agreed with Luke's own response to your suggestion: that there is absolutely no need to reject Boost.Polygon as a means of making sure GGL has a chance to be accepted. The one thing that I could not state in my results is this: I had followed GGL from the beginning, as I did GTL, and I know enough of both to be certain that both *can* coexist within Boost, even in spite of their high impedance in some regions of the fundamental base level. I believe they can and should coexist because I don't think either library is good enough at the realization of a truly generic common base, yet they offer somewhat complementary views of it. I can picture a future where the *experience* of these two proposals being used by many people with totally separate expectations and requirements will yield some insight into what it takes to have a really common and generic geometric playground. Best -- Fernando Cacciola SciSoft Consulting, Founder http://www.scisoft-consulting.com

Fernando Cacciola wrote:
Hi Phil,
In the review result announcement, Fernando listed many of my minor complaints about the library but did not address this suggestion, or the existence of GGL "in the wings", at all.
You are correct... as I said in another post I really intended to address all objections but as GGL review started I had to compromise.
I understand. I posted that comment primarily in reply to John Phillips' claim that "[we] were also all quite conscious of the existence of GGL. This existence was not considered a show stopper in any of the reviews posted".
So, FWIW, I totally agreed with Luke's own response to your suggestion: that there is absolutely no need to reject Boost.Polygon as a means of making sure GGL has a chance to be accepted.
The one thing that I could not state in my results is this: I had followed GGL from the beginning, as I did GTL, and I know enough of both to be certain that both *can* coexist within Boost, even in spite of their high impedance in some regions of the fundamental base level.
I'm not sure what you mean by "high impedance"...
I believe they can and should coexist because I don't think either library is good enough at the realization of a truly generic common base, yet they offer somewhat complementary views of it.
I can picture a future where the *experience* of these two proposals being used by many people with totally separate expectations and requirements will yield some insight into what it takes to have a really common and generic geometric playground.
Yuk :-( I would really like a single common set of basic concepts to code to. As I wrote in my GGL review, there are gratuitous differences in terminology like "within" vs. "contains". I have seen no discussion of whether we think this should be fixed, can be fixed, will be fixed etc. Let alone who should "cede ground" in order to arrive at a compromise, or any technical discussion of how e.g. the point concepts could inter-operate. Personally I'm totally unmotivated to contribute to "Boost.Geometry" if I have to either do everything twice or gamble on which one is going to "win" in the end. I would really like to see some discussion of this before this review ends, though sadly I will be going away tomorrow and may not be able to take part in much of the remaining discussion. Changing the subject slightly, I have also wondered over the last few days about acceptance criteria. Many of Barend's emails mention things that have not yet been implemented or are planned. I wonder whether we are, or should be, judging the library in terms of what has been submitted for review, or what we believe that the authors will eventually deliver? Based on previous proposals where "fully formed" libraries have been presented, and some comments from review managers / wizards(?) that "the library being reviewed is the one submitted", I have always assumed the former. In this case, I think that some reviewers are assuming the latter. Regards, Phil.

Hi Phil,
Fernando Cacciola wrote:
Hi Phil,
In the review result announcement, Fernando listed many of my minor complaints about the library but did not address this suggestion, or the existence of GGL "in the wings", at all.
You are correct... as I said in another post I really intended to address all objections but as GGL review started I had to compromise.
I understand. I posted that comment primarily in reply to John Phillips' claim that "[we] were also all quite conscious of the existence of GGL. This existence was not considered a show stopper in any of the reviews posted".
OK
So, FWIW, I totally agreed with Luke's own response to your suggestion: that there is absolutely no need to reject Boost.Polygon as a means of making sure GGL has a chance to be accepted.
The one thing that I could not state in my results is this: I had followed GGL from the beginning, as I did GTL, and I know enough of both to be certain that both *can* coexist within Boost, even in spite of their high impedance in some regions of the fundamental base level.
I'm not sure what you mean by "high impedance"...
I meant differences that are theoretically fixable but practically not quite without significant effort.
Yuk :-(
I would really like a single common set of basic concepts to code to.
I think we all do.
As I wrote in my GGL review, there are gratuitous differences in terminology like "within" vs. "contains". I have seen no discussion of whether we think this should be fixed, can be fixed, will be fixed etc. Let alone who should "cede ground" in order to arrive at a compromise, or any technical discussion of how e.g. the point concepts could inter-operate.
OK, here is another bit of the rationale I intended to include in the results: One of the things I considered when deciding on the Boost.Polygon results was the fact that the relative ordering of the reviews would necessarily force GGL to adapt to Boost.Polygon. IMNSHO, GGL *must* adapt now and remove all such gratuitous differences, keeping its own version of things *only* when there is enough justification. I fully realized all this and considered the amount of additional work it requires for GGL. I also realized and considered that the requirement could be considered unfair. However, I don't think it really is unfair, because both libraries had coexisted in parallel for years, yet they never merged. Say I rejected Polygon (GTL) and then we accepted GGL. Do we accept Polygon later on and require *that one* to adapt? Or do we just lose GTL for good? The way I saw it, the only way to make sure that the *community* would benefit from the best of both (and both have *great* things to offer) was to avoid rejection on the basis of future incompatibilities, even at the expense of requiring the second comer to adapt back, or to rationally argue that Polygon should change, so as to set the record that the incompatibility is considered Polygon's fault and thereby justify the burden put on users. Accepted libraries are not set in stone. Many have evolved a long way from the first accepted version, and I don't imagine Luke erroneously believing that, since his library was accepted first, he doesn't have to make corrections in the light of GGL and for the sake of the community. If fairness is to be considered, I guess one could argue that the first one to have been ready deserved the right to set the reference. Especially if we consider that GGL was not ready for review when Polygon was, so it is not that they ended up with such a relative ordering due to arbitrary scheduling. That could have made the current GGL burden unfair, but it's not how it happened.
Personally I'm totally unmotivated to contribute to "Boost.Geometry" if I have to either do everything twice or gamble on which one is going to "win" in the end.
Of course, but rejecting one of the libraries is not necessarily the best way to avoid ending up with competing choices. Sometimes, getting cooperation rolling requires a small push.
I would really like to see some discussion of this before this review ends, though sadly I will be going away tomorrow and may not be able to take part in much of the remaining discussion.
Changing the subject slightly, I have also wondered over the last few days about acceptance criteria. Many of Barend's emails mention things that have not yet been implemented or are planned. I wonder whether we are, or should be, judging the library in terms of what has been submitted for review, or what we believe that the authors will eventually deliver? Based on previous proposals where "fully formed" libraries have been presented, and some comments from review managers / wizards(?) that "the library being reviewed is the one submitted", I have always assumed the former. In this case, I think that some reviewers are assuming the latter.
FWIW I definitely accepted Boost.Polygon on the basis of what it is *now*, not what it could become. Yet at the same time, I also considered how, IMO, things would make sense to play out with GGL in the future, which I just outlined above. Best -- Fernando Cacciola SciSoft Consulting, Founder http://www.scisoft-consulting.com

Hi Fernando, Your rationale below was definitely missing. Thanks! I find it logical that authors from different application domains will find it difficult to collaborate on a single library unless someone with your experience/vision is guiding and contributing to the process. In my opinion, what is unfair is that instead of aiming for the initial objective, and voting on that, the scope was reduced before the review to guarantee that the library would be accepted (ignoring that there was another approach). I think this sets a bad precedent, and that is why I asked for the decision to be reversed: the timing of the reviews is being used against the higher objective.
Fernando Cacciola wrote: OK, here is another bit of the rationale I intended to include in the results: Accepted libraries are not set in stone. Many have evolved a long way from the first accepted version, and I don't imagine Luke erroneously believing that, since his library was accepted first, he doesn't have to make corrections in the light of GGL and for the sake of the community.
I think this is the theory. In practice, getting the library accepted is the major part.
If fairness is to be considered, I guess one could argue that the first one to have been ready deserved the right to set the reference. Especially if we consider that GGL was not ready for review when Polygon was, so it is not that they ended up with such a relative ordering due to arbitrary scheduling. That could have made the current GGL burden unfair, but it's not how it happened.
I completely disagree, given that Polygon's scope was 2D. I strongly think the contrary: GGL tries from the get-go to tackle a broader set of geometries/coordinates, although it has other issues. regards

Phil Endecott wrote:
I would really like a single common set of basic concepts to code to. As I wrote in my GGL review, there are gratuitous differences in terminology like "within" vs. "contains". I have seen no discussion of whether we think this should be fixed, can be fixed, will be fixed etc. Let alone who should "cede ground" in order to arrive at a compromise, or any technical discussion of how e.g. the point concepts could inter-operate.
Personally I'm totally unmotivated to contribute to "Boost.Geometry" if I have to either do everything twice or gamble on which one is going to "win" in the end.
I would really like to see some discussion of this before this review ends, though sadly I will be going away tomorrow and may not be able to take part in much of the remaining discussion.
Phil, I understand both your high expectations and your disappointment. If you look back in the archives you will find discussion between myself and Barend where we did go into the technical details of what the semantics of geometry concepts ought to be and how the concepts should be implemented. If we had been able to agree early on we could have collaborated. Barend generally was not receptive to my criticism of his designs for his concepts, in which I pointed out how what I was doing differed and *why* I was doing it differently. We arrive now at a state where his polygon concept:
- Requires that the user maintain an invariant positive winding direction, either clockwise or counterclockwise.
- Requires that the user maintain the invariant that the first and last point in a "ring" are identical.
- Requires that the user provide mutable access to an object with STL container interfaces to access the holes.
- Has two different ways to add points to a ring, which is needlessly confusing.
- Requires random access iterators (I think) for a ring as well as a linestring, or was it just linestring?
My polygon_with_holes concept, on the other hand:
- Does not require the user to maintain an invariant positive winding, but allows the user to specify that they do so and which orientation it has. If the user does not maintain the invariant, I check the winding at runtime when that information is needed.
- Does not require that the user enforce either an "open" or "closed" invariant with regard to whether the first and last vertex are identical. I check this case in all algorithms and handle either equally well.
- Uses iterator pairs for getting and setting the points of polygons as well as holes.
- Requires forward const iterators only.
My polygon_with_holes concept is more generic than Barend's polygon concept on every point, and I discussed this with him several times over the course of years.
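The runtime winding check Luke describes is conventionally done with the shoelace formula. The sketch below is a minimal illustration of that general technique, not code from either library; the `Point` type and function names are invented for the example.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical point type for illustration only.
struct Point { double x, y; };

// Twice the signed area of a ring (shoelace formula). The wraparound edge
// from the last vertex back to the first means an "open" ring and a
// "closed" ring (first vertex repeated at the end) yield the same sign:
// the repeated-vertex edge is zero-length and contributes nothing. So
// neither invariant needs to be imposed on the caller's data type.
double signed_area_2x(const std::vector<Point>& ring) {
    double a = 0.0;
    const std::size_t n = ring.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Point& p = ring[i];
        const Point& q = ring[(i + 1) % n];
        a += p.x * q.y - q.x * p.y;
    }
    return a;
}

// Positive signed area means counterclockwise winding.
bool is_counterclockwise(const std::vector<Point>& ring) {
    return signed_area_2x(ring) > 0.0;
}
```

Checking on demand like this trades a small linear-time cost, paid only when orientation actually matters, for not forcing a winding or open/closed invariant onto legacy polygon types.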
Now, in the review of his library, we have the ability to switch the invariant winding direction half implemented, still no plan to support checking the winding direction at runtime, a plan to support both open and closed semantics for the last vertex (but nothing implemented) with conflicting information about this in the documentation, and a polygon concept that can only adapt legacy polygon types if the user declares a proxy class that fakes an STL container interface for access to the holes. Many of these policies and invariants are a convenience to the library author, and it is hard to retrofit support for not requiring invariants from the data type once a large code base of algorithms has been developed that relies on them. It would have been better if Barend had adopted more generic polygon concept semantics earlier, but he seemed to regard the extra work entailed in making his polygon concept more generic as not worth the effort. I'd already put in the work to make my algorithms invariant to winding direction and open/closed, as well as to define a polygon concept interface that was easy to adapt for legacy types. What Barend never accepted was that some polygon data types enforce each of the four possible combinations of clockwise/counterclockwise and open/closed invariants, and that you can't call his correct() function on such a data type because it will just see the wrong winding direction fed back in and revert it to the original, and drop the redundant last vertex to save memory. I know this because Intel has a large number of legacy polygon data types and I designed my concepts to easily adapt all of them. Why would I throw out support for 3/4 of them? His offer to let me join his project, let him drive design decisions and throw away my own concepts hierarchy was not so tempting early on, and particularly unenticing later when it came with a threat that he would release his benchmark results during the review of my library if I didn't call off my review.
Changing the subject slightly, I have also wondered over the last few days about acceptance criteria. Many of Barend's emails mention things that have not yet been implemented or are planned. I wonder whether we are, or should be, judging the library in terms of what has been submitted for review, or what we believe that the authors will eventually deliver? Based on previous proposals where "fully formed" libraries have been presented, and some comments from review managers / wizards(?) that "the library being reviewed is the one submitted", I have always assumed the former. In this case, I think that some reviewers are assuming the latter.
My concern is that many reviewers are having trouble distinguishing between what is actually submitted and what is a planned feature. I get confused myself. Regards, Luke

Hi Luke, 2009/11/19 Simonson, Lucanus J <lucanus.j.simonson@intel.com>:
Why would I throw out support for 3/4 of them? His offer to let me join his project, let him drive design decisions and throw away my own concepts hierarchy was not so tempting early on, and particularly unenticing later when it came with a threat that he would release his benchmark results during the review of my library if I didn't call off my review.
what you are saying here is pretty fierce. I doubt very much that it is wise to make such statements on the list. From your wording one can get the impression that you have been blackmailed. I'm worried Joachim

Joachim Faulhaber wrote:
Hi Luke,
2009/11/19 Simonson, Lucanus J <lucanus.j.simonson@intel.com>:
Why would I throw out support for 3/4 of them? His offer to let me join his project, let him drive design decisions and throw away my own concepts hierarchy was not so tempting early on, and particularly unenticing later when it came with a threat that he would release his benchmark results during the review of my library if I didn't call off my review.
what you are saying here is pretty fierce. I doubt very much that it is wise to make such statements on the list. From your wording one can get the impression that you have been blackmailed.
I'm worried Joachim
Thanks Joachim, I hadn't read this piece yet. It is *unbelievable* what Luke is writing here, and *completely untrue*. I have *never* mailed anything like this, and I would never do so. Our last communication was on July 9 and ended friendly. Between July 3 and July 9 we exchanged 5 mails about the benchmark, in a friendly way; there were several people in the CC, and I helped Luke to find a deviation, and Luke suggested improvements for our algorithm. /I've never written anything about publishing benchmarks at all./ I really cannot believe what I'm seeing here above. I don't understand where this sudden statement comes from. Luke, if there are any resentments from the past, I regret that. Please let that go; let's work together in the future. Barend

Barend Gehrels wrote:
Joachim Faulhaber wrote:
Hi Luke,
2009/11/19 Simonson, Lucanus J <lucanus.j.simonson@intel.com>:
Why would I throw out support for 3/4 of them? His offer to let me join his project, let him drive design decisions and throw away my own concepts hierarchy was not so tempting early on, and particularly unenticing later when it came with a threat that he would release his benchmark results during the review of my library if I didn't call off my review.
what you are saying here is pretty fierce. I doubt very much that it is wise to do such statements on the list. From your wording one can get the impression that you have been blackmailed.
I'm worried Joachim
Thanks Joachim, I hadn't read this piece yet.
It is *unbelievable* what Luke is writing here, and *completely untrue*. I have *never* mailed anything like this, and I would never do so. Our last communication was on July 9 and ended friendly. Between July 3 and July 9 we exchanged 5 mails about the benchmark, in a friendly way; there were several people in the CC, and I helped Luke to find a deviation, and Luke suggested improvements for our algorithm. /I've never written anything about publishing benchmarks at all./
I really cannot believe what I'm seeing here above.
I don't understand where this sudden statement comes from. Luke, if there are any resentments from the past, I regret that. Please let that go; let's work together in the future.
I wasn't so sure about sending that email yesterday. I thought about leaving it unsent until the morning and re-reading it then to decide whether I should send it; then I just hit send. I'm sorry that I implied blackmail. I reread the old correspondence, and your suggestion that I join your project is much friendlier than my email from yesterday suggests (and than the way I remember feeling about it at the time): "Let me also repeat that you're still welcome to join us. We're prepared to add 45 and 90 manhattan geometries. You could implement your algorithms as specializations for those cases. We would form a strong team and having only one geometry library would be much less complicated an stronger chances of acceptance. You might think about this." I can't find any statement that you planned specifically to release the benchmarks during the review, but there is this: "The comparison program is in our SVN and, combined with the next preview, as we go to Boost Sandbox, it will automatically be publicly available such that they can reproduced by everyone." I guess I remember more clearly the feeling of being cornered by your benchmark results which I had no way to reproduce for myself from an algorithm that I didn't know how it worked when the review of my library was close. I went out of my way to be positive and friendly in my response, but I was concerned that your benchmark results would kill my library's chances in review, which they very nearly did. I agree that your intention was not blackmail, but I felt blackmailed. I'll do my best to let these feelings from the past go, as you say. I've wanted to work together from the beginning. Early on it was challenging because we were both learning and experimenting with syntax for doing what we wanted. Now there are still some semantic differences that prevent merging (as well as syntactic differences).
I think we should look at what is required for our libraries to interoperate first; then we can create a base that is common to both. Sorry again. I'll try to keep discussion technical and focus on achieving what is in everyone's best interests, Luke

On Thu, Nov 19, 2009 at 11:08 AM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
We're prepared to add 45 and 90 manhattan geometries.
I'm very much interested in having support for 90 manhattan geometries. Of course, 1-norm in n-space is what I really want, but we'll start within the current constraints. :-) Jon

Hi Luke, Thanks for your response; it clarifies enough. Sorry about your feelings; we really didn't know. We always wanted to invite you and inform you. That is how it was meant.
I'll do my best to let these feelings from the past go, as you say. I've wanted to work together from the beginning. Early on it was challenging because we were both learning and experimenting with syntax for doing what we wanted. Now there are still some semantic differences that prevent merging (as well as syntactic differences). I think we should look at what is required for our libraries to interoperate first; then we can create a base that is common to both.
Sorry again. I'll try to keep discussion technical and focus on achieving what is in everyone's best interests, Luke
Great, let's go on, your scenario sounds good. Regards, Barend

2009/11/19 Simonson, Lucanus J <lucanus.j.simonson@intel.com>
I guess I remember more clearly the feeling of being cornered by your benchmark results which I had no way to reproduce for myself from an algorithm that I didn't know how it worked when the review of my library was close. I went out of my way to be positive and friendly in my response, but I was concerned that your benchmark results would kill my library's chances in review ...
Luke, I really appreciate your very open reply. I can understand your feelings well. I think this is a very good place to start from for a creative future of both of your libraries (and authors) working together. Cheers Joachim

Joachim Faulhaber wrote:
2009/11/19 Simonson, Lucanus J <lucanus.j.simonson@intel.com>
I guess I remember more clearly the feeling of being cornered by your benchmark results which I had no way to reproduce for myself from an algorithm that I didn't know how it worked when the review of my library was close. I went out of my way to be positive and friendly in my response, but I was concerned that your benchmark results would kill my library's chances in review ...
Luke, I really appreciate your very open reply. I can understand your feelings well. I think this is a very good place to start from for a creative future of both of your libraries (and authors) working together.
I, too, was pleased to see Luke's reply. I was troubled by the angst and even anger in some of the traffic related to geometry over the recent months. I thought some responses were excessive and others confrontational. Things didn't escalate beyond reason, but were frequently edgy. Luke's confession of his perceptions has, I think, cleared the air. I'm likewise pleased that Barend has expressed his willingness to forget the past and reasserted his willingness to cooperate with Luke. It can be hard to control one's feelings, but assuming the best of others and openly revealing one's own concerns will go a long way toward keeping things civil. I think that's what Luke and Barend are now trying to do. I hope that one day the geometry efforts can be merged into a library even better than the two before us now. Time and experience will reveal the right approach where the two diverge. It needn't be personal. _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

Simonson, Lucanus J wrote:
I wasn't so sure about sending that email yesterday. I thought about leaving it unsent until the morning and re-reading it in the morning to decide whether I should send it, then I just hit send.
I'm sorry ... more stuff elided by Patrick ...
I just want to say that this email was an amazingly brave and wonderfully vulnerable statement that shows good character and a willingness to develop great character. Good show. How rare in the technical world. My hat is off to you Luke. Patrick

Patrick Horgan wrote:
Simonson, Lucanus J wrote:
I wasn't so sure about sending that email yesterday. I thought about leaving it unsent until the morning and re-reading it in the morning to decide whether I should send it, then I just hit send.
I'm sorry ... more stuff elided by Patrick ...
I just want to say that this email was an amazingly brave and wonderfully vulnerable statement that shows good character and a willingness to develop great character. Good show. How rare in the technical world. My hat is off to you Luke.
Patrick
I'm very pleased with the behavior of both Luke and Barend in this. What could have become an ugly situation is now not because of how the two of you dealt with it. Everyone makes mistakes. The true sign of character is in how you deal with them. Thanks for the example, it has brightened my day. John

I reacted to Luke's answer yesterday, but I kept thinking about all this.
[...] I went out of my way to be positive and friendly in my response, but I was concerned that your benchmark results would kill my library's chances in review, which they very nearly did. [...]
Luke, I want to make my apologies about all those benchmarks, explicitly. Since yesterday, realizing your feelings, your real feelings, I'm convinced now that I should not have published them during your review. I regret this, and I want to apologize now. I measured things; however, a week ago you opened my eyes to the fact that within a night the reverse could be measured. Having read your open message, I first didn't believe you, honestly, but after your answer I started to feel the threat you must have felt. We wanted to be open with you, convince you and invite you, very true, but all that together was unfortunate. I did not realize that because you stayed friendly, as you've written. But now I'm feeling very sorry about all this. When your review was there, I first didn't want to vote but in the end I did. I voted mainly based on my own benchmarks. But they can be reversed. My description was way too explicit and inappropriate. I apologize for that too. I would wish, if it were possible, to retract my no-vote and all the objections I expressed. I was surprised, and I really appreciate, that you didn't mirror that: you voted for accepting our library, with reservations, but in an impartial way. Yesterday I wrote that we should go on and you accepted that immediately. Thanks for that, great, and again, I'm feeling very sorry. Best regards, Barend

Barend Gehrels wrote:
I reacted to Luke's answer yesterday, but I kept thinking about all this.
[...] I went out of my way to be positive and friendly in my response, but I was concerned that your benchmark results would kill my library's chances in review, which they very nearly did. [...]
Luke, I want to make my apologies about all those benchmarks, explicitly. Since yesterday, realizing your feelings, your real feelings, I'm convinced now that I should not have published them during your review. I regret this, and I want to apologize now. I measured things; however, a week ago you opened my eyes to the fact that within a night the reverse could be measured.
Having read your open message, I first didn't believe you, honestly, but after your answer I started to feel the threat you must have felt. We wanted to be open with you, convince you and invite you, very true, but all that together was unfortunate. I did not realize that because you stayed friendly, as you've written. But now I'm feeling very sorry about this all.
When your review was there, I first didn't want to vote but in the end I did. I voted mainly based on my own benchmarks. But they can be reversed. My description was way too explicit and inappropriate. I apologize for that too. I would wish, if it were possible, to retract my no-vote and all the objections I expressed.
I was surprised, and I really appreciate, that you didn't mirror that, you voted for accepting our library, with reservations, but in an impartial way.
Yesterday I wrote that we should go on and you accepted that immediately. Thanks for that, great, and again, I'm feeling very sorry.
Best regards, Barend
Thank you, this means a lot to me. Luke

Joachim Faulhaber wrote:
what you are saying here is pretty fierce. I doubt very much that it is wise to make such statements on the list. From your wording one can get the impression that you have been blackmailed.
I don't think it is wise to make any statements in this thread. All these bad and unjustified attacks against Lucanus Simonson and Fernando Cacciola, intermixed with short excuses and continued attacks. (The attacks come from Jose, not Barend, just to be clear here.) OK, finally Luke lost his self-restraint. So what? Barend's response is understandable, but the "Please let that go, let's work together in the future." is probably inappropriate here, because the whole discussion was about the fact that they can't be forced to work together. And Luke already gave good and understandable reasons why it is unattractive for him to merge his work with GGL. But I admit that "..., let's work together in the future." can also just mean to continue to have fruitful discussions, without any plans to merge the resulting work. This is completely appropriate, of course. I just want to point out my support for Luke, but not in the sense that I wouldn't also support Barend. (As long as this doesn't imply forcing Luke to merge his work with GGL.) Regards, Thomas

On Thu, Nov 19, 2009 at 2:05 PM, Thomas Klimpel <Thomas.Klimpel@synopsys.com> wrote:
Joachim Faulhaber wrote:
what you are saying here is pretty fierce. I doubt very much that it is wise to make such statements on the list. From your wording one can get the impression that you have been blackmailed.
I don't think it is wise to make any statements in this thread. All these bad and unjustified attacks against Lucanus Simonson and Fernando Cacciola, intermixed with short excuses and continued attacks. (The attacks come from Jose, not Barend, just to be clear here.)
Hi Thomas, I apologized for some bad wordings on my side. Questioning the process is not an attack. I am more interested in your objective viewpoint (which is hard to maintain in this thread), but you already provided it in your reply to me before, and it was useful.
I just want to point out my support for Luke, but not in the sense that I wouldn't also support Barend. (As long as this doesn't imply forcing Luke to merge his work with GGL.)

Hi,
But I admit that "..., let's work together in the future." can also just mean to continue to have fruitful discussion, without any plans to merge the resulting work. This is completely appropriate, of course.
Maybe it is good to state here that this was what I meant: I did not mean merging, but letting the separate libraries work together. Thanks for pointing this out. Barend

Phil Endecott wrote:
Personally I'm totally unmotivated to contribute to "Boost.Geometry" if I have to either do everything twice or gamble on which one is going to "win" in the end.
I would really like to see some discussion of this before this review ends, though sadly I will be going away tomorrow and may not be able to take part in much of the remaining discussion.
After using the library in my testing for the review, I have to admit that I would have reservations about this as well. I really want to contribute to the geometry efforts as I believe I have something to offer, but my efforts so far with GGL have been fairly frustrating. Rather, I find that parts of the interface just feel clunky to use and that the names of things are not so natural. As an example, when I tested the segment intersection algorithm, I found that the bundled segment type holds its points by reference, so I couldn't have a vector of those segments without modifying it or rolling my own segment type. The call to check for intersections is named 'relate', though it really only calculates the intersection points; the name would seem to suggest it performs a more thorough topological characterization. Perhaps these come down to preference, but the feelings are there.
Simonson, Lucanus J wrote: My concern is that many reviewers are having trouble distinguishing between what is actually submitted and what is a planned feature. I get confused myself.
I have this problem as well. My impression of the work is that it's unrefined at this point, and that answers to emerging problems are being concocted on the fly. That is not to say that I don't appreciate all the work and difficulties involved; I know, as I do this kind of work routinely. The fact that the review library still uses double with C-style casting and 1e-10 tolerance checks tells me that the library isn't well tested under things like GMP. The conclusion from that is that claims are being made about how it should work theoretically as though they were already fact. Regards, Brandon

The fact that the review library still has the use of double with C-style casting and 1e-10 tolerance checks tells me that the library isn't well tested under things like GMP. The conclusion from that is that there are claims being made about how it should work theoretically as though they are already fact.

- It is working in practice, as I just described on that new web-page and in the post yesterday.
- The page is new but the approach was already there before the review. See the x02_numeric_adaptor_example.cpp on the doc page.
- The approach was already there, long before the review (see postings on this list dating from 30/03).
- The GMP/CLN approach was implemented in various algorithms, and it is tested there.
- It was not implemented in *all* algorithms, as I mailed on this list before, including the reason for that.
- The only thing I did yesterday was add it to intersection and union as well. This is not submitted, so as not to confuse the review process; however, it is available for people who want it.
- That 1e-10 tolerance occurs only once in the whole code, at a place setting a boolean flag that is often set anyway. That is noted there.
- Furthermore, we compare: integer with ==, FP with epsilon (like in Boost.Test), and GMP or CLN with ==. See ggl/util/math.hpp.
- There is nothing concocted on the fly in this review, besides maybe things really not implemented (as in our answer to Pierre about infinity), but they are not presented as being there or already planned.

Barend

Barend Gehrels wrote:
- It is working in practice, as I justed described on that new web-page and in the post yesterday.
Perhaps it is, at least as far as you've tested it since fixing some of the issues found in the review. The casting bits, both explicit and implicit in things like less<double>/greater<double>, are still there and would mean your GMP types are cast to double for these predicates.
- The GMP/CLN approach was implemented in various algorithms, and it is tested there - It was not implemented in *all *algoritms, as I mailed on this list before, including the reason for that - the only thing I did yesterday was that I added it to intersection and union as well. This is not submitted to not confuse the review process, however, it is available for people who want it
As I said above (and please everyone, don't take my word for it.. do a search for 'double' on the project), the locations of these instances would seem to pollute much of the core algorithms which are likely interdependent.
- that 1e-10 tolerance occurs only once in the whole code, at a place setting a boolean flag is often set anyway. That is noted there - furthermore we compare : integer with ==, FP with epsilon (like in Boost.Test) and GMP or CLN with == . See ggl/util/math.hpp
Yes, I agree there was only one instance of this, but it is relevant nonetheless.
- there is nothing concocted on the fly of this review, besides maybe things really not implemented (as our answer to Pierre about infinity), but they are not presented as being there or already planned
I would consider changing the things I have mentioned as being concocted on the fly. How can you claim that GGL works with GMP when you have code that casts it to a double internally? I consider this point to be incontrovertible. Brandon

Perhaps it is at least as far as you've tested it since fixing some of the issues found in the review. The casting bits both explicit and implicitly in things like less<double>/greater<double> are still there and would mean your GMP types are cast to double for these predicates.
But where did I write that *everything* works with GMP? I thought I mentioned, everywhere I talked about this, that it is working for *some* algorithms (area etc.). See what I just answered:
- It was not implemented in *all *algoritms, as I mailed on this list before, including the reason for that
As I said above (and please everyone, don't take my word for it.. do a search for 'double' on the project), the locations of these instances would seem to pollute much of the core algorithms which are likely interdependent.
A search will find many, even in area. The default calculation type for area is at least a double, as stated in the source code:

// Else, use the pointtype, but at least double

This means: if you use float, area calculation uses double, unless specified otherwise. But it also means: if you use GMP, area calculation uses GMP.
- that 1e-10 tolerance occurs only once in the whole code, at a place setting a boolean flag is often set anyway. That is noted there - furthermore we compare : integer with ==, FP with epsilon (like in Boost.Test) and GMP or CLN with == . See ggl/util/math.hpp
Yes, I agree there was only one instance of this, but it is relevant nonetheless.
I don't think it is relevant. I explicitly commented on that in the source code there: "So it is never harmful to do this with a larger epsilon." I know that 1e-10 is too large for FP comparisons.
- there is nothing concocted on the fly of this review, besides maybe things really not implemented (as our answer to Pierre about infinity), but they are not presented as being there or already planned
I would consider changing the things I have mentioned as being concocted on the fly. How can you claim that GGL works with GMP when you have code that casts it to a double internally? I consider this point to be incontrovertible.
See above. I still claim this. Area really uses GMP and GMP only, if specified. If you specify CLN, it uses CLN and CLN only. I've explicitly tested it to avoid casts. What I didn't claim is that it is working everywhere, and I think that I've mentioned that in all my postings. Didn't check them now, but if not, sorry about that. Finally, I didn't react on this:
I really want to contribute to the geometry efforts as I believe I have something to offer
I was happy to read that part, of course.
Regards, Barend

Hi Barend,
I know that 1e-10 is too large for FP comparisons.
Really? Let's see. Given these 2D straight line segments:

s0: (-e, 0) - (e, h)
s1: (0, 0) - (0, h)
s2: (e, 0) - (-e, h)

where 'e' is a really small number, say 1e-5, and h a really big number, say 1e5. Now compute the intersection points p between s0 and s1, and q between s1 and s2. You can see from basic reasoning that p and q should be exactly coincident, but what are the computed coordinates of p and q using 'double' (you can even try it with a straight C expression, FWIW, since the goal is to understand FP behaviour)? What is their distance (i.e. the error)? That example is off the top of my head, so the points might be closer than I imagine, but let's see what the results are. Best -- Fernando Cacciola SciSoft Consulting, Founder http://www.scisoft-consulting.com

Hi Fernando, Fernando Cacciola wrote:
Hi Barend,
I know that 1e-10 is too large for FP comparisons.
Really?
Let's see:
Given these 2D straight line segments:
s0: (-e,0) - (e ,h) s1: (0 ,0) - (0 ,h) s2: (e ,0) - (-e,h)
where 'e' is a really small number, say 1e-5, and h a really big number, say 1e5
Now compute the intersection points p between s0 and s1, and q between s1 and s2.
You can see from basic reasoning that p and q should be exactly coincident, but what are the computed coordinates of p and q using 'double' (you can even try it with a straight C expression, FWIW, since the goal is to understand FP behaviour)? What is their distance (i.e. the error)?
That example is off the top of my head, so the points might be closer than I imagine, but let's see what the results are.
That epsilon on which the discussion started is only used to set a boolean flag, "trivial", which might be set anyway. But I will try it and report back; it is interesting, thanks. Regards, Barend

- It is working in practice, as I justed described on that new web-page and in the post yesterday.
Perhaps it is at least as far as you've tested it since fixing some of the issues found in the review. The casting bits both explicit and implicitly in things like less<double>/greater<double> are still there and would mean your GMP types are cast to double for these predicates.
With apologies, I haven't had a chance to get involved in this debate... but just to let you know that Boost.Math has a number of concept archetypes that can test for these and other errors: http://www.boost.org/doc/libs/1_41_0/libs/math/doc/sf_and_dist/html/math_too... No need to search around for possible mistakes or casts... if there is an "instantiate-everything" test, just throw the two real-number archetypes at the templates and see what breaks :-) HTH, John.

As an example, when I tested the segment intersection algorithm, I found that the bundled segment type holds its points by reference. So I couldn't have a vector of those points without modifying or rolling my own segment type. The call to check for intersections is named 'relate'. Though it really only calculates the intersection points. The name would seem to suggest it performs a more thorough topological characterization. Perhaps these come down to preference, but the feelings are there.
The relate call has various policies; one of them is calculating intersection points. Others just check how segments relate, but avoid the calculation, because it is not necessary for all algorithms. This is the reason to call it "relate".
The segment type provided by default indeed holds references, and it does that on purpose: it avoids copying points all the time. AFAIK the segment intersection algorithm does not use those references; it refers to segments in the generic way, using get<0,0>(segment) etc. It makes no difference whether you're using a segment by reference or a segment by value. Barend

Barend Gehrels wrote:
The segment type provided by default is indeed holding references and it does that on purpose. That avoids copying points all the time. AFAIK the segment intersection algorithm does not use those references, it refers to segments in the generic way, using get<0,0>(segment) etc. It makes no difference if you're using a segment by reference or a segment by value.
Maybe so, but that still doesn't help when you want to have a collection of segments. For example, in my testing I wanted to calculate the intersection between a huge number of segments, and with GGL's you can't do this:

typedef boost::tuple<double, double> point_type;
typedef ggl::segment<point_type> segment_type;

//! segment_type does not model the default constructible concept.
std::vector< segment_type > segments;

Brandon

Hi Brandon,
The segment type provided by default is indeed holding references and it does that on purpose. That avoids copying points all the time. AFAIK the segment intersection algorithm does not use those references, it refers to segments in the generic way, using get<0,0>(segment) etc. It makes no difference if you're using a segment by reference or a segment by value.
Maybe so, but that still doesn't help when you want to have a collection of segments. For example, in my testing I wanted to calculate the intersection between a huge number of segments, and with GGL's you can't do this:
typedef boost::tuple<double, double> point_type; typedef ggl::segment<point_type> segment_type;
//! segment_type does not model the default constructible concept. std::vector< segment_type > segments;
That is true indeed. The segment is not meant for that. However, you can use it like below. This was meant to be included in the review, tests folder, but apparently I forgot that, or considered it as too rough, sorry. I hope that the speed of my answer will prove here that at least this one is not concocted on the fly :-) Regards, Barend

// Generic Geometry Library test file
//
// Copyright Barend Gehrels, 1995-2009, Geodan Holding B.V. Amsterdam, the Netherlands.
// Copyright Bruno Lalande 2008, 2009
// Use, modification and distribution is subject to the Boost Software License,
// Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

#include <iostream>

#include <ggl_test_common.hpp>

#include <ggl/geometries/concepts/segment_concept.hpp>
#include <ggl/geometries/point.hpp>
#include <ggl/geometries/segment.hpp>
#include <ggl/geometries/register/point.hpp>
#include <ggl/geometries/register/segment.hpp>
#include <ggl/geometries/adapted/c_array_cartesian.hpp>
#include <ggl/geometries/adapted/tuple_cartesian.hpp>
#include <ggl/util/write_dsv.hpp>

#include <test_common/test_point.hpp>

template <typename P>
void test_all()
{
    typedef ggl::segment<P> S;

    P p1;
    P p2;
    S s(p1, p2);
    BOOST_CHECK_EQUAL(&s.first, &p1);
    BOOST_CHECK_EQUAL(&s.second, &p2);

    // Compilation tests, all things should compile.
    BOOST_CONCEPT_ASSERT( (ggl::concept::ConstSegment<S>) );
    BOOST_CONCEPT_ASSERT( (ggl::concept::Segment<S>) );

    typedef typename ggl::coordinate_type<S>::type T;
    typedef typename ggl::point_type<S>::type SP;
    //std::cout << sizeof(typename coordinate_type<S>::type) << std::endl;

    typedef ggl::segment<const P> CS;
    //BOOST_CONCEPT_ASSERT( (concept::ConstSegment<CS>) );
    CS cs(p1, p2);

    typedef typename ggl::coordinate_type<CS>::type CT;
    typedef typename ggl::point_type<CS>::type CSP;
}

struct custom_point
{
    double x, y;
};
struct custom_segment
{
    custom_point one, two;
};
template <typename P>
struct custom_segment_of
{
    P p1, p2;
};
struct custom_segment_4
{
    double a, b, c, d;
};

GEOMETRY_REGISTER_POINT_2D(custom_point, double, ggl::cs::cartesian, x, y)
GEOMETRY_REGISTER_SEGMENT(custom_segment, custom_point, one, two)
GEOMETRY_REGISTER_SEGMENT_TEMPLATIZED(custom_segment_of, p1, p2)
GEOMETRY_REGISTER_SEGMENT_2D_4VALUES(custom_segment_4, custom_point, a, b, c, d)

template <typename S>
void test_custom()
{
    S seg;
    ggl::set<0,0>(seg, 1);
    ggl::set<0,1>(seg, 2);
    ggl::set<1,0>(seg, 3);
    ggl::set<1,1>(seg, 4);
    std::ostringstream out;
    out << ggl::dsv(seg);
    BOOST_CHECK_EQUAL(out.str(), "((1, 2), (3, 4))");
}

int test_main(int, char* [])
{
    test_all<int[3]>();
    test_all<float[3]>();
    test_all<double[3]>();
    //test_all<test_point>();
    test_all<ggl::point<int, 3, ggl::cs::cartesian> >();
    test_all<ggl::point<float, 3, ggl::cs::cartesian> >();
    test_all<ggl::point<double, 3, ggl::cs::cartesian> >();

    test_custom<custom_segment>();
    test_custom<custom_segment_of<ggl::point<double, 2, ggl::cs::cartesian> > >();
    test_custom<custom_segment_of<custom_point> >();
    test_custom<custom_segment_4>();

    return 0;
}

Barend Gehrels wrote:
That is true indeed. The segment is not meant for that. However, you can use it like below. This was meant to be included in the review, tests folder, but apparently I forgot that, or considered it as too rough, sorry. I hope that the speed of my answer will prove here that at least this one is not concocted on the fly :-)
Sure, that works. My advice though would be to provide defaults with value semantics, just for QoL reasons for the users. I don't know how expensive the cost of a copy would be, but I'm guessing it's not huge. You can always have a ref_segment too, and in memory-tight situations the users with that need will be served as well. Brandon

Hi Brandon,
My advice though would be to provide defaults with value semantics just for QoL reasons for the users. I don't know how expensive the costs of a copy would be.. but I'm guessing it's not huge. You can always have a ref_segment too, and in memory tight situations the users with that need will be treated as well.
Didn't react on this yet, but you are right. It is better to have a ggl::segment like you describe, and provide an additional ggl::ref_segment with references. Thanks, Barend

Barend Gehrels wrote:
I hope that the speed of my answer will prove here that at least this one is not concocted on the fly :-)
Impressive, you are really a fast coder. Still, I guess it is concocted on the fly :-)

Thomas Klimpel wrote:
Barend Gehrels wrote:
I hope that the speed of my answer will prove here that at least this one is not concocted on the fly :-)
Impressive, you are really a fast coder. Still, I guess it is concocted on the fly :-)
I can see I'm being taken very seriously here. ;) Look, I'm not just trying to be a bear, but this is a review. I think it's important that we get this done properly, as whatever ends up being accepted will have long-ranging consequences for the users.

Hi Brandon,
Look, I'm not just trying to be a bear, but this is a review. I think its important that we get this done properly as whatever ends up being accepted will have long ranging consequences on the users.
Sure, I agree. Good point. If you've more questions, they are welcome of course. Regards, Barend

John Phillips wrote:
...lots of great stuff elided here...
Thanks for your effort combating the dark forces of FUD ;) It was needed and well done. People like you keep me sane. best regards, Patrick

Patrick Horgan wrote:
John Phillips wrote:
...lots of great stuff elided here...
Thanks for your effort combating the dark forces of FUD;) It was needed and well done. People like you keep me sane.
Yes, thank you. Regards, Thomas

Hi Jose,
CGAL, which is focused on computational geometry, and which Fernando knows well, ends its philosophy page with this text that is interesting.
http://www.cgal.org/philosophy.html -------------------------------------------------- Beyond Robustness
Just for the record, I don't just know CGAL well. I am one of its active developers and even contributed to the internal discussions that led to this piece of text you just quoted (for no clear reason though). I also invested quite some time working with BOTH the GTL and GGL authors on the technical details of the robustness issues in their respective libraries, so I know in quite some detail how the libraries handle this, FWIW (though as I say I am not sure what your point is here). Best -- Fernando Cacciola SciSoft Consulting, Founder http://www.scisoft-consulting.com

On Thu, Nov 19, 2009 at 12:30 AM, Fernando Cacciola <fernando.cacciola@gmail.com> wrote:
Hi Jose,
CGAL, which is focused on computational geometry, and which Fernando knows well, ends its philosophy page with this text that is interesting.
Just for the record, I don't just know CGAL well. I am one of its active developers and even contributed to the internal discussions that led to this piece of text you just quoted (for no clear reason though)
I also invested quite some time working with BOTH the GTL and GGL authors on the technical details of the robustness issues in their respective libraries, so I know in quite some detail how the libraries handle this, FWIW (though as I say I am not sure what your point is here)
My point here is that it is obvious what CGAL is good at and what its strengths are. What do "the Boost generic geometry efforts" want to be when they grow up?
participants (17)
- Barend Gehrels
- Brandon Kohn
- Fabio Fracassi
- Fernando Cacciola
- Joachim Faulhaber
- John Maddock
- John Phillips
- Jonathan Franklin
- Jose
- Patrick Horgan
- Paul A. Bristow
- Phil Endecott
- Scott McMurray
- Simonson, Lucanus J
- Stewart, Robert
- Thomas Klimpel
- Zachary Turner