
Jose wrote:
On Tue, Nov 17, 2009 at 5:11 PM, John Phillips <phillips@mps.ohio-state.edu> wrote:
Ok, It's a solution, maybe not the best one but I lack the in-depth expertise judge.
...
I missed the "to" in "expertise TO judge". I mean in-depth technical experience, which is fundamentally important. On the other hand, I gave 5 reasons (and could give more) why the review was flawed, and some people who voted yes to GTL added further comments. The reasons are in the separate thread "GTL vs GGL - rationale". I questioned the whole planning of the review and the fact that a combined library should be possible (and the GGL authors actually wanted to make it possible).
I did not concern myself with the typo. I am concerned that you want the review result for a library overturned when you also claim you don't have the expertise to judge whether that is the best decision. In such a case, I think the more appropriate course is to express your concern about the process without insisting on overturning a result because you believe the manager did something wrong. As for your 5 listed reasons in the other thread: yes, I read them. As I have pointed out several times, I try to read everything that applies to the review process on the list. However, your base concern seems to be that 60% of the votes supporting the library should not be enough to justify acceptance, even when the manager explains the reasons in the review report. (Please notice that the reasons presented discuss real technical issues and include enough detail to follow the ideas. This is a good thing in a technical conversation.) Luke has replied to your post in that thread as well.
Also, if you check my replies to Luc, the scope of GTL and the name were changed just before the review, which is ok in general but not ok given that a broader library with great overlap would be reviewed soon after the first review.
This contains several factual inaccuracies. First, you claim that the name and scope changed "just before the review." This is not true. According to the gmane archives, Luke sent a message to the list on June 19th informing the list that the name was changed, and that the reason for the name change was that it better reflected the true scope of the library. He had originally hoped to produce a broader library, but the library he actually produced fit this name better. This was 4 days before he requested a review, and 6 days before the start date was selected. The review started more than 2 months after this name change. As I have stated in other replies in this thread, the request for a review of GGL came more than a month after the review period for Polygon ended. Shy of impressive tarot skills, there was no way for Luke, Fernando, the Wizards, or anyone else to know what the review schedule would be for GGL while the Polygon review was ongoing. So, as far as I can see, all of your arguments about time sequencing are factually incorrect, and so not persuasive in the least.
Zachary Turner's answer at the beginning of this thread summarizes it nicely:
-----------------------------------------------------------------------------------------
Now we are in the unfortunate situation of either a) having 2 libraries that have massive overlap but each providing something unique, b) withdrawing a library that has already been accepted (although in reality this won't happen), or c) rejecting a library which, if compared directly against the other library, may have been preferable if users had initially been asked to choose only one.
-----------------------------------------------------------------------------------------
I agree that it is a less than ideal situation. Ideal would be to have perfect information about past and future, and also to always know the right scales of application for the abstractions we use to design concepts and code. However, since we live in the real world, this is not available. We have to base decisions on the information available. During the Polygon review, the now-known fact that GGL would be submitted soon was not available, so it was not possible to plan based on it. Further, at the time of the Polygon review, Barend and team were working on their code and did not have the latest version ready for scrutiny. After the review period was done, Barend told Phil that they hoped to have something to show in October. (In an odd case for software deadlines, they even did.) Prior to that, no dates were given that I can find or recall. So, comparing and choosing only one was not feasible.

Comparing to other available facilities (such as CGAL) was done to some extent, and would be quite reasonable to an even greater extent. However, such comparisons take a lot of time, and no reviewer felt driven to do one in depth. Now that the Polygon review is done and the GGL review has begun, some comparisons between them are possible. This is an obvious part of the GGL review. So reviewers should be asking whether GGL adds enough to Boost to justify having it, as well.

I have already outlined my personal hope for the longer term future if both libraries are in Boost, but I want to add a little to it. If the two libraries are incompatible in some ways, then the Boost user base will help determine which concepts and methods are to be preferred. They will do this by using the library that works better and provides more value for less work. This is the experience-based guidance needed to produce a later joint library, and it has the advantage that the choices made are already known to work for coders in the real world, unlike trying to fully describe a large and complicated domain full of abstractions and concepts before producing the library.
And I agree with him on the "(although in reality this won't happen)" part, yet I think it should happen, because the current outcome sets a really bad precedent. I blame only the review policy and the schedule.
Since you blame the policy and schedule, please provide a proposed change to the policy that would prevent this from happening. When you provide it, remember the factual details I have given about when the information needed to make a decision was available, since any so-called solution that ignores these details is useless.
...
Well, I think the points are above, and there are technical issues, but fundamentally it boils down to a process flaw.
Sorry, I have seen where others talk about technical issues, but very little of it from your posts. Please direct me to where you detail them.
Both library authors and Fernando are really experienced in their domains, and I am not questioning that. I am saying that this is a broad field, like Graphs, Networking or Graphics, and does require some coordination, especially when multiple authors want to contribute, but it didn't happen!
If you believe Boost should have a process to require cooperation between different groups working on some problem domains, then please propose such a process to the list. Then the other members of Boost can look at the details of a real proposal and decide if it works for them. Especially try to get input from the Moderators and from authors of already reviewed libraries, since they have the most useful information for such questions. If they don't care enough to get involved, then your proposal is unlikely to go anywhere. Think of it as voting by apathy that they are satisfied with the status quo.
However, in my own experience as a review manager I can tell you that there are sometimes very strongly held opinions in reviews that are simply technically wrong. So, just having a strong opinion against the library is not a good argument to overturn the review. (This should not be read to imply that the opinions against Polygon were technically wrong. I have not put the work into the technical details to have an opinion on that.)
Sure, but I don't think this is a list where people are fooled by technically wrong arguments. One piece of evidence in reviews is benchmarks: you publish them and publish the code to run them. The benchmarks could still be flawed if nobody cares to check and run them, but they are better than bare statements about what a library does. Another key piece of evidence is code examples, so you can understand the application domain, what the library does, and how it does it.
I come from a scientific computing background, so numerical methods and their pitfalls are very familiar territory for me. I rarely write GUIs, so the issues there are not familiar. I have seen Boost members who are very good at what they do be misled by incorrect arguments about numerics. I'm sure I could be misled by incorrect arguments about tricky subjects I'm not familiar with. The range of Boost is gigantic, and all of us have holes in our understanding, even the very best of us. Anyone can be fooled by technically wrong arguments, so the voices of experts in a domain really should count for more, especially if you don't know the details yourself. Asking for a clear explanation from the experts is a good idea, but all else being equal, the smart money bets on the expert.

Then we have questions like benchmarks and other pieces of evidence. What evidence is important in each domain? In a geometry library intended to process large sets of polygons, benchmarks and scaling, along with accuracy, are very important. However, in some applications pure speed is so important that users will happily give up accuracy to get it. In other applications, the desired trade-off is exactly the opposite. High speed with inaccurate results could be disastrous in some of Luke's applications, even though as fast as possible is still the goal. So, we need to know about the problem domain to even decide what evidence matters. This is part of why benchmarks are welcome in Boost documentation, but in general are not required.

Code examples are just a part of good documentation, and so are required. However, I think it is naive to believe you can understand the application domain from code examples. I could produce hundreds of code examples of using statistical tests on data, but you still would not know the limitations on proper application of such tests after seeing them. You would know how to add the tests to your own code, but not how to interpret the results or whether you are applying the right test for your situation.
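For instance, here is a minimal sketch of a one-sample t-test using Boost.Math (the data and the null-hypothesis mean are made up purely for illustration). The mechanics are trivially copyable; nothing in the code can tell you whether a t-test is even valid for your data:

#include <boost/math/distributions/students_t.hpp>
#include <cmath>
#include <cstddef>
#include <iostream>

int main()
{
    // Hypothetical sample and null-hypothesis mean, purely for illustration.
    const double data[] = { 5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7 };
    const std::size_t n = sizeof(data) / sizeof(data[0]);
    const double mu0 = 5.0;

    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) sum += data[i];
    const double mean = sum / n;

    double ss = 0.0;
    for (std::size_t i = 0; i < n; ++i) ss += (data[i] - mean) * (data[i] - mean);
    const double sd = std::sqrt(ss / (n - 1));   // sample standard deviation

    // t statistic for H0: true mean == mu0
    const double t = (mean - mu0) / (sd / std::sqrt(double(n)));

    // Two-sided p-value from Student's t distribution with n - 1 degrees of freedom.
    boost::math::students_t dist(n - 1.0);
    const double p = 2 * boost::math::cdf(boost::math::complement(dist, std::fabs(t)));

    // This prints a p-value, but it cannot check the assumptions that make the
    // test meaningful (independent samples, approximate normality, and so on).
    std::cout << "t = " << t << ", two-sided p = " << p << "\n";
}

Anyone can paste that and get a number out; knowing whether the number means anything is where the domain expertise comes in.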
...
I don't want to be unfair with my comments; they are not specific to the Wizards but to a process flaw. My argument is that Boost aims for well designed generic libraries (among other things), and there are at least two competent/expert authors in their respective application domains who want to propose a library. For several years they have been advancing/iterating with more or less input from the community, but still as separate libraries (everything is ok at this point, although it probably would have been better to cooperate in this case), and then:
- the generic library completely changes its scope (reduces to specific algorithms), has a non-consensus review and is accepted (and this would also be ok if there hadn't been community involvement toward a generic library that covers 2D geometry and that can probably incorporate all the algorithms). This is what's not logical or good!
You keep saying it was non-consensus. How many yes votes does it take to count as consensus for you? I'm from the US, and 60% is called a supermajority in our politics and is enough to override even the strongest opposition. (I have no idea where you are from, and choose not to assume any location for you.) If the results need to be unanimous, then we should be overturning most Boost reviews. Instead, I prefer to trust the judgment of the review manager when there is contention. If you wish to show that the result of the review was incorrect and that there are show-stopper issues with the Polygon library that make it unsuitable, please provide those focused technical arguments so the members of Boost can weigh them on their merits. However, what I see so far is a collection of unstructured emotional appeals that include gaps and factual inconsistencies. I still see no reason to overturn the decision of the manager. The existence of another library is not a persuasive technical argument in this case, nor is the name change for the Polygon library. I have explained why above, as well as in other responses.
I see three types of libraries:
1) Technically superior or high-quality solutions that provide a specific benefit - This includes early Boost libraries that even end up contributing to the standard library
Futures is contributing to the standard library, and it is a recent library. Hopefully, we have not run out of possible standard library ideas.
2) Multiple approaches make sense - This makes sense for some language paradigms (Lambda-Phoenix, Spirit-Spirit II, ...)
This seems like an artificial category that exists only so you can say it is different from 3). How do you know that different approaches make sense for Lambda/Phoenix (which, as I pointed out, are merging to become one approach that carries the benefits of both earlier approaches) but not for Polygon/GGL? What is the technical and design-based difference that lets you make this distinction? By the way, Spirit 2 is the successor and replacement for Spirit 1, not a separate and parallel approach. Some legacy code is expected to keep using Spirit 1, but I believe the suggestion of the Spirit developers would be to prefer Spirit 2.
3) Generic libraries useful across multiple application domains (the current case)
....
Graphs - BGL
Networking - asio
Images - GIL
Geometry - GGL
....
All of Boost strives to provide generic libraries useful across multiple application domains, so this also seems like a poor abstraction.
Goals: generality, performance, flexibility, extensibility to multiple application domains, compatibility
The important point for most libraries in the third group is to actually have a setup where people can contribute algorithms and the library can evolve. It's also key to look at competing libraries!!
Yes, a willingness to accept useful input from others is good. However, Boost has never required this, and some developers have been almost unresponsive when offered outside assistance with providing things like new algorithms. So, I don't think it is an important point for Boost, so far. Looking at other implementations of the same ideas is always a good idea, especially during reviews. In the case of the Polygon review, some of this was done. If you felt more should have been done, you were quite welcome to discuss it during the review. The discussion was lively, and I saw no examples of useful comments being ignored. However, that review completed more than 2 months ago, so we can't go back in time and add new discussions. Therefore, other implementations are now pertinent as a way to suggest improvements to the accepted library (and I'm sure Luke would be happy to talk to you about ways to make his library better, though that does not mean he will just do whatever you say), or because they clarify a technical point that shows an unacceptable flaw in the library that can't be readily fixed.
CGAL, which is focused on computational geometry and which Fernando knows well, ends its philosophy page with this interesting text.
http://www.cgal.org/philosophy.html
--------------------------------------------------
Beyond Robustness
Let us conclude by pointing out that guaranteed robustness is not the only (but probably the most important) aspect in which CGAL makes a difference. Another major feature of CGAL is its flexibility. CGAL closely follows the generic programming approach of the C++ Standard Template Library. This for example means that you can feed most CGAL algorithms with your own data: instead of converting them to some CGAL format, you can adapt the algorithm to work directly with your data.
Last but not least, CGAL's range of functionality is by now very large, and it's still growing. CGAL offers solutions for almost all basic (and a lot of advanced) problems in computational geometry. CGAL is an open source project with a large number of developers (eventually, you might become one of them). Many of us have tight connections to computational geometry research, and to application domains that involve geometric computing. It is this combination of expertise that we believe makes CGAL unique.
--------------------------------------------------
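To make the "adapt the algorithm to work directly with your data" point concrete, here is a minimal sketch of that traits-based adaptation style. The names are illustrative only, not CGAL's actual API: a generic algorithm reads coordinates through a traits class that the user specializes for a pre-existing point type, so no conversion to a library format is needed.

#include <cmath>
#include <iostream>

// Illustrative traits-based adaptation (names are mine, not CGAL's API).
// The generic algorithm never sees a concrete point class, only a traits
// template that users specialize for their own data types.
template <typename Point>
struct point_traits;   // primary template intentionally left undefined

// A user's pre-existing type, in its own format.
struct my_point { double coords[2]; };

// Adapting my_point to the generic interface, without converting it.
template <>
struct point_traits<my_point>
{
    static double x(const my_point& p) { return p.coords[0]; }
    static double y(const my_point& p) { return p.coords[1]; }
};

// A generic algorithm written only against the traits.
template <typename Point>
double distance(const Point& a, const Point& b)
{
    typedef point_traits<Point> t;
    const double dx = t::x(a) - t::x(b);
    const double dy = t::y(a) - t::y(b);
    return std::sqrt(dx * dx + dy * dy);
}

int main()
{
    my_point a = { { 0.0, 0.0 } };
    my_point b = { { 3.0, 4.0 } };
    std::cout << distance(a, b) << "\n";   // prints 5
}

(Both Polygon and GGL use traits mechanisms in this same spirit, for what it's worth.)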
I'm not sure what your goal is here. Yes, CGAL strives to be a very good geometry library. The team wants it to be generic, broad and robust. However, a review where many people were quite conscious of CGAL came to the conclusion that Polygon was a worthy addition to Boost. How does this philosophy text matter to that?

John