[Review] GGL review starts today, November 5th

Hi all,

The formal review of the Generic Geometry Library (GGL) starts today, November 5, 2009, and will finish November 15, 2009. GGL is being developed by Barend Gehrels, Bruno Lalande and Mateusz Loskot, with substantial input from this Boost mailing list.

------------------------------------------------

About the library:

GGL defines concepts for geometries and implements algorithms on such geometries. GGL is header-only and can be applied in any software where geometry plays a small or a large role. Users of the library can use just one function, such as distance:

    int a[2] = {1,1};
    int b[2] = {2,3};
    double d = ggl::distance(a, b);

Library users can also use the library in combination with std::vector, boost::tuples and Boost.Range, for example:

    std::vector<boost::tuple<double, double, double> > line;
    line.push_back(boost::make_tuple(1, 2, 3));
    line.push_back(boost::make_tuple(4, 5, 6));
    line.push_back(boost::make_tuple(7, 8, 9));
    double length = ggl::length(line);

GGL can also be used in combination with custom or legacy geometries, adapting them to the library by specializing traits classes or using registration macros.

Formally, GGL contains a dimension-agnostic, coordinate-system-agnostic and scalable kernel, based on concepts, meta-functions and tag dispatching. On top of that kernel, algorithms are built: area, length, perimeter, centroid, convex hull, intersection (clipping), within (point in polygon), distance, envelope (bounding box), simplify, transform, convert, and more. The library is also designed to support high-precision arithmetic types, such as GMP.

GGL might be used in all domains where geometry plays a role: mapping and GIS, gaming, computer graphics and widgets, robotics, astronomy... The core is designed to be as generic as possible and to support those domains. However, for now the development has been mostly GIS-oriented.

The GGL team proposes an extension model, very similar to the one used by GIL. The proposal as it is in the Boost Sandbox includes three extensions:

- SVG, because it is used in the samples and to generate the documentation
- WKT, because it is used in the tests
- the geographic coordinate system, because it is used in some of the examples, also showing how other coordinate systems can be implemented

There are more extensions, not included now, among others:

- Spatial index
- Map projections

The proposed GGL has seen four previews, many discussions and many changes in design and implementation based on those discussions. Previews were published on this list in January '08, March '08, October '08 and February '09.

GGL can be found in the sandbox at https://svn.boost.org/svn/boost/sandbox/ggl/formal_review, including source code, examples, unit tests and documentation. Documentation can also be found online: http://geometrylibrary.geodan.nl/formal_review

-------------------------------------------------------

Everybody on this list is invited to participate in this formal review. I hope to see your review of this library, including your vote, and I welcome your participation in the discussions on the Boost mailing list.

Please always state in your review whether you think the library should be accepted as a Boost library. Additionally, please consider giving feedback on the following general topics:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

Regards
Hartmut
Review Manager

-------------------
Meet me at BoostCon
http://boostcon.com

Hartmut Kaiser wrote:
Please always state in your review whether you think the library should be accepted as a Boost library.
Yes. It is mature, refined, has a user base (including a commercial product - a good sign), and is well documented. (Although Boost has recently accepted another somewhat similar library, I have no hesitation in recommending accepting both. They are sufficiently orthogonal - one very generic, one problem focussed. Users can decide which suits them best.)
Feedback on the following general topics:
- What is your evaluation of the design?
Generic. Extendable.
- What is your evaluation of the implementation?
Looks nice (and this usually indicates quality code!). The test package looks plausible.
- What is your evaluation of the documentation?
Excellent. Looks very nice. Only an index is missing - but the extensive Doxygen documentation, including Doxygen comments, is far above what passes for documentation in many packages. There are many fine examples. I noted that only VS 2008 Express edition was mentioned. I feel sure that someone has used the 'Standard' version with full optimisation. And perhaps the VS 2010 beta? The latter may be especially useful if it overcomes the difficulty Intellisense has in handling such a complex package without its intelligence circuits melting ;-)
- What is your evaluation of the potential usefulness of the library?
Invaluable in many problem domains.
- Did you try to use the library? No.
- How much effort did you put into your evaluation?
A quick read of documentation and glance at examples.
- Are you knowledgeable about the problem domain?
Negligibly.

Paul

---
Paul A. Bristow
Prizet Farmhouse, Kendal, UK LA8 8AB
+44 1539 561830, mobile +44 7714330204
pbristow@hetp.u-net.com

Hi Paul, Thanks for reviewing, and for your positive reaction to our library.
I noted that only VS 2008 Express edition was mentioned. I feel sure that someone has used the 'Standard' version with full optimisation. And perhaps the VS 2010 beta? The latter may be especially useful if it overcomes the difficulty Intellisense has in handling such a complex package without its intelligence circuits melting ;-)
Yes, until now we only used VS 2005 and 2008 (both Express) (besides gcc). I'll ask around for other versions as well, and try 2010. If anyone on this list tries, feedback is welcome. Regards, Barend

Barend Gehrels wrote:
Hi Paul,
Thanks for reviewing, and for your positive reaction to our library.
I noted that only VS 2008 Express edition was mentioned. I feel sure that someone has used the 'Standard' version with full optimisation. And perhaps the VS 2010 beta? The latter may be especially useful if it overcomes the difficulty Intellisense has in handling such a complex package without its intelligence circuits melting ;-)
Yes, until now we only used VS 2005 and 2008 (both Express) (besides gcc). I'll ask around for other versions as well, and try 2010. If anyone on this list tries, feedback is welcome.
Barend,

With GGL, I use the Visual C++ compiler from Visual Studio 2005 Professional. I also have access to Visual Studio Team System 2008 and build GGL with it from time to time. Actually, I would expect GGL to build fine with Visual C++ 8.0 or later, and I would not be surprised if it built with Visual C++ 7.1 too. However, I doubt it can compile with 7.0 or earlier.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
Charter Member of OSGeo, http://osgeo.org

Hi, this is not a review. Note that I have only read the introduction of the documentation so far, and I find it quite clear. Some words about SFINAE and partial specialization: at the end of the introduction you say that
This SFINAE: * can be done on functions but not on structs (meta-functions). So the coordinate_type meta-function would still have to use tag dispatching"
First, for structs (meta-functions) you can apply partial template specialization, so there is no need for SFINAE. And even if you needed SFINAE, if I'm not wrong enable_if can also be applied to structs. From the enable_if documentation:

"Enabling template class specializations. Class template specializations can be enabled or disabled with enable_if. One extra template parameter needs to be added for the enabler expressions. This parameter has the default value void. For example:

    template <class T, class Enable = void> class A { ... };
    template <class T> class A<T, typename enable_if<is_integral<T> >::type> { ... };
    template <class T> class A<T, typename enable_if<is_float<T> >::type> { ... };"
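A self-contained sketch of that mechanism (the class A and the name() members are only illustrative, not taken from GGL or the enable_if documentation):

    #include <boost/utility/enable_if.hpp>
    #include <boost/type_traits/is_integral.hpp>
    #include <boost/type_traits/is_float.hpp>

    template <class T, class Enable = void>
    struct A { static const char* name() { return "primary template"; } };

    // chosen for int, long, ... : the enabler substitutes to void
    template <class T>
    struct A<T, typename boost::enable_if<boost::is_integral<T> >::type>
    { static const char* name() { return "integral specialization"; } };

    // chosen for float, double, long double
    template <class T>
    struct A<T, typename boost::enable_if<boost::is_float<T> >::type>
    { static const char* name() { return "floating-point specialization"; } };

    // A<int>::name()    -> "integral specialization"
    // A<double>::name() -> "floating-point specialization"
    // A<void*>::name()  -> "primary template"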
*gives often compiler troubles and headaches: if a user makes an error somewhere, the compiler will not select any of the methods, and/or it will give completely incomprehensible error listings, just because of this SFINAE"
I thought that this was an advantage, not a liability. The message is short: method not found. What errors do you get when a user makes an error somewhere with tag dispatching? Are they clearer?
* does not support partial specializations because it is a function. The tag-dispatching function is of course also not supporting that, but it forwards its call to the dispatch struct where partial specializations (and member template functions) are possible. The SFINAE could do that as well but then: why not just add one tag more and have tag dispatching instead?"
Could you clarify which tag you add?
* is a trick to deceive the compiler. "As a language behavior it was designed to avoid programs becoming ill-formed" (http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error), while tag dispatching is based on specialization, a core feature of C++ * looks more ugly
You know that this is a matter of taste, and the way you do it forces the user to share your taste.
*is simply not necessary because we have tag dispatching :-)
Not necessary does not mean that it is not good. Recently I added some machinery to take care of ADL in the Boost.Conversion library, based on the Boost.Swap library. The idea is to define a function that calls an internal function, introducing the default behavior via ADL. Thus, when the user has provided functions that can be found by ADL, they take precedence as more specific; when none is found, the introduced default behavior takes over. Here is how I have defined convert_to:

    namespace boost {
        // trick to ensure specialization is always possible
        namespace dummy {
            template <typename T> struct base_tag {};
            template <typename T> struct type_tag : public base_tag<T> {};
        }

        // default behavior, using the specialized tag type_tag
        namespace conversion {
            namespace partial_specialization_workaround {
                template < typename To, typename From >
                struct convert_to {
                    inline static To apply(const From& val) {
                        return To(val);
                    }
                };
            }

            template < typename To, typename From >
            To convert_to(const From& val, dummy::type_tag<To> const&) {
                return conversion::partial_specialization_workaround
                    ::convert_to<To, From>::apply(val);
            }
        }

        // specialized implementation
        namespace conversion_impl {
            template <typename Target, typename Source>
            Target convert_to_impl(Source const& from) {
                using namespace boost::conversion; // use boost::conversion::convert_to if ADL fails
                return convert_to(from, boost::dummy::type_tag<Target>());
            }
        }

        // generic function calling into the ADL introduction
        template <typename Target, typename Source>
        Target convert_to(Source const& from,
            boost::dummy::base_tag<Target> const& p = boost::dummy::base_tag<Target>()) {
            return conversion_impl::convert_to_impl<Target>(from);
        }
    }

Note that by using the dummy trick we ensure that there are no infinite cycles. The default behavior could correspond to what you have already done. If there is nothing wrong with my design, do you think that this could also be applied to the GGL library, at least for the concepts? Good work anyway :)

Best,
Vicente
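P.S. A short usage sketch of the above. The celsius/fahrenheit types and their overload are hypothetical, just to show a user-supplied ADL overload taking precedence over the default:

    namespace user {
        struct celsius { double value; };
        struct fahrenheit { double value; };

        // found by ADL on its arguments; preferred over the default template
        fahrenheit convert_to(celsius const& c,
            boost::dummy::type_tag<fahrenheit> const&)
        {
            fahrenheit f = { c.value * 9 / 5 + 32 };
            return f;
        }
    }

    // double d = boost::convert_to<double>(3);    // default behavior: To(val)
    // user::celsius c = { 100.0 };
    // user::fahrenheit f =
    //     boost::convert_to<user::fahrenheit>(c); // 212, via the ADL overload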

Hi Vicente, Thanks for diving into our library.
Note that I have only read the introduction of the documentation so far, and I find it quite clear.
Good to hear.
Some words about SFINAE and partial specialization:
I've no problem rephrasing the documentation section about SFINAE, or even omitting it. The section documents our choice for tag dispatching, but it might not be necessary to do so in this detail. I also have no problem with SFINAE in general; it is just a preference for usage in more complex libraries like GGL. We've used SFINAE in the past, for about half a year, in GGL. However, we had many problems with it, getting it all right and compiling on all compilers. We especially had trouble with the Boost concept checking in combination with SFINAE (maybe I should add that). Dave Abrahams wrote about this (I quote here from a mail in January 2009):
Barend: (...) This works perfectly and distinguishes all geometry types at compile time, as long as the library or the developer provides the meta function geometry_type<T>::type which is currently implemented as an enumeration of TYPE_POINT, etc.
Dave: Seems like you might be better off using tag dispatching...
We went over to tag dispatching and all those problems disappeared. Maybe I should add (if the section is not going to be omitted) that the combination of SFINAE and the BCCL via BOOST_CONCEPT_REQUIRES was quite difficult. I just rechecked some things and I think this is still the case. I checked using a small source file with things like "apple", "pear" and "banana", so you'll see them in my answers.
[snipped] And even if you needed SFINAE, if I'm not wrong enable_if can also be applied to structs. From the enable_if documentation: "Enabling template class specializations. Class template specializations can be enabled or disabled with enable_if. One extra template parameter needs to be added for the enabler expressions. This parameter has the default value void. For example: template <class T, class Enable = void> class A { ... };
template <class T> class A<T, typename enable_if<is_integral<T> >::type> { ... };
template <class T> class A<T, typename enable_if<is_float<T> >::type> { ... };"
That is indeed possible; I just tried it with enable_if. But, technically speaking, I don't think this is SFINAE. I think it is partial specialization, where the specialization is done on enable_if<...>::type: matching candidates are found, and if none is found the unspecialized class is used. We don't have the real "substitution failure" here, where one or more overloads are discarded from the candidate set because substitution fails. So these classes with enable_if work in about the same way as tag dispatching does, though no tag is necessary. But here too I prefer tag dispatching, because (my personal opinion) I find that it results in more readable code.
*gives often compiler troubles and headaches: if a user makes an error somewhere, the compiler will not select any of the methods, and/or it will give completely incomprehensible error listings, just because of this SFINAE"
I thought that this was an advantage, not a liability. The message is short: method not found. What errors do you get when a user makes an error somewhere with tag dispatching? Are they clearer?
The messages you get depend on the type of error, in both cases. When I try to use a function based on SFINAE and there is no match, the message I get is:

- from MSVC: "Failed to specialize function template...", for all overloads
- from gcc: "no matching function for call to..." (one message)

With tag dispatching you'll get:

- MSVC: 'type' : is not a member of 'tag<T>' with [T=banana]; gcc: similar

or (if the banana-tag is implemented):

- MSVC: 'apply' : is not a member of 'dispatch<T>' with [T=tag<banana>::type]
- gcc: 'apply' is not a member of 'dispatch<banana_tag>'

SFINAE is based on overloads, and the problem I encountered with it (and with its messages) is that in case of errors I often don't understand where the error originates. I remember having to rename all overloads (of e.g. "eat") several times, to eat1, eat2, eat3, etc., to see what was actually going wrong and why the overload was not called in that specific case. With tag dispatching you can also get pages of messages, but it is clearer where they come from, e.g. a missing specialization or a wrong input type.
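For reference, the tag-dispatching skeleton I'm testing with looks roughly like this (a minimal sketch with illustrative fruit types, not the actual GGL code):

    struct apple {};
    struct pear {};
    struct banana {};

    struct apple_tag {};
    struct pear_tag {};

    // the library (or the user) specializes "tag" per type; note: no banana yet
    template <typename T> struct tag {};
    template <> struct tag<apple> { typedef apple_tag type; };
    template <> struct tag<pear>  { typedef pear_tag type; };

    namespace dispatch {
        template <typename Tag> struct eat {};
        template <> struct eat<apple_tag>
        { template <typename T> static void apply(T const&) { /* apple impl */ } };
        template <> struct eat<pear_tag>
        { template <typename T> static void apply(T const&) { /* pear impl */ } };
    }

    // the one free function: forwards to the dispatch struct
    template <typename T>
    void eat(T const& fruit)
    {
        dispatch::eat<typename tag<T>::type>::apply(fruit);
    }

    // eat(apple());  // compiles
    // eat(banana()); // error: 'type' is not a member of 'tag<banana>'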
* does not support partial specializations because it is a function. The tag-dispatching function is of course also not supporting that, but it forwards its call to the dispatch struct where partial specializations (and member template functions) are possible. The SFINAE could do that as well but then: why not just add one tag more and have tag dispatching instead?"
Could you clarify which tag you add?
If I understand your question correctly: we're adding a geometry tag, so for a point it specializes to point_tag, etc. What is meant by this section is that for some cases the code is similar. For example, in the "num_points" algorithm the number of points of a linestring is counted in the same way as the number of points of a linear ring. The dispatch struct can specialize partially on this similarity. I now realize, by just doing some rechecking, that with SFINAE you can do the same, using SFINAE on a specific property to define an overload which applies to more than one geometry. So this sentence can probably go; it is possible in both cases.
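To make "specialize partially on this similarity" concrete, a hedged sketch (the tag names and the dispatch layout are illustrative, not the exact GGL declarations):

    #include <cstddef>
    #include <boost/range.hpp>

    struct linestring_tag {};
    struct ring_tag {};

    namespace dispatch {
        template <typename Tag, typename Geometry> struct num_points;

        // one partial specialization serves every range-like geometry
        template <typename Geometry>
        struct num_points<linestring_tag, Geometry>
        {
            static std::size_t apply(Geometry const& g)
            {
                return boost::size(g);
            }
        };

        // a ring counts its points the same way: reuse by inheritance
        template <typename Geometry>
        struct num_points<ring_tag, Geometry>
            : num_points<linestring_tag, Geometry>
        {};
    }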
* is a trick to deceive the compiler. "As a language behavior it was designed to avoid programs becoming ill-formed" (http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error), while tag dispatching is based on specialization, a core feature of C++ * looks more ugly
You know that this is a matter of taste, and the way you do it forces the user to share your taste.
I can rephrase this. What I mean is that a function

    template <typename T>
    void eat(T const& fruit)
    {
        // tag dispatch on apple-tag
    }

looks better than:

    template <typename T>
    void eat(T const& fruit
    #ifndef DOXYGEN_SHOULD_SKIP_THIS
        , typename boost::enable_if<typename is_apple<T>::type>::type* = NULL
    #endif
        )
    {
        // (call) apple implementation
    }

Maybe I should state "makes the main free function declarations shorter" to be more objective.
*is simply not necessary because we have tag dispatching :-)
Not necessary does not mean that it is not good.
Right, I agree. I still find (but that is indeed an opinion) that tag dispatching results in clean and readable code and causes less compiler trouble. Besides that, as far as I can tell, in combination with BOOST_CONCEPT_REQUIRES a solution based on SFINAE alone is not possible. This set does not compile:

    template <typename T>
    BOOST_CONCEPT_REQUIRES( ((concept::Apple<T>)), (void) )
    eat(T const& fruit,
        typename boost::enable_if<typename is_apple<T>::type>::type* = NULL)
    {...}

    template <typename T>
    BOOST_CONCEPT_REQUIRES( ((concept::Pear<T>)), (void) )
    eat(T const& fruit,
        typename boost::enable_if<typename is_pear<T>::type>::type* = NULL)
    {...}

The messages are: 'sourness' : is not a member of 'pear', or in gcc: no type named 'sourness' in struct pear. But the expected type "sourness" was defined for apple in this case, and checked by the apple concept... the code IS right... The combinations are listed in the errors (so apple/pear, pear/apple); if we added more fruit we would get many more. Using tag dispatching it is not possible to phrase it like this, because there is only one free function; we cannot check the Apple concept there. But concepts can be dispatched as well, resulting in:

    template <typename T>
    BOOST_CONCEPT_REQUIRES( ((dispatched_concept<typename tag<T>::type, T>)), (void) )
    eat(T const& fruit)
    {...}

and this compiles. Tag dispatching of the concept, or a solution using enable_if in a struct, can also be used in the SFINAE solution, but then it is not pure SFINAE (if I'm right); it is at least partly based on tag dispatching or other partial specializations. But OK, you can combine SFINAE and tags, of course that is possible. By the way, we're currently using BOOST_CONCEPT_ASSERT (where concept checking is dispatched), but as found out here, we have the choice.
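A hedged sketch of what such a dispatched concept can look like, following the fruit example rather than the actual GGL concepts (concept::Apple and concept::Pear are assumed to be ordinary BCCL concept classes):

    #include <boost/concept_check.hpp>
    #include <boost/concept/requires.hpp>

    // primary template: no requirements for unknown tags
    template <typename Tag, typename T>
    struct dispatched_concept {};

    // per-tag specializations inherit the real concept checks
    template <typename T>
    struct dispatched_concept<apple_tag, T> : concept::Apple<T> {};

    template <typename T>
    struct dispatched_concept<pear_tag, T> : concept::Pear<T> {};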
[snipped] Do you think that this could also be applied to the GGL library, at least for the concepts?
Interesting, I'll react on the convert_to in another e-mail later. Regards, Barend

Hi,

Barend Gehrels wrote:
Hi Vicente,
Thanks for diving into our library.
You are welcome.
Note that I have only read the introduction of the documentation so far, and I find it quite clear.
Good to hear.
Some words about SFINAE and partial specialization:
I've no problem rephrasing the documentation section about SFINAE, or even omitting it. The section documents our choice for tag dispatching, but it might not be necessary to do so in this detail. I also have no problem with SFINAE in general; it is just a preference for usage in more complex libraries like GGL.
We've used SFINAE in the past, for about half a year, in GGL. However, we had many problems with it, getting it all right and compiling on all compilers. We especially had trouble with the Boost concept checking in combination with SFINAE (maybe I should add that). Dave Abrahams wrote about this (I quote here from a mail in January 2009):
Barend: (...) This works perfectly and distinguishes all geometry types at compile time, as long as the library or the developer provides the meta function geometry_type<T>::type which is currently implemented as an enumeration of TYPE_POINT, etc.
Dave: Seems like you might be better off using tag dispatching...
We went over to tag dispatching and all those problems disappeared.
Maybe I should add (if the section is not going to be omitted) that the combination of SFINAE and the BCCL via BOOST_CONCEPT_REQUIRES was quite difficult. I just rechecked some things and I think this is still the case. I checked using a small source file with things like "apple", "pear" and "banana", so you'll see them in my answers.
If there is a clear justification that SFINAE and BCCL do not match, this should be added to the rationale. But don't we use SFINAE to support concepts? I don't see the need to combine them. Remember that there was a Concept Traits library that was intended to replace the BCCL. The library was abandoned because C++0x was expected to include concepts, and because some concepts couldn't be managed by traits. Maybe this library should be relaunched, as concepts are not part of C++0x now.
[snipped] And even if you needed SFINAE, if I'm not wrong enable_if can also be applied to structs. From the enable_if documentation: "Enabling template class specializations. Class template specializations can be enabled or disabled with enable_if. One extra template parameter needs to be added for the enabler expressions. This parameter has the default value void. For example: template <class T, class Enable = void> class A { ... };
template <class T> class A<T, typename enable_if<is_integral<T> >::type> { ... };
template <class T> class A<T, typename enable_if<is_float<T> >::type> { ... };"
That is indeed possible; I just tried it with enable_if.

But, technically speaking, I don't think this is SFINAE. I think it is partial specialization, where the specialization is done on enable_if<...>::type: matching candidates are found, and if none is found the unspecialized class is used. We don't have the real "substitution failure" here, where one or more overloads are discarded from the candidate set because substitution fails.

So these classes with enable_if work in about the same way as tag dispatching does, though no tag is necessary. But here too I prefer tag dispatching, because (my personal opinion) I find that it results in more readable code.
*gives often compiler troubles and headaches: if a user makes an error somewhere, the compiler will not select any of the methods, and/or it will give completely incomprehensible error listings, just because of this SFINAE"
I thought that this was an advantage, not a liability. The message is short: method not found. What errors do you get when a user makes an error somewhere with tag dispatching? Are they clearer?
The messages you get depend on the type of error, in both cases.

When I try to use a function based on SFINAE and there is no match, the message I get is: from MSVC, "Failed to specialize function template...", for all overloads; from gcc, "no matching function for call to..." (one message).

With tag dispatching you'll get: MSVC: 'type' : is not a member of 'tag<T>' with [T=banana]; gcc: similar
or (if the banana-tag is implemented)
MSVC: 'apply' : is not a member of 'dispatch<T>' with [T=tag<banana>::type] gcc: 'apply' is not a member of 'dispatch<banana_tag>'
This message could be clear to you, but what is the user supposed to do to correct the error?
SFINAE is based on overloads, and the problem I encountered with it (and with its messages) is that in case of errors I often don't understand where the error originates. I remember having to rename all overloads (of e.g. "eat") several times, to eat1, eat2, eat3, etc., to see what was actually going wrong and why the overload was not called in that specific case.

With tag dispatching you can also get pages of messages, but it is clearer where they come from, e.g. a missing specialization or a wrong input type.
* does not support partial specializations because it is a function. The tag-dispatching function is of course also not supporting that, but it forwards its call to the dispatch struct where partial specializations (and member template functions) are possible. The SFINAE could do that as well but then: why not just add one tag more and have tag dispatching instead?"
Could you clarify which tag you add?
If I understand your question correctly: we're adding a geometry tag, so for a point it specializes to point_tag, etc.

What is meant by this section is that for some cases the code is similar. For example, in the "num_points" algorithm the number of points of a linestring is counted in the same way as the number of points of a linear ring. The dispatch struct can specialize partially on this similarity. I now realize, by just doing some rechecking, that with SFINAE you can do the same, using SFINAE on a specific property to define an overload which applies to more than one geometry.

So this sentence can probably go; it is possible in both cases.
I agree.
* is a trick to deceive the compiler. "As a language behavior it was designed to avoid programs becoming ill-formed" (http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error), while tag dispatching is based on specialization, a core feature of C++ * looks more ugly
You know that this is a matter of taste, and the way you do it forces the user to share your taste.
I can rephrase this. What I mean is that a function
    template <typename T>
    void eat(T const& fruit)
    {
        // tag dispatch on apple-tag
    }
looks better than:
    template <typename T>
    void eat(T const& fruit
    #ifndef DOXYGEN_SHOULD_SKIP_THIS
        , typename boost::enable_if<typename is_apple<T>::type>::type* = NULL
    #endif
        )
    {
        // (call) apple implementation
    }
Why can the user not define eat in the namespace where the apple class is defined?

    namespace apple_ns {
        void eat(apple const& fruit) {
            // apple implementation
        }
    }

Do you see a problem with this approach?
Maybe I should state "makes the main free function declarations shorter" to be more objective.
[snipped]
[snipped] Do you think that this could also be applied to the GGL library, at least for the concepts?
Interesting, I'll react on the convert_to in another e-mail later.
Happy to hear, Vicente

vicente.botet wrote:
If there is a clear justification that SFINAE and BCCL do not match, this should be added to the rationale. But don't we use SFINAE to support concepts? I don't see the need to combine them. Remember that there was a Concept Traits library that was intended to replace the BCCL. The library was abandoned because C++0x was expected to include concepts, and because some concepts couldn't be managed by traits. Maybe this library should be relaunched, as concepts are not part of C++0x now.
I'm really fond of this idea. What's the current status of the Concept Traits library, though, and where are the source files located?

--
___________________________________________
Joel Falcou - Assistant Professor
PARALL Team - LRI - Universite Paris Sud XI
Tel : (+33)1 69 15 66 35

Hi,

joel wrote:
vicente.botet wrote:
If there is a clear justification that SFINAE and BCCL do not match, this should be added to the rationale. But don't we use SFINAE to support concepts? I don't see the need to combine them. Remember that there was a Concept Traits library that was intended to replace the BCCL. The library was abandoned because C++0x was expected to include concepts, and because some concepts couldn't be managed by traits. Maybe this library should be relaunched, as concepts are not part of C++0x now.
I'm really fond of this idea. What's the current status of the Concept Traits library, though, and where are the source files located?
As always, you can check the LibrariesUnderConstruction wiki page; I try to maintain there all the libraries related to Boost. https://svn.boost.org/trac/boost/wiki/LibrariesUnderConstruction#Boost.Conce...

Best,
Vicente

Hi,
If there is a clear justification that SFINAE and BCCL do not match, this should be added to the rationale. But don't we use SFINAE to support concepts? I don't see the need to combine them. Remember that there was a Concept Traits library that was intended to replace the BCCL. The library was abandoned because C++0x was expected to include concepts, and because some concepts couldn't be managed by traits. Maybe this library should be relaunched, as concepts are not part of C++0x now.
I agree that it would be very good to consider relaunching it, but I don't know the details of that library. However, we currently have to use (or at least, I was encouraged to use) BCCL. I don't claim that there is no way to combine it with SFINAE-based code; I explained that BOOST_CONCEPT_REQUIRES does not work with SFINAE overloads. It can be added to the rationale. That BCCL uses SFINAE internally (I didn't know) is no problem for me; it is a different subject.
The messages you get depend on the type of error, in both cases.

When I try to use a function based on SFINAE and there is no match, the message I get is: from MSVC, "Failed to specialize function template...", for all overloads; from gcc, "no matching function for call to..." (one message).

With tag dispatching you'll get: MSVC: 'type' : is not a member of 'tag<T>' with [T=banana]; gcc: similar
or (if the banana-tag is implemented)
MSVC: 'apply' : is not a member of 'dispatch<T>' with [T=tag<banana>::type] gcc: 'apply' is not a member of 'dispatch<banana_tag>'
This message could be clear to you, but what is the user supposed to do to correct the error?
Messages originating from template code are often difficult, I agree, and if you know the library they are easier to solve, I agree. Let me try to state it differently. With SFINAE the real error is hidden behind the phrase "failed to specialize". The compiler discards all overloads because of an error somewhere, and you get this message with no clue how to go on. All overloads are gone; the compiler is not wrong, there is an error somewhere, but the only sensible message it can give is something like "failed to specialize" or "no matching function call". With tag dispatching you get the real error message. That can also be difficult, but the message(s), sometimes a whole list, at least give a clue about what's wrong; in this case: add the banana-tag, or add an implementation for banana. The usage of concepts should reduce the length of the list and give a clearer error message. So the essence is: compiler errors in code based on tag dispatching are easier to find than compiler errors in code based on SFINAE, because the SFINAE case is based on discarding overloads, and meaningful error messages are discarded along with them.
I can rephrase this. What I mean is that a function
    template <typename T>
    void eat(T const& fruit)
    {
        // tag dispatch on apple-tag
    }
looks better than:
    template <typename T>
    void eat(T const& fruit
    #ifndef DOXYGEN_SHOULD_SKIP_THIS
        , typename boost::enable_if<typename is_apple<T>::type>::type* = NULL
    #endif
        )
    {
        // (call) apple implementation
    }
Why can the user not define eat in the namespace where the apple class is defined?

    namespace apple_ns {
        void eat(apple const& fruit) {
            // apple implementation
        }
    }
Do you see a problem with this approach?
No, but I don't see the relation to tag dispatching vs. SFINAE.

Regards, Barend

vicente.botet wrote:
If there is a clear justification that SFINAE and BCCL do not match, this should be added to the rationale. But don't we use SFINAE to support concepts? I don't see the need to combine them. Remember that there was a Concept Traits library that was intended to replace the BCCL. The library was abandoned because C++0x was expected to include concepts, and because some concepts couldn't be managed by traits. Maybe this library should be relaunched, as concepts are not part of C++0x now.
The SFINAE-for-expressions feature should make it possible to keep the current concept checking interface. I have wanted to try working on this for a while, but unfortunately haven't had the time.

Hi Vicente, I promised to react to this part:
Recently I added some machinery to take care of ADL in the Boost.Conversion library, based on the Boost.Swap library. The idea is to define a function that calls an internal function, introducing the default behavior via ADL. Thus, when the user has provided functions that can be found by ADL, they take precedence as more specific; when none is found, the introduced default behavior takes over.
Here is how I have defined convert_to:
    namespace boost {
        // trick to ensure specialization is always possible
        namespace dummy {
            template <typename T> struct base_tag {};
            template <typename T> struct type_tag : public base_tag<T> {};
        }

        // default behavior, using the specialized tag type_tag
        namespace conversion {
            namespace partial_specialization_workaround {
                template < typename To, typename From >
                struct convert_to {
                    inline static To apply(const From& val) {
                        return To(val);
                    }
                };
            }

            template < typename To, typename From >
            To convert_to(const From& val, dummy::type_tag<To> const&) {
                return conversion::partial_specialization_workaround
                    ::convert_to<To, From>::apply(val);
            }
        }

        // specialized implementation
        namespace conversion_impl {
            template <typename Target, typename Source>
            Target convert_to_impl(Source const& from) {
                using namespace boost::conversion; // use boost::conversion::convert_to if ADL fails
                return convert_to(from, boost::dummy::type_tag<Target>());
            }
        }

        // generic function calling into the ADL introduction
        template <typename Target, typename Source>
        Target convert_to(Source const& from,
            boost::dummy::base_tag<Target> const& p = boost::dummy::base_tag<Target>()) {
            return conversion_impl::convert_to_impl<Target>(from);
        }
    }
Note that by using the dummy trick we ensure that there are no infinite cycles. The default behavior could correspond to what you have already done.
If there is nothing wrong with my design, do you think that this could also be applied to the GGL library, at least for the concepts?
I assume this does not have anything to do with SFINAE? I like the idea of this conversion library, and I think it is also applicable to geometry. Actually we have a "convert" function which converts e.g. a box to a polygon (so from min/max it goes to a polygon with the points of the box). So if Boost gets a generic convert_to function, we're happy to support geometries there. After reading your documentation I tried your code on my "fruit" test classes for SFINAE/tag dispatching, and was able to convert apples to pears. I tried it in two ways: once using the partial_specialization_workaround, once overloading convert_to with the dummy tag. I added a namespace to check ADL, and all is working nicely.

To refer to your question, "at least for the concepts": you probably mean the geometry concepts. Adapting geometries to concepts is not done using free functions; it is implemented using traits classes, which should be specialized in namespace ggl::traits. If you refer to the algorithms, like area: yes, I think it could be done like this in GGL, also combined with tag dispatching. But it is currently not done like this. Examples c04_a and c04_b show how functionality can be overridden by specializing either the function or (partly) the dispatch structure. This is all in namespace ggl. You're adding the possibility to specialize in one's own namespace, and I think it would be feasible, though I'm not sure it is really expected.

Regards, Barend

Hartmut Kaiser wrote:
Everybody on this list is invited to participate in this formal review. I hope to see your review of this library, including your vote, and I welcome your participation in the discussions on the Boost mailing list.
Hello, without doing a full review, I have a question. In GGL, coordinate systems are assigned to points. However, one can compute things like length, angle and area given only a proper inner product, without necessarily attaching that operator to each point. It then becomes easy to project some data into different spaces and still be able to run a quick hull. Think of geometric interpretations of non-coordinate data such as dictionaries, medical images, etc. So, my question would be: is it that generic to assume that all points have a coordinate system? Cheers, Rutger

Hi Rutger,
In GGL, coordinate systems are assigned to points. However, one can compute things like length, angle and area given only a proper inner product, without necessarily attaching that operator to each point. It then becomes easy to project some data into different spaces and still be able to run a quick hull. Think of geometric interpretations of non-coordinate data such as dictionaries, medical images, etc.
So, my question would be: is it that generic to assume that all points have a coordinate system?
We've implemented the hull as an "agnostic" strategy because it is not dependent on the coordinate system itself. It is, however, dependent on two underlying strategies, "side" and "compare". So in a Cartesian coordinate system the "side" is implemented differently from how it would be implemented in a spherical coordinate system. The same holds for compare (the spherical compare assumes that the coordinates do not span more than half of the sphere). So yes, for points we need to know in which coordinate system they are placed, in order to select the calculation strategy. Note that the point type itself does not have to carry one, but the Point Concept requires it: using a traits class you can bind a coordinate system to a point type. So if you define a "dictionary coordinate space", where the "side" is probably still defined as it is now for Cartesian in the library (it can just register the same calculation class), you can call the convex hull for it. If that answers your question...

Regards, Barend
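P.S. To make the "bind a coordinate system via a traits class" part concrete, a minimal sketch; the trait names approximate the library's interface and the exact declarations may differ (the get/set access traits are omitted):

    struct my_point { double x, y; };

    namespace ggl { namespace traits {

        template <> struct tag<my_point> { typedef point_tag type; };
        template <> struct coordinate_type<my_point> { typedef double type; };
        template <> struct dimension<my_point> : boost::mpl::int_<2> {};

        // this is where the coordinate system is bound to the point type;
        // cs::cartesian could be swapped for a user-defined "dictionary space"
        template <> struct coordinate_system<my_point> { typedef cs::cartesian type; };

    }}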

Barend Gehrels wrote:
We've implemented the hull as an "agnostic" strategy because it is not dependent on the coordinate system itself. It is, however, dependent on two underlying strategies, "side" and "compare". So in a Cartesian coordinate system the "side" is implemented differently from how it would be implemented in a spherical coordinate system. The same holds for compare (the spherical compare assumes that the coordinates do not span more than half of the sphere).

So yes, for points we need to know in which coordinate system they are placed, in order to select the calculation strategy. Note that the point type itself does not have to carry one, but the Point Concept requires it: using a traits class you can bind a coordinate system to a point type.

So if you define a "dictionary coordinate space", where the "side" is probably still defined as it is now for Cartesian in the library (it can just register the same calculation class), you can call the convex hull for it.
If that answers your question...
Regards, Barend
Hello Barend, thanks for the answer. I guess I am being pedantic :-). The name "geometry" means something like the Wikipedia definition to me: geometry is a part of mathematics concerned with questions of size, shape and relative position of figures, and with properties of space. A coordinate system is a property of a subset of the types of spaces; it is just a way to express where points are situated in a certain space. Distances are properties of a space, not of a coordinate system. One of the consequences is that distances between points in a Euclidean space expressed in Cartesian or polar (or ...) coordinates are, by definition, the same. This is probably one of the pieces that is causing my confusion. Cheers, Rutger

Hartmut Kaiser wrote:
The formal review of the Generic Geometry Library (GGL) starts today, November 5, 2009 and will finish November 15, 2009. GGL is being developed by Barend Gehrels, Bruno Lalande and Mateusz Loskot, with substantial input from this Boost mailing list.
This library looks like a very good example of generic interfaces for geometry objects and several well-known and useful geometry algorithms. This is not a review. I found it difficult to answer my questions about the library based on the documentation. I am having difficulty coming up with a complete list of features for the library; it seems there is more implemented than documented. Specifically, I'd like to know more about the ggl::algorithms::overlay namespace, which appears to contain algorithms of interest to me but is not documented. I'll ask my questions on the list to help me identify which header files I should look at to understand the interfaces and implementation of interesting algorithms in the GGL.

1. Does the library support Boolean operations between pairs of polygons other than intersection: union, XOR, AND NOT?
2. If so, does the library support the union of more than two overlapping polygons simultaneously (three or more mutually overlapping polygons)?
3. Does the library support map overlay operations of two layers? Of more than two layers simultaneously?
4. If the non-self-intersecting invariant is violated at the input of the polygon pair intersection algorithm, what does the algorithm do?
5. Is there a way to check the self-intersecting invariant before attempting an intersection operation?
6. Is there a way to fix a polygon that is self-intersecting to be non-self-intersecting?
7. When intersection points are snapped to the floating point grid at the output of the polygon pair intersection algorithm, edges may move laterally some small distance, which may introduce self-intersection artifacts. How is this handled by the polygon intersection algorithm?
8. What is the algorithmic complexity of the polygon intersection algorithm?
9. Could you please provide the community with a description of the polygon intersection algorithm?
10. Is the polygon-intersects-bounding-box algorithm linear time?
11. When a polygon is intersected against a bounding box, the same problem can happen: self-intersections are introduced in the output polygon(s) when intersection points are snapped to the floating point grid. These artifacts cannot be addressed by a linear time algorithm. Are the output polygons of the polygon-intersects-bounding-box algorithm implemented by this library potentially self-intersecting?
12. Is the distance algorithm implemented for linestring pairs? For polygon pairs? If so, what algorithmic complexity does it have, and can you describe the algorithm for the community?
13. Why is there no custom polygon example in the documentation?
14. Why do you need interior_type for the polygon concept traits? What requirements does the interior_type need to satisfy? Is it expected to be an STL container?
15. What happens if you instantiate the polygon pair intersection algorithm with integer coordinates?
16. What is required for a data type to satisfy the requirements of a coordinate type?
17. Can you tell us more about the support and usage of high-precision numerical data types?

With enough time and effort reading the code and exercising it I can answer all of these questions for myself, but a little help from the library author would be greatly appreciated. If the answers to some of my questions can be found in the documentation, please let me know where. It is unfortunate that Fernando's business success leaves him so little time recently; hopefully he will be able to conduct a review.
I think that as a CGAL author and user of both GPC and CGAL in his consulting work his review of the polygon intersection algorithm in GGL would be insightful. Thanks, Luke

Hi Luke, First, congratulations on the acceptance of your library.
This library looks like a very good example of generic interfaces for geometry objects and several well known and useful geometry algorithms.
[questions snipped]
Thanks for your interest and for reading our documentation. You ask a lot and I'll answer a lot, but give me a day or two. Regards, Barend

Hi Luke, Herewith the answers to (most of) your questions (the rest will come soon). First, I added two additional pages of documentation, to keep this answering mail from getting too long:

- one about intersections (overlay): http://tinyurl.com/ggl-sets
- one with a custom polygon: http://tinyurl.com/ggl-c06

The first one should give the necessary theoretical background and an implementation overview of the overlay algorithms.
1. Does the library support Boolean operations between pairs of polygons other than intersection: union, XOR, AND NOT?
- union (OR): yes, supported
- intersection (AND): yes, supported

We drew the line here: we considered that for most users, most use cases, and for Formal Review purposes this would be enough. However, the other operations are actually already implemented, because it is all the same source code; the doc-page mentioned above gives the necessary background. But there are currently no free functions for (symmetric) difference:

- symmetric difference (XOR): supported by the algorithm, but has to be fitted in; see the doc-page for background
- difference (AND NOT): idem

These are OGC operations (and names), and they are all planned, as we want to support the full OGC set.
2. If so, does the library support the union of more than two overlapping polygons simultaneously (three or more mutually overlapping polygons)?
Similar story: three (or more) polygons are supported by the algorithm itself, but this has to be implemented (see the doc-page) as a separate (or overloaded) function.
3. Does the library support map overlay operations of two layers? Of more than two layers simultaneously?
GGL is geometry-oriented. All algorithms work on one or two geometries, and it is possible and useful for some algorithms to support more than two (e.g. your question 2), but the library is not layer-oriented. Layers can be processed by code using the library, and it is advisable that such code considers the layers' envelopes (bounding boxes) as well.
4. If the non-self-intersecting invariant is violated at the input of the polygon pair intersection algorithm, what does the algorithm do?
This depends on how the polygon self-intersects, and on how the self-intersection interacts with the other polygon. If the self-intersection is untouched by the other polygon, you get exactly that self-intersection back (for a union) or it is discarded (for an intersection). The self-intersections are deliberately not detected, because detecting them would require another call to get_intersection_points, which is the most time-consuming step in the current implementation. If the self-intersection interferes with the other polygon, you can get an invalid polygon back; as the input was already considered invalid, this is to be expected. We researched this recently because it occurred in practice: the algorithm will find an invalid path, detect that, and skip the output built up so far.
5. Is there a way to check the self-intersecting invariant before attempting an intersection operation?
Yes. The function "intersects" can work on one geometry and returns true if it is self-intersecting.
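For example (a sketch; the single-geometry overload is as just described, the rest of the names are illustrative):

    // pre-checking the self-intersection invariant before an overlay operation
    template <typename Polygon>
    bool safe_to_overlay(Polygon const& a, Polygon const& b)
    {
        // OGC-style algorithms assume valid, non-self-intersecting input
        return ! ggl::intersects(a) && ! ggl::intersects(b);
    }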
6. Is there a way to fix a polygon that is self-intersecting to be non-self-intersecting?
This is actually the same story as for (symmetric) difference and for intersection/union of more than two polygons: we have all the tools for this, but it is currently not provided. I mean by this: it is possible to detect self-intersections, and the traversal can walk through them and split the result into an outer polygon and holes. It is not prescribed by the OGC interface (which always assumes valid input). It occurs to me now that we could also add an extra boolean flag to let the algorithm know that self-intersections should be considered as well; that would probably be a sufficient solution.
7. [...] FP
I will come back to FP/precision later.
8. What is the algorithmic complexity of the polygon intersection algorithm?
As the algorithm is split into several parts, the complexity is as well. This is described in more detail on the doc-page.
9. Could you please provide the community with a description of the polygon intersection algorithm?
This is described in the doc-page.
10. Is the polygon-intersects-bounding-box algorithm linear time?
This is described in the doc-page.
11. [...] FP 12. Is the distance algorithm implemented for linestring pairs? For polygon pairs? If so, what algorithmic complexity does it have, and can you describe the algorithm for the community?
Sorry, this had to be postponed. I planned to do it for the review but, you know, all that documentation, those unit tests, examples, etc. It will be there.
13. Why is there no custom polygon example in the documentation?
Good question (as all your questions, by the way), so it's provided now.
14. Why do you need interior_type for the polygon concept traits? What requirements does the interior_type need to satisfy? Is it expected to be an STL container?
- interior_type is there because a polygon may have holes. If library users use polygons without holes, they can take a ring (or, stated more precisely, a type fulfilling the Ring Concept). - interior_type is expected to be a Boost.Range (for a read-only polygon) and to be STL-container-conformant (for a mutable polygon).
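A minimal sketch of a polygon type matching these requirements (my_point and the member names are illustrative; the traits specializations that actually adapt it to the concept are omitted):

    #include <vector>

    struct my_point { double x, y; };
    typedef std::vector<my_point> my_ring;   // could model the Ring Concept

    struct my_polygon
    {
        my_ring outer;                 // the exterior ring
        std::vector<my_ring> inners;   // the holes: a Boost.Range (read-only)
                                       // and an STL container (mutable)
    };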
15. What happens if you instantiate the polygon pair intersection algorithm with integer coordinates?
GGL is designed such that the input types and output types need not be the same. You can therefore, for example, calculate the distance between a point with integer coordinates and a point with double coordinates. This is designed to work everywhere (though it might not yet be applied in all algorithms); in intersection/union it is the case. So the result of the intersection of two integer polygons can be a double polygon. The result of two integer polygons can also be an integer polygon; in that case all intersection points are rounded to integer. It is therefore the same as normal arithmetic: int d = sqrt(2) will deliver 1, double d = sqrt(2) will deliver 1.41...

There is one mechanism more. Strategies can take an optional CalculationType as a template parameter; if that is specified, it is used. You can see this e.g. in the area strategy. It is not yet in the intersection strategy, but the mechanism works like this:

- define a strategy with a double (or long double, or high-precision number) as the last, optional template parameter
- the input is two integer polygons, the output is integer polygons
- during all calculations doubles are used, so intersection points are not rounded (or rounded only to the precision of the calculation type)
- in the very last phase, during output, coordinates are rounded back to integer

Yes, this loses precision, but that is what the library user asks for in this use case.
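The arithmetic analogy above, spelled out (plain C++, nothing GGL-specific):

    #include <cmath>

    // the declared result type, not the input, decides how much precision survives
    int    i = static_cast<int>(std::sqrt(2.0)); // 1         : truncated at the end
    double d = std::sqrt(2.0);                   // 1.41421...: precision kept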
16. [...] FP 17. [...] FP
So sorry for the time it took me to answer, and the FP part will come asap. Regards, Barend

Barend Gehrels wrote:
Hi Luke,
Herewith the answers to (most of) your questions (the rest will come soon).
First, I added two additional pages of documentation, to keep this answering mail from getting too long: - one about intersections (overlay): http://tinyurl.com/ggl-sets
I still don't know what the find_intersections algorithm's complexity is or how it works. What data structure does it currently use for indexing? Based on the documentation I'm guessing you use a 1D spatial index for red-blue overlap detection between monotonic sections. Is that correct? My own implementation uses a 1D spatial index on all edges, so considerable algorithmic advantage is derived from both the red-blue and the monotonic-section optimizations. Do you realize that pathological cases with a large number of 45-degree edges with no intersections between them will result in O(n^2) work, even with a good 2D-spatial-index-based algorithm for finding overlaps, because 2D spatial indexing works on bounding boxes? I was hoping to upgrade my algorithm to the optimal algorithm rather than apply optimizations to the sub-optimal algorithm, but that looks increasingly unlikely now. Holes are obviously associated with their enclosing outer rings by your algorithm, because that is the semantics of the output data structure. My question is: how does your algorithm identify that a hole is enclosed by a polygon? This is something that is prone to pathological performance in cases where the number of holes is very large, so it is an important part of the algorithm. I'll defer discussion of your answers to the self-intersection questions until the precision questions are answered, since these issues are inter-related.
- one with a custom polygon: http://tinyurl.com/ggl-c06
Thank you, that is what I wanted to see.
14. Why do you need interior_type for the polygon concept traits? What requirements does the interior_type need to satisfy? Is it expected to be an STL container?
- interior_type is there because a polygon may have holes. If library users use polygons without holes, they can take a ring (or, stated more precisely, a type fulfilling the Ring Concept). - interior_type is expected to be a Boost.Range (for a read-only polygon) and to be STL-container-conformant (for a mutable polygon).
I'm not too keen on this requirement. It turns out that my own polygon-with-holes type has a private data member which is an STL container that contains the holes, but I don't expose that in the public interface. I don't believe I could adapt my polygon_with_holes_data to your polygon concept without changing its implementation; however, I am certain that I could adapt your polygon data type to my polygon_with_holes_concept. I doubt there are many legacy polygon types that can be successfully adapted to your mutable polygon concept, which calls into question the utility of the polygon concept. I would rather see it done the same way as linestring, where you can either use STL-style push_back and clear or supply your own clear and append traits. Regards, Luke

Hi Luke,

Simonson, Lucanus J wrote:
I still don't know what the find_intersections algorithm's complexity is or how it works. What data structure does it currently use for indexing? Based on the documentation I'm guessing you use a 1D spatial index for red-blue overlap detection between monotonic sections. Is that correct?
As in the documentation: monotonic sections, and the algorithm is named "get_intersection_points". As for how it works: I spent a good day on that page and I hope it is clear enough for the detail needed for a review. If not, I'm sorry to hear that, but I don't know if I can still improve it before the review period ends.
[...] Do you realize that pathological cases with a large number of 45-degree edges with no intersections between them will result in O(n^2) work, even with a good 2D-spatial-index-based algorithm for finding overlaps, because 2D spatial indexing works on bounding boxes? [...]
A large number of edges will result in multiple monotonic sections, as described on the new doc-page, so even then it will not be quadratic in most cases. You're referring (I think) to a very large number of diagonal, parallel edges; indeed, all those bounding boxes will overlap, and I understand most spatial indexes will not help here (did you scan all variants?). As this seems to be problematic for both our libraries, we can discuss it further (though at this moment it is not so convenient for me). Anyway, I think our figures are clear enough; no library we tested even comes close, within several factors. So I would not worry too much about the current performance. If you find a real test case where it is too slow, I'm interested; we're happy to (try to) improve it.
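For concreteness, the kind of pathological input you describe could be generated like this (plain C++, illustrative only):

    #include <vector>

    struct segment { double x1, y1, x2, y2; };

    // n parallel 45-degree segments, each slightly offset from the previous:
    // no two segments intersect, yet every pair of axis-aligned bounding
    // boxes overlaps, so bounding-box pruning rejects nothing -> O(n^2) pairs
    std::vector<segment> pathological(int n)
    {
        std::vector<segment> v;
        for (int i = 0; i < n; ++i)
        {
            segment s = { 0.001 * i, 0.0, 100.0 + 0.001 * i, 100.0 };
            v.push_back(s);
        }
        return v;
    }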
Holes are obviously associated to their enclosing outer rings by your algorithm, because that is the semantics of the output data structure. My question is: how does your algorithm identify that a hole is enclosed by a polygon? This is prone to pathological performance when the number of holes is very large, so it is an important part of the algorithm.
I agree, and it's currently sorted on descending area to narrow down the obvious possibilities. There are alternatives here, such as bboxes, spatial indexes or multiple-point-in-polygon; there is room for research and possible improvement.
- interior_type is there because a polygon may have holes. If library users use polygons without holes, they can take a ring (or, stated more precisely: a type fulfilling the Ring Concept). - interior_type is expected to be a Boost.Range (for a read-only polygon) and to be STL-container-conformant (for a mutable polygon).
I'm not too keen on this requirement. It turns out that my own polygon with holes has a private data member which is an STL container that contains the holes, but I don't expose that in the public interface. I don't believe I could adapt my polygon_with_holes_data to your polygon concept without changing its implementation; however, I am certain that I could adapt your polygon data type to my polygon_with_holes_concept. I doubt there are many legacy polygon types that can be successfully adapted to your mutable polygon concept. This calls into question the utility of the polygon concept. I would rather see it done the same way as linestring, where you can either use STL-style push_back and clear or supply your own clear and append traits.
Also here we're happy to improve it if it is not satisfying your needs; especially if this enhances interoperability we're certainly willing to adapt and fine-tune things. Actually I'm also glad to hear positive feedback about the way we designed it for linestring. Thanks, Barend

Barend Gehrels wrote:
Hi Luke,
Simonson, Lucanus J wrote:
I still don't know what the find_intersections algorithm's complexity is or how it works. What data structure does it currently use for indexing? Based on the documentation I'm guessing you use a 1D spatial index for red-blue overlap detection between monotonic sections. Is that correct?
As in the documentation: monotonic sections, and the algorithm is named "get_intersection_points". How it works: I worked a full day on that page and I hope it is clear enough for the detail needed for a review. If not, I'm sorry to hear that, but I don't know if I can still improve this before the review period ends.
I have to read the code to learn the algorithm...
From get_intersection_points.hpp:

for (typename boost::range_const_iterator<sections1_type>::type
        it1 = sec1.begin();
    it1 != sec1.end();
    ++it1)
{
    for (typename boost::range_const_iterator<sections2_type>::type
            it2 = sec2.begin();
        it2 != sec2.end();
        ++it2)
    {
        if (! ggl::detail::disjoint::disjoint_box_box(
                it1->bounding_box, it2->bounding_box))
        {
            get_ips_in_sections
                <
                    Geometry1,
                    Geometry2,
                    typename boost::range_value<sections1_type>::type,
                    typename boost::range_value<sections2_type>::type,
                    IntersectionPoints
                >::apply(
                    source_id1, geometry1, *it1,
                    source_id2, geometry2, *it2,
                    false,
                    intersection_points, trivial);
        }
    }
}
Your algorithm is: for all red monotonic sections, inspect all blue monotonic sections for bounding-box overlap, and if overlap exists inspect the segments of the monotonic section pair for line intersection and report said intersections. It has O(n^2) runtime complexity wrt. monotonic sections. The number of monotonic sections is in the worst case equal to the number of edges.
[...] Do you realize that pathological cases with a large number of 45-degree edges and no intersections between them will result in O(n^2) work even with a good 2D-spatial-index-based algorithm for finding overlaps, because 2D spatial indexing works on bounding boxes? [...]
A large number of edges will result in multiple monotonic sections, as described in the new doc-page, so even then it will not be quadratic in most cases. You're referring (I think) to a very large number of diagonal parallel edges, and indeed all those bounding boxes will overlap. And I understand most spatial indexes will not help here (did you scan all variants?). As this seems to be problematic for both our libraries we can discuss it further (but at this moment it is not so convenient for me).
My library is not relevant to the discussion. I don't understand what you mean by not convenient. Do you mean it is not convenient to discuss the algorithmic complexity and worst-case inputs for your algorithm during the review, or just that you are preoccupied with other matters?
Anyway, I think our figures are clear enough: no library was found which even comes close, within several factors. So I would not worry too much about its current performance. If you find a real test case where it is too slow, I'm interested; we're happy to (try to) improve it.
My concern about performance is that your benchmarking may not expose performance problems. Your claims about performance are based upon one benchmark (with several minor variations of one of the two input geometries) from which you generalize about performance. The algorithm has a fatal weakness to
Holes are obviously associated to their enclosing outer rings by your algorithm, because that is the semantics of the output data structure. My question is: how does your algorithm identify that a hole is enclosed by a polygon? This is prone to pathological performance when the number of holes is very large, so it is an important part of the algorithm.
I agree, and it's currently sorted on descending area to narrow down the obvious possibilities. There are alternatives here, such as bboxes, spatial indexes or multiple-point-in-polygon; there is room for research and possible improvement.
This is not a complete description of an algorithm. Are you saying that you sort outer rings and inner rings by descending area and then do an all-pairs point-in-polygon check between each pair of outer and inner ring that could possibly have a containment relationship by area (and, I'd guess, bounding box) criteria, to find which outer shells contain which holes? That algorithm is O(n^2) wrt. edges in the pathological case, which is exactly what I feared. This is really the fundamental weakness of the Weiler-Atherton algorithm: it forces handling of hole containment into post-processing. The algorithm is O(n^2) wrt. both monotonic sections and holes. This means that for all inputs with large numbers of holes, or with a low average length of monotonic sections but a large total number of edges, the algorithm will suffer algorithmic (quadratic) slowness. These weaknesses are not exposed in your benchmark testing.
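To make the objection concrete, here is a sketch of such a post-processing step (illustrative only; the types and names are made up, this is not GGL's code). When every hole's bounding box fits inside every shell's bounding box, the filter rejects nothing and the work degenerates to all-pairs point-in-polygon tests:

#include <cstddef>
#include <vector>

struct pt { double x, y; };
struct ring { std::vector<pt> v; double min_x, min_y, max_x, max_y; };

// Standard ray-casting test: is p inside the polygon described by r?
bool point_in_polygon(ring const& r, pt p)
{
    bool in = false;
    for (std::size_t i = 0, j = r.v.size() - 1; i < r.v.size(); j = i++)
    {
        if (((r.v[i].y > p.y) != (r.v[j].y > p.y))
            && (p.x < (r.v[j].x - r.v[i].x) * (p.y - r.v[i].y)
                        / (r.v[j].y - r.v[i].y) + r.v[i].x))
        {
            in = !in;
        }
    }
    return in;
}

// Assign each hole to a shell (shells assumed sorted on descending area).
// Testing one vertex of the hole suffices when rings do not intersect.
void assign_holes(std::vector<ring> const& shells,
                  std::vector<ring> const& holes,
                  std::vector<int>& owner)
{
    owner.assign(holes.size(), -1);
    for (std::size_t h = 0; h < holes.size(); ++h)
        for (std::size_t s = 0; s < shells.size(); ++s)
            // When all bboxes nest, this filter rejects nothing and the
            // loop performs O(shells * holes) point-in-polygon tests.
            if (holes[h].min_x >= shells[s].min_x && holes[h].max_x <= shells[s].max_x
                && holes[h].min_y >= shells[s].min_y && holes[h].max_y <= shells[s].max_y
                && point_in_polygon(shells[s], holes[h].v.front()))
            {
                owner[h] = (int)s;
                break;
            }
}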
- interior_type is there because a polygon may have holes. If library users use polygons without holes, they can take a ring (or, stated more precisely: a type fulfilling the Ring Concept). - interior_type is expected to be a Boost.Range (for a read-only polygon) and to be STL-container-conformant (for a mutable polygon).
I'm not too keen on this requirement. It turns out that my own polygon with holes has a private data member which is an STL container that contains the holes, but I don't expose that in the public interface. I don't believe I could adapt my polygon_with_holes_data to your polygon concept without changing its implementation; however, I am certain that I could adapt your polygon data type to my polygon_with_holes_concept. I doubt there are many legacy polygon types that can be successfully adapted to your mutable polygon concept. This calls into question the utility of the polygon concept. I would rather see it done the same way as linestring, where you can either use STL-style push_back and clear or supply your own clear and append traits.
Also here we're happy to improve it if it is not satisfying your needs; especially if this enhances interoperability we're certainly willing to adapt and fine-tune things. Actually I'm also glad to hear positive feedback about the way we designed it for linestring.
I think fixing the polygon traits to be like the linestring/ring traits would improve both the consistency and the genericity of the interfaces and should be required before the library is accepted into Boost. Regards, Luke

Hi Luke,
Your algorithm is: for all red monotonic sections, inspect all blue monotonic sections for bounding-box overlap, and if overlap exists inspect the segments of the monotonic section pair for line intersection and report said intersections. It has O(n^2) runtime complexity wrt. monotonic sections. The number of monotonic sections is in the worst case equal to the number of edges.
Sure. As I said, there is room for improvement there. But it is not necessary. See below.
My library is not relevant to the discussion.
It was mentioned by you in terms of "My own implementation...", "I was hoping to upgrade my algorithm..." etc. But I agree, let's leave it out.
I don't understand what you mean by not convenient. That I wanted to go to sleep.
My concern about performance is that your benchmarking may not expose performance problems. Your claims about performance are based upon one benchmark (with several minor variations of one of the two input geometries) from which you generalize about performance. The algorithm has a fatal weakness to
to ... everything? We've had this discussion several times before, off-list and on-list. You asked me to create a more intersection-dense environment and I did. It *is* testing pathological cases. It *is* testing the diagonals you mentioned. You asked me to deliver cases and pictures where things went wrong in another library and I did. You asked me if I could create tests testing 90-degree and 45-degree polygons and I didn't, because I said you could create those tests yourself, they belong to your environment; but I didn't receive them. You're objecting against our interior-ring approach and I will test it, but not now. You asked for more documentation and I did. All the things I do and write are returned by words like fear and fatal, but nothing more than such words. So I think it's useless to continue like that.

Anyway, worst case tested:
- 19 minutes (GGL, best)
- 73 minutes (second best)
- 20 hours (worst)

The last one is widely used within the GIS world and has variants in Java, C++ and C#. It's 60 times slower in pathological cases and ~10 times slower in more normal cases, so I think that potential Boost users can safely use our library without taking the risk that it is too slow. Full info: http://trac.osgeo.org/ggl/wiki/IntersectionMore

The tested polygons have 10000 vertices and have the form of a star whose points are located horizontally, vertically and diagonally across the input. A factor more, 100000 points, didn't fit in computer memory with any of the libraries tested. It *is* a pathological case. The testsuite is open source and open for anyone to reproduce the results or extend it. The input can be chosen; it is a runtime parameter.

And I repeat: if the first phase, which is quadratic wrt. sections and so quadratic in worst cases, even then takes too long, there is room for improvement. As said in our paper (http://www.boostcon.com/site-media/var/sphene/sphwiki/attachment/2009/05/05/...) the Priority R-Tree spatial index, optimal in worst-case scenarios, might be useful.

So you can repeat that our implementation is fatally weak, but it is not, on the contrary. You can repeat that our tests are wrong and to be taken with grains of salt, but they are not, on the contrary.
Also here we're happy to improve it if it is not satisfying your needs; especially if this enhances interoperability we're certainly willing to adapt and fine-tune things. Actually I'm also glad to hear positive feedback about the way we designed it for linestring.
I think fixing the polygon traits to be like the linestring/ring traits would improve both the consistency and the genericity of the interfaces and should be required before the library is accepted into Boost.
Our concepts are completely clear and sound, documented, and have examples. We're conforming to Boost ranges and, for the mutable case, to the std library. I mentioned adaption to increase interoperability with a recently accepted library, your library. Objecting to acceptance in return for such an offer, adapting to a library accepted one hour after the announcement of the review of ours, is quite surprising. Regards, Barend

Barend Gehrels wrote:
Your algorithm is: for all red monotonic sections, inspect all blue monotonic sections for bounding-box overlap, and if overlap exists inspect the segments of the monotonic section pair for line intersection and report said intersections. It has O(n^2) runtime complexity wrt. monotonic sections. The number of monotonic sections is in the worst case equal to the number of edges.
Sure. As I said, there is room for improvement there. But it is not necessary. See below.
My library is not relevant to the discussion. It was mentioned by you in terms of "My own implementation...", "I was hoping to upgrade my algorithm..." etc. But I agree, let's leave it out.
I don't understand what you mean by not convenient. That I wanted to go to sleep.
I owe you an apology for this question, it was unfair, and I commend you for responding to it in a professional manner.
My concern about performance is that your benchmarking may not expose performance problems. Your claims about performance are based upon one benchmark (with several minor variations of one of the two input geometries) from which you generalize about performance. The algorithm has a fatal weakness to
to ... everything?
We've had this discussion several times before, off-list and on-list. You asked me to create a more intersection-dense environment and I did. It *is* testing pathological cases. It *is* testing the diagonals you mentioned. You asked me to deliver cases and pictures where things went wrong in another library and I did. You asked me if I could create tests testing 90-degree and 45-degree polygons and I didn't, because I said you could create those tests yourself, they belong to your environment; but I didn't receive them. You're objecting against our interior-ring approach and I will test it, but not now. You asked for more documentation and I did. All the things I do and write are returned by words like fear and fatal, but nothing more than such words. So I think it's useless to continue like that.
Yes, I plan to perform my own benchmarking of your implementation to verify whether my concerns about its scalability are warranted.
Anyway, worst case tested:
- 19 minutes (GGL, best)
- 73 minutes (second best)
- 20 hours (worst)
The last one is widely used within the GIS world and has variants in Java, C++ and C#. It's 60 times slower in pathological cases and ~10 times slower in more normal cases, so I think that potential Boost users can safely use our library without taking the risk that it is too slow.
Full info: http://trac.osgeo.org/ggl/wiki/IntersectionMore The tested polygons have 10000 vertices and have the form of a star whose points are located horizontally, vertically and diagonally across the input. A factor more, 100000 points, didn't fit in computer memory with any of the libraries tested. It *is* a pathological case.
Your complexity is n*m wrt. red and blue monotonic sections. In your benchmark, n is basically the average number of monotonic sections over all the countries in the world. I don't know what it is, but I expect the average length of monotonic sections in a country border is pretty large, implying the number of sections is small. By varying m but not n you will not be able to observe quadratic behavior of your algorithm; it will appear to scale linearly. Furthermore, if n is sufficiently small on average, your algorithm should be expected to perform nearly linearly regardless of how large m is.
The testsuite is open source and open for anyone to reproduce the results or extend it. The input can be chosen; it is a runtime parameter.
I plan to do so.
And I repeat: if the first phase, which is quadratic wrt. sections and so quadratic in worst cases, even then takes too long, there is room for improvement. As said in our paper (http://www.boostcon.com/site-media/var/sphene/sphwiki/attachment/2009/05/05/...) the Priority R-Tree spatial index, optimal in worst-case scenarios, might be useful.
So you can repeat that our implementation is fatally weak, but it is not, on the contrary. You can repeat that our tests are wrong and to be taken with grains of salt, but they are not, on the contrary.
Last night I redirected my admittedly unbecoming negative emotions, which seeing your nested for loop brought to the surface, into something positive. I enhanced my own line intersection algorithm with a 1D-spatial-indexing-based divide and conquer. Let me quickly explain. Previously I sorted all line segments by their low x coordinate and then, for each segment in sorted order, scanned forward in the sorted sequence comparing all segment pairs until the low x of the segment found in the inner loop exceeds the high x of the segment in the outer loop. This gives the benefit of 1D spatial indexing in the x dimension.

On top of that I add a divide-and-conquer optimization in the y dimension. I construct a histogram of the number of line segments that overlap intervals in the y orientation (the same thing that Joachim's ITL does), then recursively subdivide the y dimension by finding the minimal horizontal cutline in the middle 1/3 of the histogram. I then bin edges into the horizontal strips and run the original line segment intersection algorithm on each strip. This provides algorithmic speedup and is effective at speeding up even the hand xor spiral test case that I have a picture of in my documentation. This should be equivalent to the speedup achievable by using a 2D spatial index (such as a kd-tree) but is only about 75 lines of code.

I'd be willing to submit a patch for your line segment intersection algorithm that adds both the x and y spatial optimizations on top of your monotonic-sections optimization. This would fix the majority of the algorithmic weakness to large n and m. Similarly I can fix the algorithmic weakness to large numbers of holes. My polygon formation algorithm is independent of the line intersection and boolean operation stages, and should be compatible with floating point coordinates. In the case that the number of holes in the output is large you can opt to call my polygon formation algorithm on all the polygon edges to associate holes to outer shells in n log n time.
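For readers following along, the x-dimension part of this looks roughly like the sketch below (made-up types, not the actual implementation). Sorting on low x means each segment is compared only against segments whose x-intervals can overlap its own:

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct segment { double x1, y1, x2, y2; };

inline double low_x(segment const& s)  { return s.x1 < s.x2 ? s.x1 : s.x2; }
inline double high_x(segment const& s) { return s.x1 < s.x2 ? s.x2 : s.x1; }

struct by_low_x
{
    bool operator()(segment const& a, segment const& b) const
    { return low_x(a) < low_x(b); }
};

// Collect candidate pairs for the exact intersection test. The inner scan
// stops as soon as the next segment starts past the current one's high x:
// this is the 1D spatial indexing benefit in the x dimension.
void candidate_pairs(std::vector<segment>& segs,
                     std::vector<std::pair<std::size_t, std::size_t> >& out)
{
    std::sort(segs.begin(), segs.end(), by_low_x());
    for (std::size_t i = 0; i < segs.size(); ++i)
        for (std::size_t j = i + 1;
             j < segs.size() && low_x(segs[j]) <= high_x(segs[i]);
             ++j)
        {
            out.push_back(std::make_pair(i, j)); // test this pair exactly
        }
}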
Also here we're happy to improve it if it is not satisfying your needs; especially if this enhances interoperability we're certainly willing to adapt and fine-tune things. Actually I'm also glad to hear positive feedback about the way we designed it for linestring.
I think fixing the polygon traits to be like the linestring/ring traits would improve both the consistency and the genericity of the interfaces and should be required before the library is accepted into Boost.
Our concepts are completely clear and sound, documented, and have examples. We're conforming to Boost ranges and, for the mutable case, to the std library. I mentioned adaption to increase interoperability with a recently accepted library, your library.
Objecting to acceptance in return for such an offer, adapting to a library accepted one hour after the announcement of the review of ours, is quite surprising. Regards, Barend
You misunderstand my objection. I don't object because you can't adapt my polygon data type to your polygon concept. I object because a whole class of polygon types can't be adapted to your polygon concept, but could easily be with a simple change that makes your interfaces more consistent and simple. It is better that this change be made before the library is accepted into Boost rather than after. As an aside, if I can adapt your polygon concept to my polygon concept, that would make my polygon formation algorithm drop-in compatible with your polygon intersection algorithm's output interfaces, which would make it easy for you to use my polygon formation algorithm to optimally associate holes to outer shells. By the way, I think that relaxing the requirement on linestring that the iterator be random access would be a good idea. You can specialize the function that requires random access to call different code for forward and random access iterators if random access is needed for an important optimization. Regards, Luke

Simonson, Lucanus J wrote:
Similarly I can fix the algorithmic weakness to large numbers of holes. My polygon formation algorithm is independent of the line intersection and boolean operation stages, and should be compatible with floating point coordinates. In the case that the number of holes in the output is large you can opt to call my polygon formation algorithm on all the polygon edges to associate holes to outer shells in n log n time.
Upon reflection, the polygon formation algorithm requires robust comparisons in the tree, which floating point arithmetic cannot provide, so the code cannot be adapted to solve the problem with holes in GGL. Regards, Luke

Hi Luke, Herewith are answers to the questions regarding FP precision.
7. When intersection points are snapped to the floating point grid at the output of the polygon pair intersection algorithm, edges may move laterally some small distance, which may introduce self-intersection artifacts. How is this handled by the polygon intersection algorithm?
This is not handled explicitly. On segment-segment intersection in the epsilon-regions, the intersection will fall between the endpoints of both segments. Even if it is rounded, it will not surpass those endpoints. Therefore, self-intersections will be unlikely in most cases. I'll not say that it is impossible, but it is unlikely. Our implementation is not yet checked on 100% numerical robustness. If using floating point (IEEE single), there might be errors in the 1e-3x regions. If using double precision, there will be fewer errors. That is to say: in close-to-100% of the use cases, no errors. But in the regions of 1e-3xx differences, errors might be expected, and in some cases they might even cause self-intersections; I'll not deny that now, we have to do more research here. But you can go higher then: if you want that precision, use long double (on MSVC). Or use the high-precision arithmetic support that we're providing, which is slower but more exact. It's explained below in some more detail.
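A minimal sketch of the property being described (illustrative only, not GGL's actual implementation): if the intersection parameter t is clamped to [0, 1] before the point is computed, rounding can move the computed point slightly, but never beyond either endpoint of the segment:

struct point { double x, y; };

// Intersection of segment pq with the (non-parallel) line through rs,
// expressed as p + t * (q - p) with t clamped to the segment.
point intersection_on_pq(point p, point q, point r, point s)
{
    double const rx = q.x - p.x, ry = q.y - p.y;
    double const sx = s.x - r.x, sy = s.y - r.y;
    double const denom = rx * sy - ry * sx;    // assumed non-zero here
    double t = ((r.x - p.x) * sy - (r.y - p.y) * sx) / denom;
    if (t < 0.0) t = 0.0;                      // clamp: cannot surpass p
    if (t > 1.0) t = 1.0;                      // clamp: cannot surpass q
    point ip = { p.x + t * rx, p.y + t * ry }; // rounding happens here
    return ip;
}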
11. When a polygon is intersected against a bounding box, the same problem can happen: self-intersections are introduced in the output polygon(s) when intersection points are snapped to the floating point grid. These artifacts cannot be addressed by a linear time algorithm. Are the output polygons of the polygon-intersects-bounding-box algorithm implemented by this library potentially self-intersecting?
I cannot say much more here than I answered on your question 7. In other words: if you really want IEEE single, you can expect this effect in rare cases. Using IEEE double, it will be very unlikely. If you're using coordinates with differences in the regions of 1e-3XX, I would advise a high-precision number data type. The additional mechanism I already mentioned in a previous answer, on the integer/floating point issue: even if you're using a point type based on IEEE-single coordinates, you can still do the calculations in IEEE double or long double. Or in high precision. In all my career (18 years GIS) I didn't encounter problems using IEEE double (using single, I did). Therefore, and because of recent tests, I mentioned repeatedly that it is rare or unlikely. But it is not impossible, and therefore we're willing to support users who want to go beyond certain limits. We decided to go first for high-precision arithmetic numbers; see below the answer to your next question. That is exact (or quite exact) but slow. We're also willing to tweak things to be more robust in the (long) double regions, provided that it'll not affect performance. But as I'm not an expert on numerical robustness (I did read the introductions though), we'll probably need support for that, and Fernando did offer his help here on this list.
17. Can you tell us more about the support and usage of high precision numerical datatypes?
Sure. We worked on a wrapper for GMP and CLN and tested with that. That wrapper (numeric_adaptor) might be redundant because there is also support in the Boost.Math bindings. We were somehow not aware of that; however, it was a useful exercise because it works in a similar way, and we could adapt our library to the high-precision scenario. Not all algorithms are adapted currently, but we did some, to research and to prove that it works well. The basic example is in x02 (x02__numeric__adaptor__example.cpp), on the website and in the example folder. You can use a point which is based on GMP or CLN (or other arbitrary precision) coordinate types. All coordinates are stored in that type then, and calculations are done in that type. Besides that, you can store a point in double or long double, and do the calculations in high precision, as I mentioned above and in the previous answer. Hope this makes it clearer. Regards, Barend
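A generic illustration of the second variant described above, storing coordinates in a narrow type while carrying out the sensitive computation in a wider calculation type (this is a sketch of the idea, not GGL's actual interface):

struct point { double x, y; };   // storage stays in double

// Orientation/side test: the subtractions and the cross product are the
// numerically sensitive steps, so they are done in the CalcType.
template <typename CalcType>
CalcType side_value(point const& p, point const& q, point const& r)
{
    CalcType const dx1 = CalcType(q.x) - CalcType(p.x);
    CalcType const dy1 = CalcType(q.y) - CalcType(p.y);
    CalcType const dx2 = CalcType(r.x) - CalcType(p.x);
    CalcType const dy2 = CalcType(r.y) - CalcType(p.y);
    return dx1 * dy2 - dy1 * dx2;
}

// Usage: side_value<long double>(a, b, c), or side_value<SomeGmpFloat>(a, b, c)
// if the arbitrary-precision type provides the usual arithmetic operators.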

Hello, This is not a review. I have a few remarks, since I am from the realtime 3D world (and not really GIS), and I did not see much about this in previous comments. I took 2 hours to read through the examples, documentation and headers. I also read all exchanges about GGL since September '09. Please note that maybe most of my comments are wrong, because they are sometimes more about maths than geometry itself. They also might be more about graphs than geometry. In that case, don't hesitate to tell me!

// ============== remarks A

What I would like from a geometric library (for realtime 3D) is:

1/ basics
# intersections
### between (for): lines / segments / rays / triangles / spheres / boxes / frustums / planes / cylinders
# projections
### all the previous ones on planes
# matrix / vertex / 4D homogeneous coordinates
# multiplication / projection etc...
# conversions of position/rotation/scale/normal
### for the following coordinate systems: world / view / parent / local

2/ less basic
# meshes
## triangulation
## mesh normals proposition
## interpolation of vertex attributes
# infinite positions of points (for lines, meshes etc...)
# frustum manipulation
### extraction of corners / planes
# projections
### on triangles / meshes / cylinders / spheres

A list of basic functions is, for example, the function set of the cml library (configurable maths library).

3/ more advanced
# uvw proposition
# projection of mesh -> mesh
### interpolation of vertex attributes
### results stored in maps + uvw
# mesh blendings
# extraction of some mesh features
# segmentation of meshes

4/ very advanced
You can check the xNormal.net website to get an idea of the traditional mapping and mesh manipulations (which are out of scope of the thread right now).

All of this is of course extremely application oriented. And I did not even speak of animations (inverse kinematics, motion graphs etc...). Not having any of these features would not mean that GGL lacks a lot, because geometry is a very wide subject. It just might not really be suited for realtime 3D, or even for toolchain creation, in its current state.

// ============== remarks B

Now, given my point of view, the documentation is not easy at all to follow, even if the tutorials are very good. Please have a look at the "geometric tools" / Wild Magic documentation to get an idea of what is a perfect documentation in my opinion. The current GGL doxygen just looks like it was barely configured. Though I see an enormous amount of work, it could be better advertised by completing / pruning the documentation. This seems to me mandatory if you want people to develop extensions for your library, or even use it. The examples you provided are very clear, but the reference is not as good. There is an extraordinary amount of work behind GGL. But if you want external contributions for realtime, you might wonder: can it be extended by someone who does not use more templates than the STL? How does he find at least the basics 1/ and 2/ in the documentation?

// ============== remarks C

I personally think boost.geometry would be much clearer to the reader (there is even someone who posted something about the 'CGL' on the boost mailing list...). The perfect lib would be as easy as the cml, documented like geometric tools, and with algorithms like those of xNormal. But, as I stated, maybe these are not "real geometric" problems, but graph problems / analysis problems / very application-specific problems. What do you think of these views? Pierre Morcello

Hi Pierre, Thanks a lot for your feedback.

About remark A: I'm also more interested in realtime 3D, and if I had had more time to spend, the algorithms you're talking about would be there already. I mostly participated in the design, not the algorithms. Barend worked a lot on algorithms so he brought his part, that is GIS. This doesn't prevent bringing in other application domains. We constantly made sure that the design allows adding that easily. Also, note that we mentioned a project structure based on extensions. The kernel is "agnostic" regarding any precise application domain, while extensions allow proposing more specific stuff. Although the limit between both is sometimes hard to define, this normally avoids having many people wanting to see in the kernel the algorithms *they* are interested in. So typically, some of the things you are asking for could be included in the kernel (lines, rays...) and other ones would rather come as specific extensions (frustums, meshes...). Hope this makes sense to you?

About remark B: Good point about the fact that the documentation is more user-oriented than extender-oriented; this is something we should try to improve. Regarding the user part, I sincerely think that what we have is sufficient (I mean, adapting legacy geometries and using algorithms on them). Regarding the extender part, it will indeed require us to give better insight into the actual design. However, I would like to be clear about this sentence: "can it be extended by someone who does not use more templates than the STL". The answer is no. The library has been designed to be easily *used* by someone who basically doesn't know templates. But to extend it (e.g. add mesh-related concepts and implement algorithms on them), knowledge is obviously required. You'll have to specialize some traits, play with tags, etc... The point is rather to ensure that the design gives sufficiently precise directions and instructions to keep extensions consistent.

About remark C: Indeed, several persons have made the same remark already :-) Regards Bruno

Barend worked a lot on algorithms so he brought his part, that is GIS. This doesn't prevent bringing in other application domains.
Very good, it is somewhat what I had understood.
the user part, I sincerely think that what we have is sufficient.
Well, maybe it is sufficient, but my point is only that if someone checks the doxygen "classes" page, for example, he can be quite confused by all the templated elements. Maybe I am just not used enough to reading (so much) template-driven library documentation. The question is the level needed to read the doxygen documentation. Maybe some elements should not appear in the doxygen, to make it clearer for the user?
So typically, some of the things you are asking for could be included in the kernel (lines, rays...) and other ones would rather come as specific extensions (frustums, meshes...).
Hope this makes sense to you?
Yes, completely. But I have a question about extensions. What is the complexity of extending for a new kernel 'type', for example? Because the STL is known for its "number of containers + number of algorithms" system. Is it comparable? Or is it more a "sum, for each element of the ggl kernel, of the number of algorithms supported by the ggl::element"?
But just to show a small sample of a geometry created with GGL using transformations and 3D, in the GIS context, you can look at http://trac.osgeo.org/ggl/wiki/WhoUsesGGL
Thanks!
Within GGL you can present points in different coordinate systems. Even if they are both cartesian, they can be marked as different
Ok very good, I missed it. Best regards, Pierre Morcello

Well, maybe it is sufficient, but my point is only that if someone checks the doxygen "classes" page, for example, he can be quite confused by all the templated elements. Maybe I am just not used enough to reading (so much) template-driven library documentation. The question is the level needed to read the doxygen documentation. Maybe some elements should not appear in the doxygen, to make it clearer for the user?
OK, I see your point. We will check that; maybe some conventions or syntactic shortcuts can be adopted to come up with something lighter indeed.
Yes, completely. But I have a question about extensions. What is the complexity of extending for a new kernel 'type', for example? Because the STL is known for its "number of containers + number of algorithms" system. Is it comparable? Or is it more a "sum, for each element of the ggl kernel, of the number of algorithms supported by the ggl::element"?
Well, I would say it's between both. The point-point distance algorithm is available for everything that can be seen as a point (by adapting it using registration macros etc), but obviously if you decide to add a new *concept* (let's say, an ellipse) the algorithm will hardly handle it. So there's one algorithm for each concept, and all the possible models of each concept (that is, the actual types you adapt) are handled by the corresponding version of the algorithm. Does that answer your question? Regards Bruno

Hi Pierre,
I have a few remarks, since I am from the realtime 3D world (and not really GIS), and I did not see much about this in previous comments.
I'll add nothing to Bruno's answer about gaming, because I'm not good at that. But just to show a small sample of a geometry created with GGL using transformations and 3D, in the GIS context, you can look at http://trac.osgeo.org/ggl/wiki/WhoUsesGGL About this:
# conversions of position/rotation/scale/normal
### for the following coordinate systems: world / view / parent / local
We have this in GIS as well. Within GGL you can present points in different coordinate systems. Even if they are both cartesian, they can be marked as different (within GIS this is e.g. the case for EPSG). The generic "transform" function can then transform (e.g. scale, rotate, translate) between coordinate systems, using matrix calculations. Regards, Barend
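For readers unfamiliar with the matrix formulation meant here, a generic 2D illustration (not GGL's actual API) of composing rotation, scale and translation into a single affine map; presumably GGL's "transform" function plays the role of apply() below, with the matrix captured by the transformation strategy:

#include <cmath>

struct point { double x, y; };

// Affine transform: [x'; y'] = A * [x; y] + t, where A encodes
// rotation-and-scale and t is the translation.
struct affine2d
{
    double a, b, c, d;   // matrix A = [a b; c d]
    double tx, ty;       // translation t

    point apply(point p) const
    {
        point q = { a * p.x + b * p.y + tx,
                    c * p.x + d * p.y + ty };
        return q;
    }
};

// Rotation by angle (radians) combined with a uniform scale s.
affine2d rotate_scale(double angle, double s)
{
    affine2d m = { s * std::cos(angle), -s * std::sin(angle),
                   s * std::sin(angle),  s * std::cos(angle),
                   0.0, 0.0 };
    return m;
}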

2009/11/5 Hartmut Kaiser <hartmut.kaiser@gmail.com>:
Hi all,
The formal review of the Generic Geometry Library (GGL) starts today, November 5, 2009 and will finish November 15, 2009.
Hi Barend, I am currently reading your docs and I think your design is *very* good. I will take more time to investigate the lib and send a review at the end of the review period. Thanks to you, Bruno and Mateusz for this contribution. Joachim

Hi, This is not yet a review. (There will be a real review, and I see no reason why it should not be positive.)

1) I wonder whether it would be possible to change the name "GGL" to "Boost.Geometry". Sure, it is no problem for the initiated to know what "GGL" can do, but the newcomer is just confronted with yet another acronym that tells him nothing. I first thought that this change of name would disturb existing users of the library, but found out that they will have to change their code anyway:

using namespace ggl;

must be replaced with

using namespace boost::ggl;

and

#include <ggl/ggl.hpp>

must be replaced with

#include <boost/ggl/ggl.hpp>

after the library has been accepted into Boost. The same is true for the library itself, so the change from "ggl" to "boost::geometry" or "boost/geometry" is probably not more work than the change from "ggl" to "boost::ggl" and "boost/ggl".

Then I thought that the name GGL might be a reference to GIL. So I looked into the GIL library to find out why it emphasizes the "generic" part so much. It turned out that there is not a single best representation for images, and that GIL itself provides multiple different representations. This is a bit similar to the STL, which provides multiple containers, because there is not a single best container. GGL on the other hand provides just one point, linestring, linear_ring, polygon, box and segment class template, because the representation isn't really the challenge in the context of geometry. This is all perfect, but I just think that the "generic" in the case of GGL refers to all the different "legacy" classes that essentially all use the same underlying representation of the geometry (point, box, ...), but are incompatible because of different names and conventions, whereas the "generic" in GIL and the STL refers to fundamentally different representations of the underlying object that are unavoidable in principle. The bottom line is that I think a real name should be preferred over an acronym if there is not a very good reason to use an acronym. I just tried to explain why I can't see this very good reason in this case.

2) A second remark concerns the OGC names. Let's take the following code from 07_graph_route_example.cpp to illustrate:

// Init a bounding box, lateron used to define SVG map
ggl::box_2d box;
ggl::assign_inverse(box);

// Read the cities
typedef boost::tuple<point_type, std::string, vertex_type> city_type;
std::vector<city_type> cities;
read_wkt<point_type>("data/cities.wkt", cities, box);

...

// Read an ASCII file containing WKT's, fill a vector of tuples
// The tuples consist of at least <0> a geometry and <1> an identifying string
template <typename Geometry, typename Tuple, typename Box>
void read_wkt(std::string const& filename, std::vector<Tuple>& tuples, Box& box)
{
    ...
    ggl::combine(box, ggl::make_envelope<Box>(geometry));
    ...
}

What do "ggl::assign_inverse" and "make_envelope" do? These are apparently OGC names, but I spent quite some time trying to verify that this is indeed the case. In fact, I just ended up accepting that this is probably indeed the case. These OGC names are OK, but ask yourself the question whether you would be able to understand the quoted code if you did not already know what "ggl::assign_inverse" and "make_envelope" do (I know it's easy to look it up in the documentation, but that is not the point). In fact, I will not tell you now, but I tried to be fair when quoting the source code, so as not to remove potentially clarifying comments or context.
The bottom line is that when the OGC conventions already force me to accept confusing names, it would at least be nice to know how I can verify that these are indeed OGC names. What are the reference documents I have to read to learn this? Regards, Thomas

Thomas Klimpel wrote:
1)
I wonder whether it would be possible to change the name "GGL" to "Boost.Geometry". [...] I first thought that this change of name would disturb existing users of the library, but found out that they will have to change their code anyway: [...]
The layout of the GGL source tree follows the Boost convention. In spite of that, you are right: users will likely have to update their GGL-based source code.
Then I thought that the name GGL might be a reference to GIL.
That's how I personally understand the name GGL.
GGL on the other hand provides just one point, linestring, linear_ring, polygon, box and segment class template, because the representation isn't really the challenge in the context of geometry.
Not really. GGL proposes some definitions of types for basic fundamental geometries. However, the user does not have to stick to them. Moreover, it is expected that users may want not to touch these (pre-)defined types but define their own types. This is presented in the examples with "custom" in the name: http://geometrylibrary.geodan.nl/formal_review/examples.html In other words, GGL works well with user-defined types for geometries, and such geometries shall be assumed to be first-class citizens.
This is all perfect, but I just think that the "generic" in the case of GGL refers to all the different "legacy" classes that essentially all use the same underlying representation of the geometry (point, box, ...), but are incompatible because of different names and conventions, whereas the "generic" in GIL and the STL refers to fundamentally different representations of the underlying object that are unavoidable in principle.
As I've explained above, I believe the genericness that can be observed in GIL and the STL is also observable in GGL.
2)
A second remark concerns the OGC names. Let's take the following code from 07_graph_route_example.cpp to illustrate:
// Init a bounding box, lateron used to define SVG map
ggl::box_2d box;
ggl::assign_inverse(box);
// Read the cities
typedef boost::tuple<point_type, std::string, vertex_type> city_type;
std::vector<city_type> cities;
read_wkt<point_type>("data/cities.wkt", cities, box);
...
// Read an ASCII file containing WKT's, fill a vector of tuples
// The tuples consist of at least <0> a geometry and <1> an identifying string
template <typename Geometry, typename Tuple, typename Box>
void read_wkt(std::string const& filename, std::vector<Tuple>& tuples, Box& box)
{
    ...
    ggl::combine(box, ggl::make_envelope<Box>(geometry));
    ...
}
What does "ggl::assign_inverse"
It performs a kind of default-initialisation on the box, such that the box may be considered not valid in terms of the domain: "the min corner is very large, the max corner is very small". Obviously, zero-initialisation would not be appropriate here, so the inverse one is performed: http://geometrylibrary.geodan.nl/formal_review/group__access.html
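A small illustration of the idea (generic sketch, not GGL's code): with the corners initialised "inversely", the first combine() with any real geometry snaps the box onto that geometry, with no special case needed for the first element:

#include <algorithm>
#include <limits>

struct box { double min_x, min_y, max_x, max_y; };

// "Inverse" initialisation: min corner at +max, max corner at -max.
void assign_inverse_like(box& b)
{
    double const big = std::numeric_limits<double>::max();
    b.min_x = b.min_y = big;
    b.max_x = b.max_y = -big;
}

// Grow the box to include a point; works unchanged on the inverse box.
void combine_like(box& b, double x, double y)
{
    b.min_x = std::min(b.min_x, x);  b.min_y = std::min(b.min_y, y);
    b.max_x = std::max(b.max_x, x);  b.max_y = std::max(b.max_y, y);
}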
and "make_envelope" do?
It calculates the envelope of a given geometry: http://geometrylibrary.geodan.nl/formal_review/group__envelope.html
These are apparently OGC names, but I spent quite some time trying to verify that this is indeed the case.
Strictly speaking, they are not accurate OGC names. However, "envelope" refers to a concept described in the OGC specifications. AFAIK, "assign_inverse" is a custom concept, not OGC.
In fact, I just ended up accepting that this is probably indeed the case. These OGC names are OK, but ask yourself the question whether you would be able to understand the quoted code if you did not already know what "ggl::assign_inverse" and "make_envelope" do (I know it's easy to look it up in the documentation, but that is not the point). In fact, I will not tell you now, but I tried to be fair when quoting the source code, so as not to remove potentially clarifying comments or context.
make_envelope follows a convention like the make_pair utility. I agree that the purpose of assign_inverse may not be obvious from its name. Perhaps a name like make_inverse would be more clear and consistent. Good naming is an art of its own :-)
The bottom line is that when the OGC conventions already force me to accept confusing names,
As far as my own understanding is concerned, GGL does not aim to force OGC conventions. However, it uses some of the OGC definitions, those that are applicable to general geometry problems. For example, envelope is a short but still descriptive name for the MBR: http://en.wikipedia.org/wiki/Minimum_bounding_rectangle
it would at least be nice to know how I can verify that these are indeed OGC names. What are the reference documents I have to read to learn this?
Regarding the use of OGC elements in GGL, the basic documents are those directly related to OGC Simple Features: http://en.wikipedia.org/wiki/Simple_Features It is the "OpenGIS Simple Features Specification", specifically the "OpenGIS Simple Features Specification For SQL" (99-049). Thomas, thank you for the comments. Barend, I think it would be good to give precise references here: http://geometrylibrary.geodan.nl/formal_review/ogc.html Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

Mateusz Loskot wrote:
The layout of the GGL source tree follows the Boost convention. In spite of that, you are right: users will likely have to update their GGL-based source code.
What I mean is that a file like cartesian2d.hpp, which currently reads more or less

// Predeclare common Cartesian 2D points for convenience
#include <ggl/geometries/geometries.hpp>

namespace ggl
{
    typedef point_xy<double, cs::cartesian> point_2d;
    typedef linestring<point_2d> linestring_2d;
    typedef linear_ring<point_2d> ring_2d;
    typedef polygon<point_2d> polygon_2d;
    typedef box<point_2d> box_2d;
    typedef segment<point_2d> segment_2d;
} // namespace ggl

must be changed to

// Predeclare common Cartesian 2D points for convenience
#include <boost/ggl/geometries/geometries.hpp>

namespace boost { namespace ggl
{
    typedef point_xy<double, cs::cartesian> point_2d;
    typedef linestring<point_2d> linestring_2d;
    typedef linear_ring<point_2d> ring_2d;
    typedef polygon<point_2d> polygon_2d;
    typedef box<point_2d> box_2d;
    typedef segment<point_2d> segment_2d;
}} // namespace boost::ggl

before inclusion in a Boost release. You are right that the user can explicitly add boost to his include path and add a global "using namespace boost;", and then has at least a chance that he doesn't need to change anything else. If you prefer the GIL style, you could also change

#include <ggl/geometries/geometries.hpp>

to

#include "geometries.hpp"
GGL on the other hand provides just one point, linestring, linear_ring, polygon, box and segment class template, because the representation isn't really the challenge in the context of geometry.
Not really. GGL proposes some definitions of types for basic fundamental geometries. However, the user does not have to stick to them. Moreover, it is expected that users may want not to touch these (pre-)defined types but define their own types.
Are you sure? Even though I was vaguely aware that you hadn't fully understood what I meant by representation, I still took a look at "linestring_concept.hpp" to check whether you enforce a certain representation on the user. Now I'm totally confused, because "geometries/adapted/std_as_linestring.hpp" registers std::list<P> as a linestring, but the following concept requirement for linestring seems to rule out std::list<P>:

BOOST_CONCEPT_ASSERT( (boost::RandomAccessRangeConcept<Geometry>) );

Now I should try to explain what I mean by a representation. GIL gives the example that the RGB values might be saved interleaved as

RGB RGB RGB

or as separate arrays

RRR GGG BBB

and that both representations might have advantages and disadvantages for different use cases. One of the goals of GIL is to allow writing algorithms that will work on both (and most other sensible) representations. So in the context of GGL, we could think about a polygonal domain (roughly equivalent to the multi polygon concept) and its different representations. For a use case where we quickly want to answer the question which points in a given list of random points are inside the polygonal domain, a different representation is optimal than in the case where we want to compute the intersection of two polygonal domains (similar to std::set versus std::vector). However, generalizing over this type of representations is not the goal of GGL, and it would probably be an inappropriate goal for any 2D-focused geometry library. Yes, the user of GGL can use his own types for the basic fundamental geometries, but these types had better use the same underlying representation as the types proposed by GGL itself. As I said before, this is perfectly fine, because this is probably exactly what the user of the library wants. (Well, the user would probably also be happy to have access to different representations of polygonal domains for different use cases, but he would probably have no problem if writing algorithms that work on different representations is not possible.) Regards, Thomas

Thomas Klimpel wrote:
Mateusz Loskot wrote:
The layout of the GGL source tree follows the Boost convention. In spite of that, you are right: users will likely have to update their GGL-based source code.
What I mean is that a file like cartesian2d.hpp which currently reads more or less [...]
Yes. I've agreed on that exactly, above.
GGL on the other hand provides just one point, linestring, linear_ring, polygon, box and segment class template, because the representation isn't really the challenge in the context of geometry. Not really. GGL proposes some definitions of types for basic fundamental geometries. However, the user does not have to stick to them. Moreover, it is expected that users may want not to touch these (pre-)defined types but define their own types.
Are you sure? Even though I was vaguely aware that you hadn't fully understood what I meant by representation, I still took a look at "linestring_concept.hpp" to check whether you enforce a certain representation on the user. Now I'm totally confused, because "geometries/adapted/std_as_linestring.hpp" registers std::list<P> as a linestring, but the following concept requirement for linestring seems to rule out std::list<P>:
BOOST_CONCEPT_ASSERT( (boost::RandomAccessRangeConcept<Geometry>) );
Yes, you are right. Thanks for spotting this issue. There have been changes applied recently that I apparently overlooked. As Barend knows best what the current status of this is, I will let Barend clarify. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org

Hi Thomas, Thanks for your pre-review. I try to answer the pieces not yet handled. I've no problem with Boost.Geometry if it is accepted, and users indeed have to change some things anyway after acceptance.

> [...] I still took a look at "linestring_concept.hpp" to check whether you enforce a certain representation on the user. Now I'm totally confused, because "geometries/adapted/std_as_linestring.hpp" registers std::list<P> as a linestring, but the following concept requirement for linestring seems to rule out std::list<P>:
> BOOST_CONCEPT_ASSERT( (boost::RandomAccessRangeConcept<Geometry>) );

You are right. A list is not supported; it used to be in the past, but now it is not anymore. We're using Boost.Range now everywhere, including boost::size, which is not supported by lists. So like Mateusz said, we will remove it from the adaption file.

> Now I should try to explain what I mean by a representation. GIL gives the example that the RGB values might be saved interleaved as RGB RGB RGB or as separate arrays RRR GGG BBB, and that both representations might have advantages and disadvantages for different use cases. One of the goals of GIL is to allow writing algorithms that will work on both (and most other sensible) representations.

I think this has some analogy to what GGL does: it works on rings and polygons having both orientations, clockwise and counter-clockwise. Both occur. The OGC convention is clockwise; we first only supported that. But in the end the difference is not that large in implementation. We call it generic because it should support different point types, different line types, etc., even in one algorithm. We're also agnostic in coordinate system and number of dimensions. Not that everything is supported in 3D, in fact that support is still limited, but the support is provided by the kernel. Finally, people can add coordinate systems or even geometries (like triangle, ellipse); all is dispatched the same way. But we don't support "everything" indeed. For example, linestrings with non-linear interpretations between their vertices were discussed, but not more than that.

> What do "ggl::assign_inverse" and "make_envelope" do? These are apparently OGC names, but I spent quite some time trying to verify that this is indeed the case.

About OGC names:
- you are right that it can be confusing now and that it should be made more clear
- assign and assign_inverse are not OGC names; combine is also not an OGC name
- envelope is the OGC name. But as we also use std:: and boost:: conventions, we decided to let functions returning a geometry be preceded by make_. So make_envelope, make_centroid. The user has to specify the type then.
- read_wkt is not the exact OGC name. This has a little history again. We first had "as_text", the OGC convention. But that was too general; we also have svg and dsv, and VE-shapes, and all is text. So we decided to deviate here and call the streaming "wkt". Reading wkt is not in the SF spec (Mateusz gave the link); databases call it "from text", also too general. So we call input now "read_"; output is just streaming with the name.

I'll come with a clarification of the exact OGC names/support/deviations. They were there in earlier documentation, but there are many people not interested in GIS or OGC, so we don't want to pay it too much attention everywhere in the documentation. Regards, Barend

Hi Thomas,
The bottom line is that when the OGC conventions already force me to accept confusing names, it would at least be nice to know how I can verify that these are indeed OGC names. What are the reference documents I have to read to learn this?
There is a new table here: http://trac.osgeo.org/ggl/wiki/OGC which summarizes which is OGC and which is not, and clarifies names and possible name changes. Regards, Barend

Barend Gehrels wrote:
Hi Thomas,
The bottom line is that when the OGC conventions already force me to accept confusing names, it would at least be nice to know how I can verify that these are indeed OGC names. What are the reference documents I have to read to learn this?
There is a new table here: http://trac.osgeo.org/ggl/wiki/OGC which summarizes which is OGC and which is not, and clarifies names and possible name changes.
Barend, In the section "OGC functions, not planned", it lists functions like X, Y, Z, and M, but actually it might be valid to consider the accessors get<D> and set<D> as equivalents (for D in range [0, 3]). In other words, I would consider that GGL does provide some sort of equivalent of these four OGC functions. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org

Hi, this is not a review, and there won't be one because I'm not "knowledgeable about the problem domain", but here are some quick comments about the traits classes after reading the design rationale and looking at some code.

- first of all, I'm curious why you chose to use one traits class for each "agnosticism". This differs from the standard library, so I guess there is a specific reason for it? E.g. std::iterator_traits defines iterator_category, pointer, reference and so on. The equivalent in a GGL Point concept is tag, coordinate, dimension..., which are defined in 5 different traits classes.

- is there a default implementation of the traits classes? I haven't found one in code/coordinate_dimension. This would simplify writing concept models in many cases, especially with 5 different traits for a Point. E.g., the iterator_traits default implementation looks like this:

template<typename _Iterator>
struct iterator_traits
{
    typedef typename _Iterator::iterator_category iterator_category;
    typedef typename _Iterator::value_type        value_type;
    typedef typename _Iterator::difference_type   difference_type;
    typedef typename _Iterator::pointer           pointer;
    typedef typename _Iterator::reference         reference;
};

- on a minor note, tag<> seems a little non-descriptive, compared to iterator_category in the STL. Boost.Iostreams and Boost.PropertyMap both use "category", which might also be suitable for GGL, since it has its own namespace and there are no other categories I assume.

Hi Stefan, Thanks for your comments.
- first of all, I'm curious why you chose to use one traits class for each "agnosticism". This differs from the standard library, so I guess there is a specific reason for it?
Yes. It was discussed on the Boost List before. We're conforming to the MPL conventions here: creating metafunctions, naming things "type", etc. We didn't want to create one blob trait.
- is there a default implementation of the traits classes? I haven't found one in code/coordinate_dimension. This would simplify writing concept models in many cases, especially with 5 different traits for a Point.
But there are registration macros. They support different use cases, so most people will not have problems adapting their classes to the concepts. For points we have no default. For linestring we have (you can override push_back). For polygon the clockwise orientation is the default.
- on a minor note, tag<> seems a little non-descriptive, compared to iterator_category in the STL.
It is mainly for internal usage. It is the basis for "tag dispatching", so we thought that tag was appropriate. "geometry_category" could indeed be an alternative, but that never came up in our minds. And it is a little long.
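For context, the general tag-dispatching pattern being referred to looks roughly like this (a generic sketch, not GGL's exact code): a tag metafunction maps a geometry type to a tag type, and a dispatch struct specialized per tag selects the implementation at compile time:

struct point_tag {};
struct linestring_tag {};

template <typename Geometry>
struct tag;                         // specialized per geometry type

struct my_point { double x, y; };
template <> struct tag<my_point> { typedef point_tag type; };

namespace dispatch
{
    template <typename Tag, typename Geometry>
    struct num_points;              // primary template, not defined

    template <typename Geometry>
    struct num_points<point_tag, Geometry>
    {
        static int apply(Geometry const&) { return 1; }
    };
}

// The free function forwards to the specialization selected by the tag.
template <typename Geometry>
int num_points(Geometry const& g)
{
    return dispatch::num_points
        <typename tag<Geometry>::type, Geometry>::apply(g);
}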
Boost.Iostreams and Boost.PropertyMap both use "category", which might also be suitable for GGL, since it has its own namespace and there are no other categories, I assume.
Yes, category is also an alternative, and there are no other categories. But it still is "tag dispatching", so if no one has big problems with tag, I prefer to leave it like it is now. Library users will not have to use it, unless they're implementing algorithms, or adapting geometries without using macros. Regards, Barend

Hi Mathias, Mathias Gaunard wrote:
Barend Gehrels wrote:
For polygon the clockwise orientation is the default.
Since GGL is based around a mathematical framework, wouldn't it make more sense to make the direct (counter clockwise) orientation the default?
It is clockwise because of our OGC convention, which has clockwise only. So we began with clockwise and existing users use clockwise polygons. It's inconvenient to remove or change the default in ggl::polygon. However, the default can be removed from the Polygon Concept. polygon and Polygon do not necessarily have to have the same defaults. Regards, Barend

Barend Gehrels wrote:
It is clockwise because of our OGC convention, which has clockwise only.
I had forgotten about that. Shouldn't the fact you're following OGC conventions appear more clearly in the documentation though? I didn't notice it when browsing it quickly.
So we began with clockwise and existing users use clockwise polygons. It's inconvenient to remove or change the default in ggl::polygon. However, the default can be removed from the Polygon Concept. polygon and Polygon do not necessarily have to have the same defaults.
Having them be different would break the principle of least surprise, so I think it's best to keep it that way.

Hi Mathias,
I had forgotten about that. Shouldn't the fact you're following OGC conventions appear more clearly in the documentation though? I didn't notice it when browsing it quickly.
It's on the main page and on the OGC page. I agree that it would be useful to state it where polygon orientation is documented as well.
Having them be different would break the principle of least surprise, so I think it's best to keep it that way.
OK, makes sense. Regards, Barend

On Wednesday 11 November 2009 23:17:09, Barend Gehrels wrote: hi,
- first of all, I'm curious why you chose to use one traits class for each "agnosticism". this differs from the standard library, so I guess there is a specific reason for it?
Yes. It was discussed on the Boost List before. We're conforming to MPL
I see.
- is there a default implementation of the traits classes? I haven't found one in code/coordinate_dimension. this would simplify writing concept models in many cases, especially with 5 different traits for a Point.
But there are registration macros. They support different use cases, so most people will not have problems adapting their classes to the concepts.
I think these macros are unnecessary. They do not hide implementation details, or simplify type registration beyond what would be possible without macros. It is always my intention to avoid macros in public interfaces when possible. If you agree with that, you could replace them with something like this:

template<typename Geometry>
struct tag
{
    typedef typename geometry_traits<Geometry>::tag tag;
};

template<typename Geometry>
struct geometry_traits
{
    typedef typename Geometry::tag tag;
    ...
};

That way you can keep the MPL metafunctions, but the default implementation instantiates a traits "blob", whose default implementation in turn gets its information from the geometry type itself. So in most cases, the user can simply add the traits to the geometry type, without using macros:

struct my_point
{
    typedef point_tag tag;
    ...
};

Or in the case of a legacy point type, specialize one traits class (geometry_traits), which isn't much more verbose than the current macros. These changes could also be implemented easily, since the metafunctions, accessed by algorithms, stay the same. Stefan

Hi Stefan,
- is there a default implementation of the traits classes? I haven't found one in code/coordinate_dimension. this would simplify writing concept models in many cases, especially with 5 different traits for a Point.
But there are registration macros. They support different use cases, so most people will not have problems adapting their classes to the concepts.
I think these macros are unnecessary.
You can do without them if you don't like them. Besides that, Boost has many macros. Personally I also don't like them too much, but for registration purposes they make sense.
They do not hide implementation details, or simplify type registration beyond what would be possible without macros. It is always my intention to avoid macros in public interfaces when possible. If you agree with that, you could replace them with something like this:
template<typename Geometry>
struct tag
{
    typedef typename geometry_traits<Geometry>::tag tag;
};
I understand, and we've considered that. But this way, you'll lose MPL compatibility. The way it is now was discussed on this Boost list before.
these changes could also be implemented easily, since the metafunctions, accessed by algorithms, stay the same.
I agree, but we preferred not to create large trait classes... Regards, Barend

Hi,
I think these macros are unnecessary.
You can do without them if you don't like them. Besides that, Boost has many macros. Personally I also don't like them too much, but for registration purposes they make sense.
Exactly. Macros must be avoided in many cases, but registration is one of the widely accepted use cases. See Boost.Math default policy declarations as one of the many examples. So if the whole point of making a facade that hides the many traits is just to avoid macros on the principle that they're bad, I'm afraid I just don't feel the need to do that.
they do not hide implementation details, or simplify type registration beyond what would be possible without macros.
GEOMETRY_REGISTER_POINT_2D(my_point, double, cs::cartesian, x(), y())

Just to be sure, what does it give with your approach? Regards Bruno

On Friday 13 November 2009 23:14:25, Bruno Lalande wrote:
GEOMETRY_REGISTER_POINT_2D(my_point, double, cs::cartesian, x(), y())
Just to be sure, what does it give with your approach?
struct my_point
{
    typedef point_tag tag;
    typedef mpl::int_<2> dimension;
    typedef double coordinate_type;
    typedef cs::cartesian coordinate_system;
    // accessors here, haven't looked up how those are implemented
};

I do consider this better than the macro, even in this case. With 3d points, we are up to a macro with 8 arguments. Note that you can still use traits, see my previous email.

Hi Stefan,
struct my_point
{
    typedef point_tag tag;
    typedef mpl::int_<2> dimension;
    typedef double coordinate_type;
    typedef cs::cartesian coordinate_system;
    // accessors here, haven't looked up how those are implemented
};
I do consider this better than the macro, even in this case.
Personally I don't, but I guess it's a matter of taste. A private discussion showed that there might not be unanimity about those macros internally as well. So we might decide to make the existence of the 2 ways clearer in the docs (I mean: manual and macro registration) rather than focusing on the macro way. However, I don't think we will go for the third way you propose - 3 ways to do it would be a bit confusing to the reader. Does that make sense to you? Regards Bruno

Hi, In order to be able to step through the examples and tests with the VC debugger, I created CMakeLists.txt files for both. There were .sln files there, but they don't seem to work with VC9express. I initially also planned to test the interactions and comparisons with the other libs, but I gave up when I had to build soci and was presented with a .sln file not working with VC9express again. One of the files should be placed in lib/ggl/example, the other should be placed in lib/ggl/test. In cmake, you will most probably have to adjust the BOOST_ROOT_DIR to point to your boost installation. The tests "strategies/segment_intersection.cpp" and "multi/algorithms/multi_centroid1.cpp" fail to build, but this is no surprise since they are also excluded in the corresponding jamfiles. I tested the CMakeLists.txt files with VC9express and gcc-4.3.4 under cygwin. Regards, Thomas

Thomas Klimpel wrote:
Hi,
In order to be able to step through the examples and tests with the VC debugger, I created CMakeLists.txt files for both. There were .sln files there, but they don't seem to work with VC9express.
I found that it may behave awkwardly sometimes, but it's a non-GGL issue. The .sln files are sometimes tricky, so it's better to try to open the .vcproj files. However, do not double-click on them but open them from the IDE (File -> Open).
I initially also planned to test the interactions and comparisons with the other libs, but I gave up when I had to build soci and was presented with a .sln file not working with VC9express again.
This may be inconvenient, indeed. Unfortunately, there is no other way to present interaction with 3rd-party libraries than to require them as dependencies. (As a side note, I'm working on a CMake configuration for SOCI, and a new release including a more friendly build configuration is on its way, so stay tuned :-))
One of the files should be placed in lib/ggl/example, the other should be placed in lib/ggl/test. In cmake, you will most probably have to adjust the BOOST_ROOT_DIR to point to your boost installation.
Thank you for the CMake files. As a fan of CMake, I have it on my todo list to configure GGL tests and examples for CMake, so your contribution is definitely helpful.
The tests "strategies/segment_intersection.cpp" and "multi/algorithms/multi_centroid1.cpp" fail to build, but this is no surprise since they are also excluded in the corresponding jamfiles.
Yes, they are excluded. These tests need some work.
I tested the CMakeLists.txt files with VC9express and gcc-4.3.4 under cygwin.
Great. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org

Hi Thomas, In addition to Mateusz' answer:
I initially also planned to test the interactions and comparisons with the other libs, but I gave up when I had to build soci and was presented with a .sln file not working with VC9express again.
The comparisons are independent of soci; they only depend on shapelib (the ones in "comparisons") or on nothing (the new ones in "comparison_star_comb"). And of course on the libraries being compared. Thanks for your CMake files. Regards, Barend

Hartmut Kaiser wrote:
Please always state in your review, whether you think the library should be accepted as a Boost library.
The library should be accepted contingent on the several concerns discussed below being addressed. Items labeled "Concern" are things I feel acceptance should be contingent upon; items labeled "Minor Concern" are issues I have identified that should not prevent the library from being released as part of Boost. I would like to integrate the polygon intersection (and perhaps convex hull and centroid) algorithms into my own library, and my concerns are things that I think need to be addressed before I would undertake integrating the two libraries.
Additionally, please consider giving feedback on the following general topics:
- What is your evaluation of the design?
The design of the generic interfaces is good. The library is extensible and flexible. I like how easily STL data structures can be adapted to satisfy geometry concepts. I don't like that STL interfaces are required to satisfy some concepts, and in particular I'd like the polygon concept made more generic.
Concern: polygon concept interfaces need to be made more generic by making them consistent with the linestring and ring concept interfaces.
Concern: linestring requires random access iterator semantics. This should be relaxed to forward iterator semantics.
- What is your evaluation of the implementation?
I focused my evaluation of the implementation on the polygon intersection algorithm, where I am most knowledgeable. I found the implementation very clean, but also apparently incomplete. I have several concerns about the implementation of polygon intersection.
Concern: only the intersection and union interfaces between two polygons are currently implemented. Intersection, union, xor and AND NOT operations should all be supported on multi-polygons. Also the union of a single multi-polygon should be supported. AND NOT and xor are both composable from intersection and winding direction inversion. I find it strange that the author brought the library to review without implementing these.
Concern: as submitted, the polygon intersection algorithm is very weak in its line segment intersection stage. This flaw can either be addressed by the author by applying his query-tree based optimization, or by myself by applying divide and conquer in one axis and a sorted scan in the orthogonal axis.
Minor Concern: the polygon intersection algorithm is weak against large numbers of holes. This is fundamental to the choice of the graph-traversal polygon formation algorithm, which provides no information about which polygons contain which holes. This is a common problem in polygon clipping implementations and is exhibited by some of the best and most expensive closed source implementations. I would suggest that the author provide a multi-ring interface to the polygon intersection algorithm such that polygons and holes are written out distinguished by winding direction and not associated with each other. This would allow the user to avoid the costly step of associating holes to outer shells in circumstances where that information is not required.
Concern: I found a bug in the implementation of polygon intersection during the review. This bug should be fixed before the library can be accepted.
Concern: I am skeptical that adequate testing of the polygon intersection algorithm has been done. I'd like to see a more comprehensive testing strategy pursued, including high volume, randomly generated test inputs. This stress testing is essential to ensure that bugs like the one I found don't make it into a Boost release. These tests should not be considered unit tests, and need not run when Boost unit tests are run.
Concern: The algorithm is not robust. The line segment intersection predicate is not robust with floating point calculations, meaning that the results produced can be incorrect from a topological point of view; intersections can be missed or erroneously inserted. Also, because the intersection point is calculated with floating point, a significant number of bits of precision are lost, meaning that the intersection point can end up moving much farther than a single floating point grid position. Lateral movement of line segments, due to intersection points being inserted with loss of precision and/or snapped to the coordinate grid, can result in self-intersecting outputs being produced. The line segment intersection predicate needs to be enhanced to make it robust, and the output needs to be checked for self intersection and recycled through the algorithm until the check passes.
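To make the last concern concrete, here is a minimal sketch (illustrative only, not GGL code) of the failure mode in a plain floating point orientation test, the predicate every segment intersection routine is built on:

// Sign of the cross product of (q - p) and (r - p).
// Cancellation in the subtractions can flip the sign for nearly
// collinear inputs, silently corrupting the clipper's topology.
int orientation(double px, double py, double qx, double qy,
                double rx, double ry)
{
    double det = (qx - px) * (ry - py) - (qy - py) * (rx - px);
    if (det > 0.0) return 1;   // r lies left of line p->q
    if (det < 0.0) return -1;  // r lies right of line p->q
    return 0;                  // reported collinear
}

A robust version computes an error bound on det and falls back to extended or exact arithmetic (in the style of Shewchuk's adaptive predicates) whenever |det| is below that bound.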
- What is your evaluation of the documentation?
I really did not like the Doxygen generated content. I found it difficult to find the real documentation among all the Doxygen generated fluff. The documentation that was authored by a human was very good, though somewhat incomplete. I'm concerned that people may misinterpret the documentation and conclude that features that are not yet implemented are already supported, because interfaces such as distance() appear to accept all geometry pairs, but in fact do not.
Minor Concern: it should be more clear which concepts do and don't work with a given interface. If a feature is planned and not yet implemented, this should be stated in the documentation for its interface, and the documentation can be updated when the feature is implemented.
- What is your evaluation of the potential usefulness of the library?
My strongly held belief is that a polygon clipping implementation that is not robust has no value. The library is potentially very useful if the polygon intersection algorithm is enhanced to make it robust. I consider the polygon intersection algorithm to be the highest (potential) value piece of the library. I think the stated scope of the library (generic computational geometry) differs from the scope of what has been implemented (generic 2d floating point GIS focused geometry). I feel that the scope of the library should be brought in line with what is implemented and planned, and that scope should be reflected in its name.
- Did you try to use the library? With what compiler? Did you have any problems?
I compiled it with gcc 4.2.2.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I spent about two days working with the documentation and the code. I gave some portions of the library a glance, others a quick reading and the polygon intersection received an in-depth study.
- Are you knowledgeable about the problem domain?
For polygon clipping I've read all the relevant papers and implemented my own. For GIS, I'm an outsider. For generic interfaces I guess I'm at least somewhat knowledgeable. Thanks, Luke

Hi Luke, Thanks, we were happy to read your review, and understand your concerns. We want to clarify one point, also because there were more remarks on this.
I think the stated scope of the library (generic computational geometry) differs from the scope of what has been implemented (generic 2d floating point GIS focused geometry).
The word "generic" in our name denotes generic programming. It is a generic library on geometry. Most libraries on GIS or geometry are not based on generic programming. So that was why we choose for this name. This might indicate indeed a generic library on generic geometry, and we want to support many branches; but actual name is derived from GP. Regards, Barend

I had already stated some comments on GGL. (I am from the real-time 3D world, not GIS.) Barend has given me links to examples of use of GGL in real time. Now I am trying to use the proposed GGL and I got some surprises with some elements of the design (this is still not a review).
1/ I am evaluating for myself the extensibility of GGL for real-time 3D (let's write rt-3D). How do I represent points at infinity? This is pretty basic stuff in rt-3D. For example, a directional light is positioned at (0.0, 1.0, 0.0, 0.0), using the homogeneous coordinate. Can it really be integrated in GGL easily? Did I miss it in the documentation? Intersection of polygons with infinite points is also pretty basic when you begin to work with volume shadows (please note that there are much better techniques than volume shadows; I use this just as an example). I just would like to make sure you thought of this kind of thing and considered it while designing the library. If you think there is no problem, this is fine with me.
2a/ From the GGL intersection module documentation: "intersection: calculate new geometry containing geometries A and B". I think this definition is dangerous: it looks too much like union.
2b/ The library seems to suppose the user already knows the result of an intersection, which feels strange to me. But maybe that is because I do not know GIS too much, and know more 'school traditional' geometry. For example, if I want to calculate a line-line intersection, I have to provide a template parameter for the output type, which seems to me completely unknown when I am requesting the calculation: line-line can be nothing / a point / a line. So will we need to create a new function intersection (geometric_intersection<>), or does the template specialization of intersection_inserter already allow writing it correctly? A triangle-triangle intersection can be nothing / a plane / a line / a triangle / a point / etc... Is the GGL extensibility strong enough for this use? This is all about my current questions on GGL extensibility.
Just a little thought about geometry and precision. This library seems much more oriented towards direct evaluation geometry than formal geometry (which is, for example, determining the volume of the intersection between a cube and a sphere with the same center, the result being expressed as a function of Pi). So let's say this is not too much about formal geometry (since nobody spoke of formal geometry while there was such a deal about FP correctness, I think it was easy to lose sight of it). On the same subject, some algorithms are quite stable and theoretically perfect, but these properties are still not enough to give the desired results. An example is the Monte Carlo approach for calculating a volume in a space of degree 100 (you will in reality never get the exact result, but this still allows good quick calculations). So the user of the library would, in a perfect world, have the ability to select a not-so-exact algorithm for his calculations. As a consequence, I think it is not good to try to provide an algorithm that fits every need and hides too much from the user. It's better if the user can write his own or select one of the provided implementations / strategies. In that case, he is supposed to have a fair knowledge of the domain, or at least have read in the doc the recommended usage of the provided algorithms. From what I have read, it seems this will be possible in GGL. As a consequence, I think that the precision issue is addressed correctly in GGL.
About GGL presentation. Somebody on the list spoke of his book about different kinds of geometries (epipolar/projective geometry etc...). Thanks to him, the subject appears in its full breadth, showing that many elements won't be covered at all for a while in GGL. As a consequence, I have 2 opposing propositions: 1/ it might be interesting to restrain the library goal to its current usage (let's say a templated GIS library), in order to widen it after a while and some extensions? 2/ if not, it looks to me mandatory to make very clear the current state (not complete about geometry, good for GIS, but also waiting for extension on subjects: rt-3D, etc). This would allow the 'hypothetical future user' not to be depressed by not finding what he is looking for in a generic geometry library, but also give a clear plan of the development strategy of the library and its 'versioning' goal. Best regards, Pierre

Hi Pierre, Thanks again for your interest. Herewith our answers.
1/ I am evaluating for myself the extensibility of GGL for real-time 3D (let's write rt-3D). How do I represent points at infinity? This is pretty basic stuff in rt-3D. For example, a directional light is positioned at (0.0, 1.0, 0.0, 0.0), using the homogeneous coordinate. Can it really be integrated in GGL easily? Did I miss it in the documentation? Intersection of polygons with infinite points is also pretty basic when you begin to work with volume shadows (please note that there are much better techniques than volume shadows; I use this just as an example). I just would like to make sure you thought of this kind of thing and considered it while designing the library. If you think there is no problem, this is fine with me.
Indeed, points with infinite coordinates (in polygons or any other geometry) are not supported right now. As usual, this would be a 2-step task:
- Defining a way to allow users to say whether a point coordinate is infinite. That can be done with a function in a traits class that the user will have to specialize and define. Some of them will decide to have the information as a separate boolean in their coordinate type, others will decide to rely on a special value, etc... it's really up to the user.
- Taking the information into account in the algorithms themselves. Obviously this will be done progressively, so the documentation will have to state clearly which algorithms support infinity and which don't.
The infinity function should be defined as returning "false" by default. This way, with the help of the optimizer, branches that handle infinity in algorithms can be discarded at compile time. All this sounds feasible without breaking anything existing. Would it fit your needs, and do you think it's a good approach in general?
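A minimal sketch of that idea (all names hypothetical; nothing like this exists in the library yet):

namespace traits
{
    // Primary template: by default no coordinate is infinite. Because
    // this returns a compile-time constant "false", the optimizer can
    // strip infinity-handling branches from the algorithms entirely.
    template <typename Point>
    struct is_infinite
    {
        static inline bool apply(Point const&) { return false; }
    };
}

// A user whose point type carries an explicit flag would specialize it:
// template <>
// struct traits::is_infinite<my_point>
// {
//     static inline bool apply(my_point const& p) { return p.infinite; }
// };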
2a/ From the GGL intersection module documentation: "intersection: calculate new geometry containing geometries A and B". I think this definition is dangerous: it looks too much like union.
2b/ The library seems to suppose the user already knows the result of an intersection, which feels strange to me. But maybe that is because I do not know GIS too much, and know more 'school traditional' geometry.
A and B is the commonly accepted definition. We extended the documentation recently at the start of our review, as there were more questions on background; you can look at the page http://geometrylibrary.geodan.nl/formal_review/sets.html explaining intersections and unions in more detail.
For example, if I want to calculate a line-line intersection, I have to provide a template parameter for the output type, which seems to me completely unknown when I am requesting the calculation: line-line can be nothing / a point / a line. So will we need to create a new function intersection (geometric_intersection<>), or does the template specialization of intersection_inserter already allow writing it correctly?
Yes, specifying the output parameter means specifying the behaviour. If you've an output iterator of points, it will deliver zero or more points. If you want lines, you get lines. Please note that line-line is not yet supported.
A triangle-triangle intersection can be nothing / a plane / a line / a triangle / a point / etc... Is the GGL extensibility strong enough for this use? This is all about my current questions on GGL extensibility.
If we say for intersection: R = A & B, all three parameters are templated. The dispatch function can be fully specialized on all three types to support all combinations.
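To sketch what that could look like (hypothetical signatures, only meant to illustrate the tag dispatching pattern):

struct point_tag {};
struct linestring_tag {};

namespace dispatch
{
    // Primary template, selected on the tags of the output and both inputs
    template <typename TagOut, typename Tag1, typename Tag2,
              typename R, typename A, typename B>
    struct intersection;

    // One specialization per supported combination, e.g. producing
    // points from two linestrings:
    template <typename R, typename A, typename B>
    struct intersection<point_tag, linestring_tag, linestring_tag, R, A, B>
    {
        static void apply(A const& a, B const& b, R& out);
    };
}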
On the same subject, some algorithms are quite stable and theoretically perfect, but these properties are still not enough to give the desired results. An example is the Monte Carlo approach for calculating a volume in a space of degree 100 (you will in reality never get the exact result, but this still allows good quick calculations). So the user of the library would, in a perfect world, have the ability to select a not-so-exact algorithm for his calculations. As a consequence, I think it is not good to try to provide an algorithm that fits every need and hides too much from the user. It's better if the user can write his own or select one of the provided implementations / strategies. In that case, he is supposed to have a fair knowledge of the domain, or at least have read in the doc the recommended usage of the provided algorithms. From what I have read, it seems this will be possible in GGL. As a consequence, I think that the precision issue is addressed correctly in GGL.
Sure, thanks. Both implementation and coordinate type can be varied.
About GGL presentation. Somebody on the list spoke of his book about different kinds of geometries (epipolar/projective geometry etc...). Thanks to him, the subject appears in its full breadth, showing that many elements won't be covered at all for a while in GGL. As a consequence, I have 2 opposing propositions: 1/ it might be interesting to restrain the library goal to its current usage (let's say a templated GIS library), in order to widen it after a while and some extensions? 2/ if not, it looks to me mandatory to make very clear the current state (not complete about geometry, good for GIS, but also waiting for extension on subjects: rt-3D, etc). This would allow the 'hypothetical future user' not to be depressed by not finding what he is looking for in a generic geometry library, but also give a clear plan of what the development strategy of the library is and its 'versioning' goal.
We see your points. The current usage is (I think) somewhat broader than GIS, though it is heavily influenced by it. For example, polygon clipping is implemented as polygon/polygon and polygon/box, and you see that in many domains, in GIS but also in gaming. As shown in the example on the main page, small pieces of the library can help in GUI applications (such as Qt). Regards, Barend

Hi all, First I would like to say that I appreciate all the hard work that has gone into the library. Implementing a fully generic geometry library is a very difficult undertaking. Having said that, I don't think that GGL is at the level of maturity that I would expect from a Boost.Geometry library. During the course of my review I would constantly ask myself, when using a given feature, whether this is something that I would use in my professional work without hesitation, and my answer was often that it didn't seem quite ready for production use. I think with a few more iterations and more collaboration with the Boost community it could be. So my formal vote is not to accept at this time.
- What is your evaluation of the design?
The quality of the interface varies, but is in general OK. The use of tag-dispatching and trait specialization seems reasonable, as does the strategy design. Other parts of the design seem a bit more ad-hoc. For example:

template<typename DegreeOrRadian>
struct geographic
{
    typedef DegreeOrRadian units;
};

which seems to be a way to specify a unit type for angular coordinates. I'm not sure how such a construct extends into other unit types such as feet/meters on distance coordinates. It is perhaps not so relevant in the context of the operations required to be performed on a given unit type. I think a better design would have incorporated these nuances into a trait for the given coordinate. Given the availability of Boost.Units, I'm sure something neat could be done with this idea. Further to this, I found a type trait struct 'is_radian<T>' but none for 'is_degree' (or is_meter, is_feet). I think these artifacts will present issues if the geometry library were to be later abstracted to support units on the coordinate types. Dimensionally qualified quantities can be calculated for geometries for whose dimensions the quantity makes no sense (e.g. area(point), length(point)). These functions return default constructed instances of their corresponding return type, as shown here:

template<typename ReturnType, typename Geometry, typename Strategy>
struct calculate_null
{
    static inline ReturnType apply(Geometry const&, Strategy const&)
    {
        return ReturnType();
    }
};

In my test ReturnType was double. This would seem to silently compile and return garbage on these inputs. In practice on vs2008 (VC9) these return values were 0 (with optimizations on), even though the return resolves to double();. I'm guessing other platforms won't be so lucky. The geometry concepts seem sound at a glance, but I wonder how they would fare in the light of extension. For example, the ggl::point_type<Geometry> meta-function would seem to presume that all geometries have an underlying point representation. This doesn't necessarily extend to lines or planes, and so may present issues later on when those types are added. Some of the default geometry types hold data by reference, which makes their usage in containers a bit difficult (specifically the segment type). Much of the nomenclature of functions and types in the library seems to come from the OGC (Open Geospatial Consortium), or so I assume. I don't think this consortium is widely recognized as a standard for geometry names in general. I'm not suggesting that there is a standard, but I think better names may have been chosen in light of more common usages in academic computational geometry literature. For example, we have polygon, but then linestring (rather than polyline).
The term 'envelope' is used to refer to what I believe is known in the literature as an axis-aligned bounding box. This may come down to stylistic preference, but I would expect a Boost.Geometry to have a higher level of fidelity in the nomenclature.
- What is your evaluation of the implementation?
The implementation seems technically competent. The segment intersection algorithm matches the speed of what we use in our production code. The boolean operations seem fast. My tests of the area/length/centroid operations gave correct results.
- What is your evaluation of the documentation?
The documentation is not sufficient. I would suggest better tutorials for all common use cases. A simple query on the user newsgroup would probably give a good list of the most common cases. Overall, the documentation is probably the least polished part of the submission.
- What is your evaluation of the potential usefulness of the library?
Tremendously useful.
- Did you try to use the library? With what compiler? Did you have any problems?
Yes. VS 2008 (vc9). Yes, a few. The most troubling was a seg-fault while trying to synthesize a boolean difference operation. Some of the default geometry concept implementations were clunky to use in what I would consider a common use case. (As I said above, the segment type isn't default constructible due to holding points by reference.) This latter issue is a bit trivial as I can write my own, but I figured it might speak to the library's maturity, as rough edges like this tend to be fixed with wide use.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
In-depth study.
- Are you knowledgeable about the problem domain?
Yes. Over the course of my professional career I have worked on 2 geometry libraries. I think there is a lot of potential here and hope that work continues. Best of luck, Brandon

On Thu, Nov 19, 2009 at 3:14 PM, Brandon Kohn <blkohn@hotmail.com> wrote:
The term 'envelope' is used to refer to what I believe is known in the literature as an axis-aligned bounding box. This may come down to stylistic preference, but I would expect a Boost.Geometry to have a higher level of fidelity in the nomenclature.
FWIW, in GIS, "envelope" *is* the common term. I had a gaming background before moving into GIS, so I too was more familiar with AABB than with envelope. Perhaps these higher level constructs should be in their own namespace, with terminology based on the target domain. The underlying implementation could be the commonly accepted mathematical/academic term. Something like:

boost::gis envelope(const T &geometry); // forwards to minimum_bounding_rect
boost::gfx aabb(const T &geometry);     // forwards to minimum_bounding_rect

--Michael Fawcett

--- Michael Fawcett wrote :
FWIW, in GIS, "envelope" *is* the common term. I had a gaming background before moving into GIS so I too was more familiar with AABB than with envelope.
Just to make things clear, "envelope" (which usually refers in mathematics to certain curves) refers in GGL to a "bounding box" (in real-time 3D), and not to an "axis-aligned bounding box" (AABB in real-time 3D). The mathematical "envelope" of a 2D polygon is most of the time a succession of segments (and not only 4, as in a BB). And I think that only the mathematical denomination should be allowed as a base in GGL (not OGC). On top of that, the proposition of Michael Fawcett seems good to me (ggl::gis::envelope or ggl::rt3d::bb for example), even if I don't have a real argument to support it.

Brandon Kohn wrote:
template<typename ReturnType, typename Geometry, typename Strategy>
struct calculate_null
{
    static inline ReturnType apply(Geometry const&, Strategy const&)
    {
        return ReturnType();
    }
};
In my test ReturnType was double. This would seem to silently compile and return garbage on these inputs. In practice on vs2008( VC9 ) these return values were 0 (with optimizations on) even though the return resolves to double();. I'm guessing other platforms won't be so lucky.
That is well defined to return built-in types zero-initialized. _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

In my test ReturnType was double. This would seem to silently compile and return garbage on these inputs. In practice on vs2008( VC9 ) these return values were 0 (with optimizations on) even though the return resolves to double();. I'm guessing other platforms won't be so lucky.
That is well defined to return built-in types zero-initialized.
Thanks Rob! I was just looking for a reference for this. Regards, Barend

Brandon Kohn wrote:
Dimensionally qualified quantities can be calculated for geometries for whose dimensions the quantity makes no sense (e.g. area( point ), length( point ).) These functions return default constructed instances of their corresponding return type as shown here:
template<typename ReturnType, typename Geometry, typename Strategy>
struct calculate_null
{
    static inline ReturnType apply(Geometry const&, Strategy const&)
    {
        return ReturnType();
    }
};
In my test ReturnType was double. This would seem to silently compile and return garbage on these inputs. In practice on vs2008( VC9 ) these return values were 0 (with optimizations on) even though the return resolves to double();. I'm guessing other platforms won't be so lucky.
This expression is correct and well defined. If I may, here is the reference from the C++ standard document:
"""
5.2.3 Explicit type conversion (functional notation)
2 The expression T(), where T is a simple-type-specifier (7.1.5.2) for a non-array complete object type or the (possibly cv-qualified) void type, creates an rvalue of the specified type, which is value-initialized (8.5; no initialization is done for the void() case).
"""
and
"""
5 To zero-initialize an object of type T means:
— if T is a scalar type (3.9), the object is set to the value of 0 (zero) converted to T;
— if T is a non-union class type, each nonstatic data member and each base-class subobject is zero-initialized;
— if T is a union type, the object's first named data member is zero-initialized;
— if T is an array type, each element is zero-initialized;
— if T is a reference type, no initialization is performed.
To default-initialize an object of type T means:
— if T is a non-POD class type (clause 9), the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor);
— if T is an array type, each element is default-initialized;
— otherwise, the object is zero-initialized.
To value-initialize an object of type T means:
— if T is a class type (clause 9) with a user-declared constructor (12.1), then the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor);
— if T is a non-union class type without a user-declared constructor, then every non-static data member and base-class component of T is value-initialized;
— if T is an array type, then each element is value-initialized;
— otherwise, the object is zero-initialized.
A program that calls for default-initialization or value-initialization of an entity of reference type is ill-formed. If T is a cv-qualified type, the cv-unqualified version of T is used for these definitions of zero-initialization, default-initialization, and value-initialization.
"""
Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org
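In code form, the rule being cited (a minimal illustration):

void example()
{
    double a;            // default-initialized: indeterminate value
    double b = double(); // value-initialized: guaranteed to be 0.0
    (void)a; (void)b;
}

So ReturnType() yields exactly zero for arithmetic types on any conforming compiler; VC9 was not just lucky.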

Hi Brandon, Thanks for your thorough review. A few remarks and answers on some of your points.
Which seems to be a way to specify a unit type for angular coordinates. I'm not sure how such a construct extends into other unit types such as feet/meters on distance coordinates.
We did address it (it is in the EPSG system, which is in the GIS extension and was not in the Formal Review). So you are right, it was not clear. It works like this:

template<std::size_t Code>
struct epsg
{
    static const std::size_t epsg_code = Code;
};

template <std::size_t C>
struct cs_tag<cs::epsg<C> >
{
    typedef cartesian_tag type;
};

So any epsg geometry is classified as belonging to the cartesian coordinate system family (btw, that can be doubted, because latlong also has an EPSG code, but in general projections are cartesian coordinate systems, uniquely numbered by EPSG). Units are not in this example (though projections have units, usually meters) but you could implement them similarly, and indeed, using Boost.Units might be an option here.
Further to this I found a type trait struct 'is_radian<T>' but none for 'is_degree' (or is_meter, is_feet). I think these artifacts will present issues if the geometry library were to be later abstracted to support units on the coordinate types.
is_radian is used internally to know if spherical coordinates should be converted to radians, because all math calculations are done in radians. It is therefore necessary to know it. We didn't consider is_degree or is_meter necessary; if all coordinate systems get units, we should probably add another trait to define that.
The geometry concepts seem sound at a glance, but I wonder how they would fare in the light of extension. For example, the ggl::point_type< Geometry > meta-function would seem to presume that all geometries have an underlying point representation.
Good point. It also seems not strictly necessary for a box defined like: struct box { double ul,ll,ur,lr; };. There is no point used here, but the meta-function should still be defined for a box, just because a point type is used in probably all algorithms. We cannot do without it, at least currently we cannot.
I don't think this consortium is widely recognized as a standard for geometry names in general.
We perfectly understand your point. Michael suggested namespaces on your point, seconded by Pierre. We consider this a perfect solution to support different domains with different naming conventions, etc. OGC conventions (at least most of them) are shared by OGC and ISO (among others, SQL/MM). http://www.ogcnetwork.net/node/398#_Toc206572745
Over the course of my professional career I have worked on 2 geometry libraries.
Again, you certainly have many things to add, as you stated somewhere else. Let me repeat that you're welcome to contribute to a Boost geometry library: to one, to the other (in case GGL is accepted), to both, or to a cooperation between them. Regards, Barend

Hello Barend, Here are my answers and remarks: //=== about homogeneous coordinates
Indeed, points with infinite coordinates (in polygons or any other geometry) are not supported right now. [...] Defining a way to allow users to say whether a point coordinate is infinite. That can be done with a function in a traits class that the user will have to specialize and define. Some of them will decide to have the information as a separate boolean in their coordinate type, others will decide to rely on a special value, etc... it's really up to the user.
I realize I was not very clear. I was referring to 'homogeneous coordinates'. As a consequence I suppose it to be more a ggl::cs::homogeneous. Homogeneous coordinates exist in 1D, 3D, ... 100D etc... It's like a cartesian system plus an inverse scaling factor for the previous coordinates. So a homogeneous point at (1, 1, 1, 4) is in the cartesian system at (0.25, 0.25, 0.25). They are very helpful in projective geometry (applications: shadow maps in rt-3D, 3D reconstruction, motion capture, tracking, imaging...). I thought this was quite used in GIS too? Anyway, from the GGL documentation I read: "Many algorithms such as distance or transform use coordinate systems to select the strategy to use." This is also described in the strategy rationale. So it should be highly possible to 'discard' incompatible algorithms through a strategy plus a cs::homogeneous. You can consider it as an alternative proposition to the one you made of an 'infinity function' (which could work too). In that case, GGL seems perfectly adequate to support homogeneous coordinates. What do you think of the coordinate system approach?
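A tiny sketch of the coordinate system idea (types and names hypothetical):

// A homogeneous point: cartesian coordinates plus a scaling factor w.
struct homogeneous_point { double x, y, z, w; };
struct cartesian_point { double x, y, z; };

// Dividing through by w recovers the cartesian point; w == 0 would
// encode a point at infinity, as in the directional light example.
inline cartesian_point to_cartesian(homogeneous_point const& h)
{
    cartesian_point p = { h.x / h.w, h.y / h.w, h.z / h.w };
    return p;
}
// For (1, 1, 1, 4) this gives (0.25, 0.25, 0.25), matching the example above.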
//=== about intersection documentation
Pierre wrote: from the GGL intersection module documentation: "intersection: calculate new geometry containing geometries A and B". I think this definition is dangerous: it looks too much like union.
Barend wrote: A and B is the commonly accepted definition. We extended the documentation recently at the start of our review, as there were more questions on background; you can look at the page http://geometrylibrary.geodan.nl/formal_review/sets.html explaining intersections and unions in more detail.
Yes, I had read those pages, but this title is still not really convincing. If I tell you "I got a geometry containing a sphere and two cubes", what is the image that comes to your mind: the set intersection of all three, or the sum (union) of them? For me it's the union. Maybe you could write "new geometry containing what is common to A and B"? Or you might use the 'AND' ('inverted U') symbol from set theory. That could make it much clearer that you are referring to set theory. What do you think?
//=== about intersection design (not set theory, but traditional intersection)
Pierre wrote:
For example, if I want to calculate a line-line intersection, I have to provide a template parameter for the output type, which seems to me completely unknown when I am requesting the calculation: line-line can be nothing / a point / a line. So will we need to create a new function intersection (geometric_intersection<>), or does the template specialization of intersection_inserter already allow writing it correctly?
Barend wrote:
Yes, specifying the output parameter means specifying the behaviour. If you've an output iterator of points, it will deliver zero or more points. If you want lines, you get lines. Please note that line-line is not yet supported.
I am not completely convinced by this approach. I will explain first why, and then make a little proposition. Let's take an example. Example A: "I have a line and a plane. I wonder: does the line touch the plane, and if it does, what is the nature of the intersection, and what is the value of the intersection?". If I can only 'expect' a return type, I might have to make 2 queries (one with a line as output, and another time with a point as output). From my point of view, such an approach looks wrong, because it is not performant, and presents the problem from a narrower point of view than what is required by the mathematical problem. There is a solution to that situation, which is to use for example a boost::variant as return type. This would allow the user to get both the calculation result and the result type. One of the difficulties in that case is to determine exactly what the appropriate variations of the return type are, from a mathematical point of view. On the other hand, it is true that in every application, people will most of the time care only for one type of intersection (let's say 'point result' for a plane-line intersection). But in that case, I would consider it as a "special case", not really the default behavior. I think a variant (or some custom struct) as an "exact result" would be appropriate in most cases. It seems to me this is possible to do (I don't say it's easy). Can you confirm this is possible? Do you see any drawbacks of such a variant return?
//=== About description of the library
Pierre wrote:
1/ it might be interesting to restrain the library goal to its current usage (let's say a templated GIS library), in order to widen it after a while and some extensions? 2/ if not, it looks to me mandatory to make very clear the current state (not complete about geometry, good for GIS, but also waiting for extension on subjects: rt-3D, etc). This would allow the 'hypothetical future user' not to be depressed by not finding what he is looking for in a generic geometry library, but also give a clear plan of the development strategy of the library and its 'versioning' goal.
Barend wrote:
We see your points. The current usage is (I think) somewhat broader than GIS, though it is heavily influenced by it. For example, polygon clipping is implemented as polygon/polygon and polygon/box, and you see that in many domains, in GIS but also in gaming. As shown in the example on the main page, small pieces of the library can help in GUI applications (such as Qt).
Yes, this is already in the introduction: "The GGL might be used in all domains where geometry plays a role: mapping and GIS, gaming, computer graphics and widgets, robotics, astronomy... The core is designed to be as generic as possible and support those domains. However, for now the development has been mostly GIS-oriented." But in that case, you need to provide some roadmap / todo list (done / not done) / priorities for the development: current features and future features. Some kind of 'future versioning', for example, would be appreciated. Best regards, Pierre

Hi Pierre,
I realize I was not very clear. I was referring to 'homogeneous coordinates'. As a consequence I suppose it to be more a ggl::cs::homogeneous. Homogeneous coordinates exist in 1D, 3D, ... 100D etc... It's like a cartesian system plus an inverse scaling factor for the previous coordinates. So a homogeneous point at (1, 1, 1, 4) is in the cartesian system at (0.25, 0.25, 0.25). They are very helpful in projective geometry (applications: shadow maps in rt-3D, 3D reconstruction, motion capture, tracking, imaging...). I thought this was quite used in GIS too? Anyway, from the GGL documentation I read: "Many algorithms such as distance or transform use coordinate systems to select the strategy to use." This is also described in the strategy rationale. So it should be highly possible to 'discard' incompatible algorithms through a strategy plus a cs::homogeneous. You can consider it as an alternative proposition to the one you made of an 'infinity function' (which could work too). In that case, GGL seems perfectly adequate to support homogeneous coordinates. What do you think of the coordinate system approach?
This approach will work, and might be easier and/or fit into the design better than the one we described. Thanks for the suggestion! Yes, I think that a homogeneous coordinate system will be very useful here and can indeed block distance functions, or enable implementing functions in the correct way for homogeneous coordinates. Strategies can also be mixed (so working on two points of two different coordinate systems), and that might be useful for this as well.
//=== about intersection documentation
Yes, I had read those pages, but this title is still not really convincing. If I tell you "I got a geometry containing a sphere and two cubes", what is the image that comes to your mind: the set intersection of all three, or the sum (union) of them? For me it's the union. Maybe you could write "new geometry containing what is common to A and B"? Or you might use the 'AND' ('inverted U') symbol from set theory. That could make it much clearer that you are referring to set theory. What do you think?
I see your point. Yes, the inverted U and U symbols will indeed be clearer, and for the users who don't know them, we can make the description clearer. I agree completely.
//=== about intersection design (not set theory, but traditional intersection)
Pierre wrote:
For example, if I want to calculate a line-line intersection, I have to provide a template parameter for the output type, which seems to me completely unknown when I am requesting the calculation: line-line can be nothing / a point / a line. So will we need to create a new function intersection (geometric_intersection<>), or does the template specialization of intersection_inserter already allow writing it correctly?
Barend wrote:
Yes, specifying the output parameter means specifying the behaviour. If you've an output iterator of points, it will deliver zero or more points. If you want lines, you get lines. Please note that line-line is not yet supported.
I am not completely convinced by this approach. I will explain first why, and then make a little proposition.
Let's take an example. Example A: "I have a line and a plane. I wonder: does the line touch the plane, and if it does, what is the nature of the intersection, and what is the value of the intersection?". If I can only 'expect' a return type, I might have to make 2 queries (one with a line as output, and another time with a point as output). From my point of view, such an approach looks wrong, because it is not performant, and presents the problem from a narrower point of view than what is required by the mathematical problem.
There is a solution to that situation, which is to use for example a boost::variant as return type. This would allow the user to get both the calculation result and the result type. One of the difficulties in that case is to determine exactly what the appropriate variations of the return type are, from a mathematical point of view.
On the other hand, it is true that in every application, people will most of the time care only for one type of intersection (let's say 'point result' for a plane-line intersection). But in that case, I would consider it as a "special case", not really the default behavior. I think a variant (or some custom struct) as an "exact result" would be appropriate in most cases.
It seems to me this is possible to do (I don't say it's easy). Can you confirm this is possible? Do you see any drawbacks of such a variant return?
I think the scenario is possible, yes. But I don't know if the boost::variant will do, depending on how you implement it. I give another example. In the case of an intersection of two polygons, you can get:
- the intersection points (if the output geometry type is point)
- the intersection polygon(s) (the 'normal' behaviour)
- the intersection linestrings (the lines where the two polygons happen to intersect in a linear form, e.g. shared borders)
Using a variant, you get one of them (because a variant is a multi-type, single-value). In this case, the implementation does not know which one is the most appropriate. Using a template parameter you can specify what you want. Here you get one type of them back, without the possibility to specify which type. Seeing your description, this is probably not your intention. Another option, still using a variant, is having the output iterator adding variants. In this way all output of the intersection, points, lines and polygons, are just appended to e.g. a vector of variants. This will probably do. I don't know how common this scenario is. It is possible; there is one consequence though: it would require three output template parameters (to specify how output points, output linestrings and output polygons will look...). You mention performance, and indeed, this would save calculating the intersection points two or three times (because internally they are always used). Another option to avoid multiple calculations is to split the algorithm (actually it is split internally). Users can then specify precalculated intersection points in the algorithm (or in an overloaded version). We will take your points into account, it is very interesting. There is always room for adding another function (e.g. intersection_variant) which returns a vector of variants.
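A sketch of that last option (the function name is hypothetical; placeholder geometry types stand in for the library's own):

#include <iterator>
#include <vector>
#include <boost/variant.hpp>

// Placeholders for the library's point, linestring and polygon types:
struct point {};
struct linestring {};
struct polygon {};

// One variant type covers every kind of intersection output.
typedef boost::variant<point, linestring, polygon> piece;

std::vector<piece> pieces;
// Hypothetical call writing all output through a single iterator:
// intersection_variant(a, b, std::back_inserter(pieces));

A visitor (boost::static_visitor) can then distinguish the points, linestrings and polygons afterwards.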
Yes, this is already in the introduction: "The GGL might be used in all domains where geometry plays a role: mapping and GIS, gaming, computer graphics and widgets, robotics, astronomy... The core is designed to be as generic as possible and support those domains. However, for now the development has been mostly GIS-oriented." But in that case, you need to provide some roadmap / todo list (done / not done) / priorities for the development: current features and future features. Some kind of 'future versioning', for example, would be appreciated.
Yes. Good point, this would be very useful and we will add this. Thanks, regards, Barend

Another option, still using a variant, is having the output iterator adding variants. In this way all output of the intersection, points,
Hi Barend, Thanks for your answer. Concerning the result of an intersection and using or not a variant: Barend wrote: "lines and polygons, are just appended to e.g. a vector of variants." This is what I had in mind. But reading your points, I came to the conclusion that it was far from good, and that something simpler could even work: instead of a variant, a tuple of back_inserters could do the trick very nicely.

vector<point2D> myPoints;
vector<line2D> myLines;
vector<polygon2D> myPolygons;

typedef tuple<vector<point2D>, vector<line2D>, vector<polygon2D> > AFakeTupleType;

intersection_inserter<AFakeTupleType>(poly1, poly2,
    make_tuple(back_inserter(myPoints),
               back_inserter(myLines),
               back_inserter(myPolygons)));

Please note there is no object of type AFakeTupleType constructed. This type is just used to drive the algorithm. The output can be dispatched to different vectors, and the input is still 1 object. Then, the user can simply check the length of each result to see if there was an intersection. This is less mathematics oriented, but still completely usable, and simpler, because of the redirection of the result to the selected vectors. You already use tuple, so maybe reference wrappers in AFakeTupleType would be needed to make the distinction with your other tuple. What do you think of that? Do you see any drawbacks? Best regards, Pierre

Hi Pierre,
[...] and that something simpler could even work: instead of a variant, a tuple of back_inserters could do the trick very nicely.
vector<point2D> myPoints; vector<line2D> myLines; vector<polygon2D> myPolygons;
typedef tuple<vector<point2D>, vector<line2D>, vector<polygon2D> > AFakeTupleType;
intersection_inserter<AFakeTupleType>(poly1, poly2, make_tuple(back_inserter(myPoints), back_inserter(myLines), back_inserter(myPolygons)));
Please note there is no object of type AFakeTupleType constructed. This type is just used to drive the algorithm.
The output can be dispatched to different vectors, and the input is still one object. The user can then simply check the length of each result vector to see whether there was an intersection.
This is less mathematics-oriented, but still completely usable, and simpler, because the results are redirected to the selected vectors.
You already use tuple, so maybe reference wrappers in AFakeTupleType would be needed to make the distinction with your other tuple.
What do you think of that? Do you see any drawbacks?
I think it will work. It would certainly solve the performance issue: you get all three collections back in one calculation. Potential drawbacks (or, better stated, issues):
- It is more complex. I think (personally) that most users only want the intersected polygon back, and always requiring the complexity described here might be a bit too much. But that is no real problem, because we can create another function for it, or even detect automatically what is happening here, in the tag dispatching.
- The AFakeTupleType (a tuple of vectors) does not match the make_tuple (a tuple of output iterators), as you said. But that is no real issue either (actually, it is the same as it works now, with the OutputType and the output iterator).
I think it is feasible and a very good idea, but not as the default (because of the complexity involved, also for the library user). But if people want to get all these three different things back, yes, I think this is the best option discussed until now. Does the trick very nicely, as you said, indeed. Thanks! Regards, Barend
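For completeness, here is Pierre's tuple-of-back-inserters idea fleshed out into a compilable toy; fake_intersection is a hypothetical stand-in for the proposed intersection_inserter and only demonstrates how one tuple argument can dispatch output to three containers:

#include <boost/tuple/tuple.hpp>
#include <iostream>
#include <iterator>
#include <vector>

struct point2D { double x, y; };
struct line2D { point2D a, b; };
struct polygon2D { std::vector<point2D> outer; };

// Hypothetical stand-in for the proposed intersection_inserter: each kind of
// output is written through the matching slot of the tuple of output iterators.
template <typename PointOut, typename LineOut, typename PolyOut>
void fake_intersection(boost::tuple<PointOut, LineOut, PolyOut> out)
{
    point2D p = { 0.5, 0.5 };
    *boost::get<0>(out)++ = p;   // a point-type result
    line2D l = { p, p };
    *boost::get<1>(out)++ = l;   // a linestring-type result
    // no polygonal output in this toy run
}

int main()
{
    std::vector<point2D> myPoints;
    std::vector<line2D> myLines;
    std::vector<polygon2D> myPolygons;

    fake_intersection(boost::make_tuple(std::back_inserter(myPoints),
                                        std::back_inserter(myLines),
                                        std::back_inserter(myPolygons)));

    // The caller checks each vector's size to see what kind of output occurred.
    std::cout << myPoints.size() << " " << myLines.size()
              << " " << myPolygons.size() << std::endl;
    return 0;
}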

Please always state in your review, whether you think the library should be accepted as a Boost library.
This is a difficult question in this case, because it is not completely clear what it means. I found out during my review that the library as submitted for review is "work in progress", at least in parts. So I vote for acceptance, but will try to explain how to interpret that vote.

The library as a whole is not ready for release, but it is ready enough for a formal review, especially if we consider that a library author is normally expected to modify his library after a formal review to address the most important issues that came up during the review.

The following parts of the library look completely finished and polished to me:
- The design of the tag dispatching system, including all implementation details, the documentation of the rationale for it, and the explanation of how it works and how it can be extended.
- The configuration of the algorithms by the strategies, including all implementation details, the documentation of the rationale, and the explanation of how to use it.
- The overall file and directory structure, including the setup of the regression-test system and the regression tests themselves.

The following parts of the library failed to give me the impression that they are finished:
- The user level documentation
- The geometry concepts

One of the underlying problems of the unfinished parts seems to be an unclear position with respect to certain OGC specifications. (And I'm not referring to the names of the concepts and certain algorithms here.) As an example, box and segment sometimes seem to be treated as second-class citizens, because they are not part of the OGC specifications. Another example is that a clear documentation of the ring concept is considered not so important, because it is part of the OGC specifications (Barend wrote: "Agree that it could be enhanced, but please add that we refer to OGC which documents the meaning of geometric semantics and concepts in detail."). On the other hand, one OGC specification has a triangle as a specialization of a polygon, but GGL has neither a triangle concept, nor does the polygon concept allow one to easily create a triangle class that fulfills it. There are examples that try to show how to create a triangle class that fulfills the ring concept instead of the polygon concept, and as it turns out, even this doesn't really work well (see the sketch below).

In theory, it should be clear that the relationship to OGC must stay on a purely naming-consistency level if GGL really wants to be a generic geometry library and not a GIS geometry library. In practice, it seems to be difficult to ignore all the experience collected in the GIS domain, especially because OGC provides so many nice reference documents.

So how can I vote for acceptance when I consider two vitally important parts like the user level documentation and the geometry concepts unfinished? Well, first of all, the finished and polished parts are really excellently done (in my opinion). So I hope that the authors will acknowledge that some parts are not yet finished, and that the unfinished parts will reach the same high quality as the finished parts once they have been completed and polished. (I guess the authors would have waited longer before requesting a formal review had there not been the Boost.Polygon review. So I think there are good excuses for submitting "work in progress" for formal review.)
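As an illustration of the closed-ring requirement discussed here, a toy triangle class (hypothetical, not one of the GGL examples) can present itself as a range of four points, repeating the first corner at the end so that the first and last points are equal:

#include <cstddef>
#include <iostream>

struct point { double x, y; };

// A fixed-size triangle exposed as a "closed ring": 4 points, with the
// closing point being a copy of the first corner.
class triangle
{
    point corners_[3];
public:
    triangle(point a, point b, point c)
    {
        corners_[0] = a; corners_[1] = b; corners_[2] = c;
    }
    std::size_t size() const { return 4; }                       // 3 corners + closing point
    point operator[](std::size_t i) const { return corners_[i % 3]; }
};

// A ring-style algorithm: the shoelace area over consecutive segments.
double ring_area(const triangle& t)
{
    double a = 0.0;
    for (std::size_t i = 0; i + 1 < t.size(); ++i)
        a += t[i].x * t[i + 1].y - t[i + 1].x * t[i].y;
    return a / 2.0;
}

int main()
{
    point a = { 0, 0 }, b = { 4, 0 }, c = { 0, 3 };
    triangle t(a, b, c);
    std::cout << ring_area(t) << std::endl;  // prints 6
    return 0;
}

Whether such an adapter would actually satisfy GGL's ring concept is exactly the open question raised above.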
Another reason why I vote for acceptance is that GGL has three devoted developers that try to collaborate well with other developers (like incorporating the work of Federico J. Fernández on spatial indexes or trying to work with Brandon Kohn or Lucanus Simonson).
- What is your evaluation of the design?
Are we talking about the design of the "dimension-agnostic, coordinate-system-agnostic and scalable kernel" here? Honestly, I don't know exactly what is meant by this. So let's evaluate the "tag dispatching system" and the "configuration of the algorithms by the strategies" instead.

The tag dispatching system seems to work well, and its details are well thought out and explained. Inheritance between the concepts is currently not present (which is sad, a ring as a specialization of a polygon would have been interesting), so it is difficult to directly compare it to the corresponding SFINAE dispatching system of Boost.Polygon. But the code is indeed easy to read; I got the impression that I was able to understand how it works from the explanations and would be able to write my own extensions, if necessary. And it didn't have any trouble with different compilers. The much more finished Boost.Polygon, on the other hand, was constantly fighting with different compilers. So tag dispatching really seems to have advantages over SFINAE.

The "configuration of the algorithms by the strategies" seems to work well, and its details are well thought out and explained.

The design of the geometry concepts is not yet perfect. Some details of problems with the current geometry concepts were given above. How about the "dimension-agnostic" claim? The existing geometry concepts are quite focused on 2D; both 1D and 3D fall short. Cartesian products of spaces or geometries and quotient space constructions are completely ignored. (An axis-aligned box is a Cartesian product of 1D intervals, for example. Representing it by two points is convenient and makes sense as a representation, but ignoring the Cartesian product part is still not good.)

It's good that there are segments, because a polygon is mostly defined by its segments, in the same vein as a triangulated surface is mostly defined by its triangles. This also explains for me why requiring a polygon/ring to have the first and last point equal could make sense. After all, these points just define the corresponding segments. But nevertheless, I'm not sure whether this requirement is really a good idea. The requirement that a ring/polygon be closed has many consequences; first of all, it seems to make adapting legacy polygon classes much more difficult. On the other hand, a polygon is not so much a collection of points as a collection of segments, so it is understandable that we want to make it easy and efficient to iterate over the segments of a polygon.

Does the library enable me to write the geometric algorithms that I want in a generic way? For 2D algorithms, the answer is probably yes. A thing I noticed is that the different functions don't follow any obvious convention about the order of their arguments. Sometimes the 'intended' return parameters (out parameters) are the first parameters (as most of us would expect), sometimes they are in the middle, sometimes they are at the end.
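For readers unfamiliar with the technique under evaluation, here is a minimal tag-dispatching sketch; the names are invented for illustration and do not reproduce GGL's actual traits and metafunctions:

#include <iostream>

// Each geometry type advertises a tag; the algorithm implementation is
// selected by specialising on the tag rather than on the type itself.
struct point_tag {};
struct polygon_tag {};

struct my_point { double x, y; };
struct my_polygon { /* members omitted */ };

template <typename Geometry> struct tag;  // primary template, intentionally undefined
template <> struct tag<my_point> { typedef point_tag type; };
template <> struct tag<my_polygon> { typedef polygon_tag type; };

template <typename Tag> struct area_dispatch;  // primary template
template <> struct area_dispatch<point_tag>
{
    template <typename G> static double apply(const G&) { return 0.0; }
};
template <> struct area_dispatch<polygon_tag>
{
    template <typename G> static double apply(const G&)
    {
        return 0.0;  // the actual area computation is omitted in this sketch
    }
};

// The user-facing free function forwards to the tag-selected implementation.
template <typename Geometry>
double area(const Geometry& g)
{
    return area_dispatch<typename tag<Geometry>::type>::apply(g);
}

int main()
{
    my_point p = { 1.0, 2.0 };
    std::cout << area(p) << std::endl;  // dispatches to the point specialisation
    return 0;
}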
- What is your evaluation of the implementation?
The implementation looks quite nice to me. I especially like that the algorithms and the concepts are not in the same source file (as is often the case for Boost.Polygon). However, one question arises with respect to the extensions, because they already exist but are not reviewed. Will the extensions be added to the library in boost without further review? Will they be distributed separately?
- What is your evaluation of the documentation?
The documentation is not sufficient. However, this is not a statement about the quantity, but about the organization, and about what is documented. Many individual algorithms and functions actually have quite nice documentation. However, the overall picture is missing, and the reference to OGC specifications or examples is no substitute for proper documentation. The position of GGL with respect to the open/closed-set interpretation of geometric objects (i.e. how to treat the boundary of geometric objects) is not clearly documented. I actually collected many typos during my review, and corrected some in the corresponding source files, but I will have to send this in a separate mail. (And I probably should also elaborate more on the points where the documentation could/should be improved.)
- What is your evaluation of the potential usefulness of the library?
I guess we all agree that a foundation for geometry libraries and algorithms in general would be quite useful. And the proposed library is more than just this foundation. There are utilities for input (WKT) and output (WKT, GeoJSON, SVG), distance computations with respect to different coordinate systems, and much more additional functionality like arithmetics, simplify, boolean operations, compare, ???
- Did you try to use the library? With what compiler? Did you have any problems?
I used it with VC9express and gcc-4.3.4. The crashing/hanging intellisense of VC9express is really annoying, but is this really the fault of GGL?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I read the documentation, tried the examples, had a look at some of the tests, stepped through some of the existing code with a debugger, tried modifying some of the existing examples slightly (with disappointing effects...). I also tried to look up the referenced OGC specifications, which was probably not a good idea. I tried to build the soci examples, but gave up when I had to build soci.
- Are you knowledgeable about the problem domain?
What is the problem domain? GIS? No, I'm not knowledgeable about GIS. VLSI? Yes, I'm quite familiar with it, but that was the problem domain of Boost.Polygon. Computational geometry? A bit. Generic geometry? Sorry, never heard of that problem domain. Is this related to the Erlangen Program of Felix Klein? Generic computational geometry? Not really. I heard that CGAL should cover quite some ground there, but I never worked with CGAL, only browsed over some of its documentation. Geometry? I know enough geometry to have some feeling about all the uncountably many things in geometry that I don't know.

Final comments: The library is "work in progress". I looked in the policies and didn't find anything that disallows accepting such libraries. But it should be clear that the first public release must meet higher quality standards.

I heard that GGL has its own mailing list. When GGL gets accepted as a Boost library, this mailing list should at least be referenced in the "http://www.boost.org/community/groups.html" section of the boost web page. It might also be nice if in this case, that list would be open to discussions regarding the development of geometry-related boost libraries and questions about them in general. Regards, Thomas

Hi Thomas, Thank you very much for all the time you spent on the review of our library. Your review is really impressive, objective and thorough. And of course we are glad with the well-considered outcome. We will take all the points you raised into account.
The documentation is not sufficient. However, this is not a statement about the quantity, but about the organization, and about what is documented. Many individual algorithms and functions actually have quite nice documentation. However, the overall picture is missing, and the reference to OGC specifications or examples is no substitute for proper documentation.
We agree with this. If GGL is accepted, we will (probably) move to QuickBook, as suggested on this list. All documentation will then be revisited and extended, with the help of all the comments made by you and the other reviewers these last weeks. We can then leave the imposed structure of Doxygen, which was sometimes useful and sometimes not convenient.
I actually collected many typos during my review, and corrected some in the corresponding source files, but I will have to send this in a separate mail. (And I probably should also elaborate more on the points where the documentation could/should be improved.)
They are welcome!
I heard that GGL has its own mailing list. When GGL gets accepted as a Boost library, this mailing list should at least be referenced in the "http://www.boost.org/community/groups.html" section of the boost web page. It might also be nice if in this case, that list would be open to discussions regarding the development of geometry-related boost libraries and questions about them in general.
Certainly, the GGL mailing list is open to the public, and open for discussions on GGL and geometry. Thanks again, Barend

Barend Gehrels wrote:
Thomas Klimpel wrote:
I actually collected many typos during my review, and corrected some in the corresponding source files, but I will have to send this in a separate mail. (And I probably should also elaborate more on the points where the documentation could/should be improved.)
They are welcome!
I decided that it's easiest to fix all found typos in the sources. Find attached the corresponding patch. So I won't elaborate more on the points where the documentation could/should be improved right now, but I may review the improved documentation when it becomes available.
I heard that GGL has its own mailing list. When GGL gets accepted as a Boost library, this mailing list should at least be referenced in the "http://www.boost.org/community/groups.html" section of the boost web page. It might also be nice if in this case, that list would be open to discussions regarding the development of geometry-related boost libraries and questions about them in general.
Certainly, the GGL mailing list is open to the public, and open for discussions on GGL and geometry.
So, now that GGL has been accepted, how about adding this link to "http://www.boost.org/community/groups.html"? Should I file a ticket for the boost-webpage to add that link?
Congratulations on the fact that GGL has been formally accepted into Boost. I'm also happy to hear that you decided to go with the name Boost.Geometry. And I'm happy that you started discussing the handling of the extensions. As you may guess, what I've done now is to clean up/erase the notes that I took during the GGL review. One last possibly interesting observation that wasn't included in my review is that the way the specializations for algorithms working on multi-geometries are handled can lead to quadratic runtime complexity in some cases, for example when determining the minimum distance of two multi-point geometries. But to be honest, I don't think that this is really important, and if it were really important, it would still be possible to override the specialization of the algorithm for the cases where it is important. Regards, Thomas

Hi Thomas,
I decided that it's easiest to fix all found typos in the sources. Find attached the corresponding patch. So I won't elaborate more on the points where the documentation could/should be improved right now, but I may review the improved documentation when it becomes available.
Thanks for your patch, and we will notify you when the improved documentation is ready for you; it is welcome!
So, now that GGL has been accepted, how about adding this link to "http://www.boost.org/community/groups.html"? Should I file a ticket for the boost-webpage to add that link?
As soon as we've completely decided how we will fill in the details, we can add it.
Congratulations on the fact that GGL has been formally accepted into Boost. Thanks!
As you may guess, what I've done now is to clean up/erase the notes that I took during the GGL review. One last possibly interesting observation that wasn't included in my review is that the way the specializations for algorithms working on multi-geometries are handled can lead to quadratic runtime complexity in some cases, for example when determining the minimum distance of two multi-point geometries. But to be honest, I don't think that this is really important, and if it were really important, it would still be possible to override the specialization of the algorithm for the cases where it is important.
That is right, it is currently quadratic for multi-multi. Having a good spatial index will certainly help here. We will look at that in the future. Regards, Barend
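For reference, the quadratic behaviour described above corresponds to the obvious nested loop, sketched here with plain structs rather than GGL's types:

#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

struct pt { double x, y; };

// Naive multi-point/multi-point minimum distance: every point of one set is
// compared against every point of the other, hence O(n * m) comparisons.
// A spatial index could prune most of these candidate pairs.
double min_distance(const std::vector<pt>& a, const std::vector<pt>& b)
{
    double best = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
        {
            double dx = a[i].x - b[j].x;
            double dy = a[i].y - b[j].y;
            double d = std::sqrt(dx * dx + dy * dy);
            if (d < best) best = d;
        }
    return best;
}

int main()
{
    std::vector<pt> a(2), b(3);
    a[0].x = 0; a[0].y = 0; a[1].x = 5; a[1].y = 5;
    b[0].x = 3; b[0].y = 4; b[1].x = 9; b[1].y = 9; b[2].x = 6; b[2].y = 5;
    std::cout << min_distance(a, b) << std::endl;  // prints 1: (5,5) vs (6,5)
    return 0;
}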

Thomas Klimpel wrote:
Please always state in your review, whether you think the library should be accepted as a Boost library.
This is a difficult question in this case, because it is not completely clear what it means. I found out during my review that the library as submitted for review is "work in progress", at least in parts. So I vote for acceptance, but try to explain how to interpret that vote.
The library as a whole is not ready for release, but it is ready enough for a formal review, especially if we consider that it is normally expected that a library author will modify his library after a formal review to address the most important issues that came up during the review.
The following parts of the library failed to give me the impression that they are finished: - The user level documentation
To me it seems like a bad idea to accept libraries that are submitted with unfinished documentation, for the following reason: it's much harder to review a poorly-documented library. It's more friendly to reviewers if they don't have to search through the source or ask questions here to determine how to do basic things, and the limited time that reviewers have could be spent on more productive things. I'd like to discourage setting a precedent of accepting libraries with inadequate docs, even when it's believed that they will be improved later, so that future library authors are not tempted to submit their libraries too soon. Regards, Phil.

On Fri, Nov 20, 2009 at 5:57 PM, Phil Endecott <spam_from_boost_dev@chezphil.org> wrote:
To me it seems like a bad idea to accept libraries that are submitted with unfinished documentation, ...
Contingency! The review should be contingent on an "adequate" level of documentation. Jon

Jonathan Franklin wrote:
On Fri, Nov 20, 2009 at 5:57 PM, Phil Endecott <spam_from_boost_dev@chezphil.org> wrote:
To me it seems like a bad idea to accept libraries that are submitted with unfinished documentation, ...
Contingency! The review should be contingent on an "adequate" level of documentation.
Thanks. Jeez! What a tough crowd! I have to say that I laughed out loud when I read that a precedent of accepting things into boost with inadequate documentation should not be set! Too late! Some existing boost libraries took quite a while to come up to the level of documentation we enjoy today. There are still noticeable differences from library to library in the wonderfulness that is boost documentation. Almost every day on the users' list someone is posting about some library saying, "Help! I've read all the documentation on this library and I can't get a clue in the forest of clues on two-for-1 free clue day!" (I may be paraphrasing a bit;) Now, all of a sudden, documentation has to be complete and professional to be considered for inclusion in boost. Hope there's a grandfather clause! Patrick

2009/11/5 Hartmut Kaiser <hartmut.kaiser@gmail.com>
------------------------------------------------------- Everybody on this list is invited to participate in this formal review. I hope to see your review of this library, including your vote, and I welcome your participation in the discussions on the Boost mailing list.
Please always state in your review, whether you think the library should be accepted as a Boost library.
My Review: GGL is an ambitious project with a large scope. While its design is well thought out and many algorithms are probably mature, there is obviously unfinished work pending in fields that seem to be important and that were added to the library only recently. This puts me as a reviewer in a difficult situation. I have a tendency to reject the library simply because important parts are not finished. I have the impression that the rules are bent here, because the contributors mistook generic library development for a horse race. On the other hand, there is a lot of substance, work and dedication in this library, and the project deserves approval. So my vote is a weak YES in the end.
- What is your evaluation of the design?
The design is very clear. I think it can serve as a standard example of how to cover a big, non-trivial problem domain using metaprogramming, partial specialisation and tag dispatch to make it uniformly accessible to a set of generic algorithms.
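A small sketch of the partial-specialisation side of this (the trait names are hypothetical, not GGL's actual interface): two unrelated point representations are made uniformly accessible to one generic algorithm.

#include <iostream>

// Access traits: specialised once per point representation.
template <typename P> struct access;  // primary template, intentionally undefined

struct my_point { double x, y; };

template <> struct access<my_point>
{
    static double get_x(const my_point& p) { return p.x; }
    static double get_y(const my_point& p) { return p.y; }
};

// Partial specialisation: any two-element array of an arithmetic type.
template <typename T> struct access<T[2]>
{
    static T get_x(const T p[2]) { return p[0]; }
    static T get_y(const T p[2]) { return p[1]; }
};

// One generic algorithm serving both representations.
template <typename P>
double sum_of_coordinates(const P& p)
{
    return access<P>::get_x(p) + access<P>::get_y(p);
}

int main()
{
    my_point mp = { 1.0, 2.0 };
    int arr[2] = { 3, 4 };
    std::cout << sum_of_coordinates(mp) << std::endl;   // prints 3
    std::cout << sum_of_coordinates(arr) << std::endl;  // prints 7
    return 0;
}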
- What is your evaluation of the documentation?
The part of the documentation I liked most was the design rationale page. It clearly describes the library's basic design ideas step by step. The reader not only gets a good understanding of GGL's design but is also able to use these techniques in her own projects. Important parts, like the one on "set theoretic operations", have been added during the review, which nurtured the "horse race" impression. The doxygen-generated docs are minimal and have a few minor inconsistencies (e.g. template parameter S and Strategy within one page). Missing is a synopsis of which algorithms are actually implemented for which combinations of geometries, with the complexities for each such combination (or at least for the most efficiency-critical algorithms). Benchmark figures were sometimes given in a "performance" paragraph, but these are almost useless without a context. When I had problems coding some toy programs, the documentation proved insufficient in providing the needed information.
- What is your evaluation of the potential usefulness of the library?
There is no question that a general generic geometry library is of very high usefulness.
- Did you try to use the library? With what compiler? Did you have any problems?
Used it with msvc-9. All examples worked well. Trying to run the tests resulted in some errors, which I have not tried to analyse: ...failed updating 59 targets... ...skipped 169 targets... ...updated 279 targets...

I have tried to run some toy programs and encountered problems very quickly, when I tried to initialize and use multi_polygons, e.g.

ggl::multi_polygon<polygon_2d> mupo;
std::cout << ggl::dsv(mupo) << std::endl;

produced this error message:

boost\ggl/util/write_dsv.hpp(344) : error C2039: 'apply' : is not a member of 'ggl::dispatch::dsv<Tag,IsMulti,Geometry>'

Also, I was not successful in assigning or appending a polygon to a multi_polygon. In the experiments that I did with Boost.Polygon, I was able to customize my own classes, build a polygon generator and run law-based tests within a few hours. In GGL this was not possible because of basic problems with multi_polygons.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Reading the documentation. Browsing some code, including test code. Trying to run some toy programs.
- Are you knowledgeable about the problem domain?
I'm not an expert in computational geometry, but an experienced developer in the field of generic programming. Best regards, Joachim Faulhaber

Hi Joachim, Again, thanks for reviewing our library!
Important parts, like that one on "set theoretic operations" have been added during the review [...]
No excuse, but for your information, this page, and the page about robustness, were added as answers to (sometimes many) questions on this list during the review.
I have tried to run some toy programs and encountered problems very quickly, when I tried to initialize and use multi_polygons, e.g.
ggl::multi_polygon<polygon_2d> mupo; std::cout << ggl::dsv(mupo) << std::endl; // produced this error message
I agree that multi* deserves more attention in examples and documentation. But, just to be complete: including "ggl/multi/multi.hpp" would have solved this problem. Actually, this one was in the documentation (related pages, compiling). This was probably also the cause of other problems you've had with multi.
Also I was not successful in assigning or appending a polygon to a multi_polygon.
mupo.push_back(po); // for appending

What is written in the docs is that a multi-polygon behaves like a range. For editing, it behaves like a std::container. There were more comments about this, similar to those about the inner containers of a polygon. We will come back to this; the RangeEx functionality might help here. Regards, Barend
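Putting both remarks together, a corrected version of the earlier toy snippet would presumably look as follows; the first two header paths and the polygon_2d typedef are assumptions, only the multi/multi.hpp include and the push_back usage are confirmed in this thread:

// Header paths below are assumptions based on this thread's discussion;
// only the multi/multi.hpp include is explicitly confirmed above.
#include <ggl/ggl.hpp>
#include <ggl/geometries/geometries.hpp>
#include <ggl/multi/multi.hpp>  // needed for multi_* support, including dsv

#include <iostream>

int main()
{
    ggl::polygon_2d po;                         // assumed predefined 2D polygon typedef
    ggl::multi_polygon<ggl::polygon_2d> mupo;
    mupo.push_back(po);                         // editing works like a std::container
    std::cout << ggl::dsv(mupo) << std::endl;   // compiles once multi.hpp is included
    return 0;
}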
participants (20)
- Barend Gehrels
- Brandon Kohn
- Bruno Lalande
- Hartmut Kaiser
- Joachim Faulhaber
- joel
- Jonathan Franklin
- Mateusz Loskot
- Mathias Gaunard
- Michael Fawcett
- Patrick Horgan
- Paul A. Bristow
- Phil Endecott
- Pierre Morcello
- Rutger ter Borg
- Simonson, Lucanus J
- Stefan Strasser
- Stewart, Robert
- Thomas Klimpel
- vicente.botet