Preview 2 of a template Geometry Library

Dear list, Herewith the URL to the second preview of Geodan's Geometry Library, including documentation and examples. The first preview was sent to this list in January. There were several valuable comments, and they have been processed; the documentation contains a page with the most important changes. By far the most important change is to the type geometry::point. The new design is described here: http://geometrylibrary.geodan.nl/points.html As in the first preview, there are still implementation issues and algorithmic details to be figured out or implemented. This is a preview and not yet a real submission. However, I think the community will get a feeling for how it works and/or should work. The complete documentation (now generated using Doxygen) is here: http://geometrylibrary.geodan.nl/index.html; sources and header files can be downloaded from "Related pages". Best regards, Barend Gehrels ------------------------------------- Barend Gehrels, Geodan Holding b.v., Amsterdam, The Netherlands, www.geodan.nl

This is a topic I find extremely interesting since by the will of fate I keep coming back to it in the course of my everyday work. In fact, I have implemented (fully or in part) a similar set of library facilities 3 times in a row... and I can't very well open-source any of the three implementations :( Nothing prevents me from collaborating on an existing open source project though :) I have only glanced briefly through the documentation so my comments may be based on an incomplete picture; I apologize in advance if that's the case. Nevertheless I am going to respond right away just so you know you're not alone ;). I'll also try to spend more time reviewing your library later today.
- I very much like the traits-based approach to determining the type of calculation results. However, a good extensible facility for deriving the result type from argument types appears to be generally useful and not at all specific to geometry. Unless there is one already in Boost (I am not aware of it and a quick check of the docs didn't help), perhaps it may be introduced as a separate library?
- While generalized point types (2D vs. 3D and XY vs. lat/lon) are OK, it is often extremely useful in graphics code to make your objects fit the underlying representation of an externally defined structure (such as POINT type in Win32) -- for example by publicly or privately inheriting from it. This lets you use e.g. vectors of points as arguments for system APIs. Can be done by adding an extra template argument (with a default) to point<> and moving most of the implementation details into a traits class.
- There is a box type but not a (1D) range type (I don't think you allow 1D points, or do you?). It is often convenient to define operations on bounding rectangles in terms of ranges. The comment about making point types compatible with externally defined structure applies to boxes too.
- I didn't find any facilities for linear transformations in your list. Am I not looking in the right place, or are they not there? If the latter, it is probably the biggest omission. Or do you think this need is best served with uBLAS? I have no idea if this can be done using uBLAS in both an efficient and a convenient way, so I would like to hear everyone's thoughts. I am very much used to writing things like:

pt = shift( dst_offs ) * rotate( angle ) * shift( -src_offs ) * pt;

or when it is a repeated operation:

transform2D xf = shift(dst_offs) * rotate(angle) * shift(-src_offs);
.......
pt = xf * pt;

or maybe even

for_each( pts.begin(), pts.end(), _1 = xf * _1 ); // using lambda

- In 2D geometry, very many computations can be most efficiently expressed in terms of dot product and cross product, so it wouldn't hurt to have these operations implemented as part of the library. It probably makes sense (I haven't done that before) to have a type for offset vectors in the geometrical sense, separate from points: vec = pt - pt; pt = pt + vec; et al. The difference is that the offset part of a linear transformation should apply to a point but not to a vector. Again, is this something best done with uBLAS? Ok, really need to go back to work. I'll probably have more to say on this later. ...Max...
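[To make the transform2D usage above concrete, here is a minimal sketch assuming a 2x3 affine matrix representation; all names (point2d, transform2d, shift, rotate) are hypothetical, not any existing library's API.]

#include <cmath>

// hypothetical 2D point type
struct point2d { double x, y; };

// unary minus, so that shift(-src_offs) works
inline point2d operator-(const point2d& p)
{
    point2d q = { -p.x, -p.y };
    return q;
}

// row-major 2x3 affine matrix: [a b tx; c d ty]
struct transform2d
{
    double a, b, tx;
    double c, d, ty;
};

inline transform2d shift(const point2d& offs)
{
    transform2d t = { 1, 0, offs.x,
                      0, 1, offs.y };
    return t;
}

inline transform2d rotate(double angle)
{
    double ca = std::cos(angle), sa = std::sin(angle);
    transform2d t = { ca, -sa, 0,
                      sa,  ca, 0 };
    return t;
}

// composition: (l * r) applies r first, then l
inline transform2d operator*(const transform2d& l, const transform2d& r)
{
    transform2d t = {
        l.a * r.a + l.b * r.c, l.a * r.b + l.b * r.d, l.a * r.tx + l.b * r.ty + l.tx,
        l.c * r.a + l.d * r.c, l.c * r.b + l.d * r.d, l.c * r.tx + l.d * r.ty + l.ty
    };
    return t;
}

// application of a transform to a point
inline point2d operator*(const transform2d& t, const point2d& p)
{
    point2d q = { t.a * p.x + t.b * p.y + t.tx,
                  t.c * p.x + t.d * p.y + t.ty };
    return q;
}

[With these definitions, pt = shift(dst_offs) * rotate(angle) * shift(-src_offs) * pt; compiles and the composition folds into a single matrix before being applied to the point.]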

- While generalized point types (2D vs. 3D and XY vs. lat/lon) are OK, it is often extremely useful in graphics code to make your objects fit the underlying representation of an externally defined structure (such as POINT type in Win32) -- for example by publicly or privately inheriting from it. This lets you use e.g. vectors of points as arguments for system APIs. Can be done by adding an extra template argument (with a default) to point<> and moving most of the implementation details into a traits class.
In my understanding, the goal of point is not to wrap your own point structure but to be replaced with it, provided that it satisfies the same concept: value<>() functions, coordinate_type, coordinate_count... If you do so, all the algorithms are supposed to work with your structure. Bruno

Bruno Lalande wrote:
- While generalized point types (2D vs. 3D and XY vs. lat/lon) are OK, it is often extremely useful in graphics code to make your objects fit the underlying representation of an externally defined structure (such as POINT type in Win32) -- for example by publicly or privately inheriting from it.
Ouch, no!
This lets you use e.g. vectors of points as arguments for system APIs. Can be done by adding an extra template argument (with a default) to point<> and moving most of the implementation details into a traits class.
In my understanding, the goal of point is not to wrap your own point structure but to be replaced with it, provided that it satisfies the same concept: value<>() functions, coordinate_type, coordinate_count... If you do so, all the algorithms are supposed to work with your structure.
This has been well discussed many times: make Point a concept and allow many models of the Point concept. It is a mistake to have a single do-it-all point class. What if I want a 2 dimensional point with a short for x and a long for y? Your class can never satisfy all possibilities. The best strategy is to allow for multiple point models to be usable in your algorithms. Example:

liba::point p1;
libb::point<int> p2;
POINT p3; // Win32

cout << distance(p1, p2) << endl;
cout << distance(p2, p3) << endl;

Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net
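[To make the multiple-models idea concrete, a minimal sketch of a traits-based distance that works across unrelated point types; the point_traits template and its members are hypothetical, not any library's actual API.]

#include <cmath>

struct POINT { long x, y; };      // stand-in for the Win32 struct
struct my_point { double x, y; }; // some other, unrelated model

// each model of the (hypothetical) Point concept specializes this
template <typename P> struct point_traits;

template <> struct point_traits<POINT>
{
    typedef long coordinate_type;
    static long get_x(const POINT& p) { return p.x; }
    static long get_y(const POINT& p) { return p.y; }
};

template <> struct point_traits<my_point>
{
    typedef double coordinate_type;
    static double get_x(const my_point& p) { return p.x; }
    static double get_y(const my_point& p) { return p.y; }
};

// one algorithm, any mix of models
template <typename P1, typename P2>
double distance(const P1& a, const P2& b)
{
    double dx = point_traits<P2>::get_x(b) - point_traits<P1>::get_x(a);
    double dy = point_traits<P2>::get_y(b) - point_traits<P1>::get_y(a);
    return std::sqrt(dx * dx + dy * dy);
}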

This has been well discussed many times: make Point a concept and allow many models of the Point concept. It is a mistake to have a single do-it-all point class.
The only potential problem with this is operator overloading, though for the specific case of the Win32 (or X Window, for that matter) point structure it's not a big deal, since the original class does not have its own arithmetic defined. If it did... well, one can always provide a simple wrapper in the right namespace to resolve the ADL conflict. So I guess I don't disagree. ...Max...

Joel de Guzman wrote:
This has been well discussed many times: make Point a concept and allow many models of the Point concept. It is a mistake to have a single do-it-all point class. What if I want a 2 dimensional point with a short for x and a long for y?
This is the statement a software engineer makes, not someone tasked with solving computational geometry problems. At some point the library (the underlying concepts) has to place strategic limits on what is and is not a viable representation for a geometric entity - why stop at a point, why not call everything a simplex and parametrize things accordingly? The original poster's submission (not sure if it is a review) falls dismally short of what a generic templated computational library should be at the bare minimum, let alone something useful to the general populace. Far from simple nD objects, the spaces in which those objects exist must be defined or defaulted, and the kinds of operations, their bounds and limits, must have definable taps within the library - the "I'll template everything off of traits and force the end user to figure all the details out" view will not work here, as most people intending to use the library will want to actually use it, not develop for it. In your example:
cout << distance(p1, p2) << endl; cout << distance(p2, p3) << endl;
(not assuming the previous defs), in what space is this distance metric being computed (Euclidean? Hilbert space?)? What kind of precision are you after? Should distance(p1, p2) == distance(p2, p1)? What about performance? If I want to write a gaming or graphics engine, how can I get around the checks and balances that a routine such as is_point_in_triangle will endure? Will I have to reinvent the wheel every time I can't get the exact thing I want out of the library? None of these kinds of concepts are being described or addressed, and these are only a few of the simpler requirements. Writing a geometry library is hard: look at CGAL - over 1 million euros in funding, both commercial and academic wings - and look at the monstrosity they have produced. If you can determine the intersection point of 2 line segments in under 15 lines of code using CGAL I take my hat off to you :> I would love it if there was a formal discussion on this ML about what would constitute a computational geometry library that is worthy of the Boost mark - not just random people coming up with their latest attempt at one. You really have to have done some work in this area before you can even begin to consider the technical issues that are involved. A passing experience usually gives the wrong impression about the complexity of the problem(s) in this field. Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

The original poster's submission (not sure if it is a review) falls dismally short of what a generic templated computational library should be at the bare minimum, let alone something useful to the general populace.
Quite a statement. I limit my answer to this: the submission is a preview, as mentioned in my posting.
One other thing I forgot to mention, was when looking at the provided link http://geometrylibrary.geodan.nl/points.html
The distance definition takes 2 templated point types but always returns a double. The lesson here is that the precision required for the result may be more than the precision of the inputs. In this case the best strategy is, at the minimum, to return the same numeric type as the points, or better, as the result type.
The distance of two integers may result in a floating point value; therefore distance returns a double. Internally, as you will see, often the same numeric type or better is used, as you describe. See select_type_traits and large_type_traits.
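[To illustrate the kind of promotion Barend refers to, a sketch in the spirit of select_type_traits / large_type_traits; this is not the library's actual code, only the idea, with hypothetical names.]

// pick the wider of two coordinate types by size (a fuller version
// would also prefer floating point over integral of equal size)
template <typename T1, typename T2, bool FirstWider = (sizeof(T1) >= sizeof(T2))>
struct large_type { typedef T1 type; };

template <typename T1, typename T2>
struct large_type<T1, T2, false> { typedef T2 type; };

// distance between integer points must still be fractional,
// so integral types promote to double
template <typename T> struct promote_integral { typedef T type; };
template <> struct promote_integral<short> { typedef double type; };
template <> struct promote_integral<int>   { typedef double type; };
template <> struct promote_integral<long>  { typedef double type; };

// result type for distance over two (possibly different) point types
template <typename P1, typename P2>
struct distance_result
{
    typedef typename promote_integral<
        typename large_type<typename P1::coordinate_type,
                            typename P2::coordinate_type>::type
    >::type type;
};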
The other issue is that the points should be of the same type, not differing types. This strategy_traits approach, which assumes there will be an implementation for every combination of types, is not adequate - what if one of the types can be easily promoted to the other?
Points of different types can be handled. If you use a derived class, you have to provide an appropriate traits class. It is not promoted. At least not in the compilers I use. Besides that, the library user might choose to specify a strategy.
e.g. double conversion to my_super_duper_arbitrary_precision_floating_point_type via a copy constructor? Does that mean I have to provide a wrapper around the explicit call to my implementation of distance with my specific type? Indeed the distance result could use large_type_traits as well.
Barend Gehrels

Barend Gehrels wrote:
Quite a statement. I limit my answer to this: the submission is a preview, as mentioned in my posting. I should not have been so brash; it's just very exciting to see some momentum on this topic again on the Boost ML.
The distance of two integers may result in a floating point value; therefore distance returns a double. Internally, as you will see, often the same numeric type or better is used, as you describe. See select_type_traits and large_type_traits.
I couldn't find one for the instance of distance - perhaps I wasn't looking properly. In any case, the simplest definition in my opinion could be:

template<typename point_type>
typename point_trait<point_type>::value_type
distance(const point_type& p1, const point_type& p2);

btw will there be support for base value inputs?

template<typename T>
T distance(const T& x1, const T& y1, const T& x2, const T& y2);
Points of different types can be handled. If you use a derived class, you have to provide an appropriate traits class. It is not promoted. At least not in the compilers I use. All the compilers I use do it quite easily.
class my_rational
{
public:
    my_rational(const int& i);
    my_rational& operator=(const int& i);
};

void foo(my_rational value) { ... }

{
    int i = 123;
    foo(i);
}

As per the example above, if I provided a distance strategy/routine using my_rational, then I should not need to do anything else in order to be able to pass a series of integers into the routine - the result would of course be my_rational, but the class may also have a few cast operators to convert into the various C++ PODs.
Besides that, the library user might choose to specify a strategy. That's great, but consider me a really dumb user: I have 4 values I've just read in from a file and I want to compute the Euclidean distance. I would like to be able to say the following without having to worry about strategies or converting my values into point types etc.:
dist = boost::geometry::distance(x1,y1,x2,y2);

In fact, to further this discussion before any proposal could be made, one must consider the use cases people will want to have. This concept in comp-geo s/w is called layering, and the computational issues surrounding it are called folding ("Robustness In Geometric Computations" [Hoffman01]). Above is just one of many use cases (one of the more important ones); there are others where the user will define a type - could be a POD, could be something else - and a kernel, and others may want to also define mathematical operational variants (e.g. a new form of addition or subtraction that is valid in their space). As you can see, we're still discussing the base types of the objects, not the objects themselves, let alone the policy nature of the higher level routines. E.g. I want to compute a Euclidean distance, but my values represent astronomical/cosmological values in a meter basis - what is the best strategy here? Obviously, as you suggested earlier, a user defined distance, but this stuff should come later. There is so much to this topic, you indeed are very courageous to take it on. Regards, Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

The distance of two integers may result in a floating point value; therefore distance returns a double. Internally, as you will see, often the same numeric type or better is used, as you describe. See select_type_traits and large_type_traits.
I couldn't find one for the instance of distance, perhaps I wasn't looking properly, in any case the simplest definition in my opinion could be:
template<typename point_type>
typename point_trait<point_type>::value_type
distance(const point_type& p1, const point_type& p2);
See also the discussion on the other thread about distance and "distance_result". It turns out to be very useful, in some cases, to have a squared variant for fast comparisons. But not all point types need that. Besides that, the library as previewed can compare two different types. But all this, indeed, is not "the simplest definition".
btw will there be support for base value inputs?
template<typename T> T distance(const T& x1, const T& y1, const T& x2, const T& y2);
Good idea.
Points of different types can be handled. If you use a derived class, you have to provide an appropriate traits class. It is not promoted. At least not in the compilers I use. All the compilers I use do it quite easily.
class my_rational
{
public:
    my_rational(const int& i);
    my_rational& operator=(const int& i);
};

void foo(my_rational value) { ... }

{
    int i = 123;
    foo(i);
}
As per the example above, if I provided a distance strategy/routine using my_rational, then I should not need to do anything else in order to be able to pass a series of integers into the routine - the result would of course be my_rational, but the class may also have a few cast operators to convert into the various C++ PODs.
This is not how I use it in the traits class, but I have to check again whether it's promoted there. I thought it was not.
Besides that, the library user might choose to specify a strategy. That's great, but consider me a really dumb user: I have 4 values I've just read in from a file and I want to compute the Euclidean distance. I would like to be able to say the following without having to worry about strategies or converting my values into point types etc.:
dist = boost::geometry::distance(x1,y1,x2,y2);
The strategy is optional... dumb users don't have to use one... But again, offering a calculation of the distance from four coordinates is a good idea.
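[A sketch of what such a coordinate-level overload might look like; the signature is hypothetical, not part of the previewed library.]

#include <cmath>

// Euclidean distance straight from four coordinate values; integral
// inputs are promoted to double, matching the point-based variant's
// behavior of returning a floating point result
template <typename T>
double distance(T x1, T y1, T x2, T y2)
{
    double dx = static_cast<double>(x2) - static_cast<double>(x1);
    double dy = static_cast<double>(y2) - static_cast<double>(y1);
    return std::sqrt(dx * dx + dy * dy);
}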
(snip)
There is so much to this topic, you indeed are very courageous to take it on.
Thanks. Barend Gehrels, Geodan, Amsterdam

Arash Partow wrote:
In your example:
cout << distance(p1, p2) << endl; cout << distance(p2, p3) << endl;
(not assuming the previous defs), in what space is this distance metric being computed (Euclidean? Hilbert space?)?
What kind of precision are you after? Should distance(p1, p2) == distance(p2, p1)?
What about performance? If I want to write a gaming or graphics engine, how can I get around the checks and balances that a routine such as is_point_in_triangle will endure?
Will I have to reinvent the wheel every time I can't get the exact thing I want out of the library?
That's beside the point. The point is that you should be able to use different models of the point concept interchangeably, much like STL algorithms can work on different container types. I think you are missing my point. I could very well rename "distance" to "foo" and my point still holds. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
That's beside the point. The point is that you should be able to use different models of the point concept interchangeably, much like STL algorithms can work on different container types. I understand what the original poster was trying to convey, and your opinion is valid, but they both need much more clarification than saying things should be interchangeable.
I think you are missing my point. I could very well rename "distance" to "foo" and my point still holds.
If we are talking high level abstraction principles in s/w design then this thread may never end :> Regards, Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

I understand what the original poster was trying to convey, and your opinion is valid, but they both need much more clarification than saying things should be interchangeable.
Indeed, things have to be clear on that point, and I'm a bit confused about what really constitutes the concept of a point. What I have understood until now is that a class is the point if it exposes:
* the value<>() getter and setter
* coordinate_type
* coordinate_count
and that besides this, the .x and .y of the point_xy class were just some syntactic helpers provided for an easier external use of this class. But I see that almost all the algorithms use .x and .y, which means that they won't work with the basic concept I just described. Shouldn't all those algorithms be implemented in terms of value<>() accesses? Some headers that use .x and .y: area.hpp strategies_point_xy.hpp centroid.hpp correct.hpp envelope.hpp intersection_linestring.hpp intersection_segment.hpp Bruno
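[For illustration, a 2D distance written purely against the accessor-based concept Bruno describes. This assumes value<>() is a member template getter, which is an assumption about the concept, not the library's confirmed interface.]

#include <cmath>

template <typename P>
double distance(const P& a, const P& b)
{
    typedef typename P::coordinate_type T;
    // no .x / .y: only the concept's accessors are used; a full version
    // would iterate 0 .. coordinate_count - 1 at compile time
    T dx = b.template value<0>() - a.template value<0>();
    T dy = b.template value<1>() - a.template value<1>();
    return std::sqrt(static_cast<double>(dx * dx + dy * dy));
}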

Bruno wrote:
I understand what the original poster was trying to convey, and your opinion is valid, but they both need much more clarification than saying things should be interchangeable.
Indeed, things have to be clear on that point, and I'm a bit confused about what really constitutes the concept of a point. What I have understood until now is that a class is the point if it exposes: * the value<>() getter and setter * coordinate_type * coordinate_count
and that besides this, the .x and .y of the point_xy class were just some syntactic helpers provided for an easier external use of this class. But I see that almost all the algorithms use .x and .y, which means that they won't work with the basic concept I just described. Shouldn't all those algorithms be implemented in terms of value<>() accesses?
Some headers that use .x and .y: area.hpp strategies_point_xy.hpp centroid.hpp correct.hpp envelope.hpp intersection_linestring.hpp intersection_segment.hpp
Bruno
[John Femiani] This is why I would like to see concepts spelled out like this: http://www.boost.org/libs/concept_check/reference.htm#basic-concepts as is done by other boost libraries which I like, such as gil http://stlab.adobe.com/gil/html/index.html, BGL http://www.boost.org/libs/graph/doc/graph_concepts.html, fusion, and others. They all clearly specify their concepts using the SGI style and provide concept checking classes. To me that makes them easy to understand. IMHO the core need from a point library is the concepts, not the models, because there are so many different well-tuned geometry libraries that exist, and the idea is that boost _algorithms_ work on any "point", rather than that boost have a great point class (those are easy). Also, I already disagree about the idea that a point should have coordinates. CoordinatesConcept should be part of a separate concept. Of course a useful point class will model both the coordinates and the point concept. To me a point should really support transform, scale, rotate, shear, and distance operations as free functions (maybe more). I don't know how to handle the idea of a 'space' or a 'kernel'. BTW, I came across lib2geom the other day and they seem to be using concepts: http://lib2geom.svn.sourceforge.net/viewvc/lib2geom/lib2geom/trunk/src/concepts.h?view=markup
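[For example, with the BCCL an SGI-style concept check could look like the following; the PointConcept below is hypothetical, just to show the shape John is asking for.]

#include <boost/concept_check.hpp>

// hypothetical Point concept, checked the classic BCCL way:
// the constraints() body exercises every required expression
template <typename P>
struct PointConcept
{
    void constraints()
    {
        typedef typename P::coordinate_type ctype; // required typedef
        ctype x = p.template value<0>();           // required accessors
        ctype y = p.template value<1>();
        (void)x; (void)y;
    }
    P p;
};

// an algorithm states its requirements up front
template <typename P>
void some_algorithm(const P& p)
{
    boost::function_requires< PointConcept<P> >();
    // ... actual work ...
}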

I (John Femiani) wrote:
Also, I already disagree about the idea that a point should have coordinates. CoordinatesConcept should be part of a separate concept. Of course a useful point class will model both the coordinates and the point concept. To me a point should really support transform, scale, rotate, shear, and distance operations as free functions (maybe more). I don't know how to handle the idea of a 'space' or a 'kernel'.
s/transform/translate Maybe there should also be a generic transform concept too, but I dunno. Joel de Guzman wrote:
I am talking about generic programming and C++ concepts: http://www.research.att.com/~bs/popl06.pdf.
That paper uses syntax that is not yet part of _most_ C++ compilers, so I would like a proposed library to use the BCCL (http://www.boost.org/libs/concept_check). Also there is a nice video from Doug Gregor http://video.google.com/videoplay?docid=-1790714981047186825 that helped me understand this subject (to the extent that I do) -- John Femiani

John Femiani wrote:
I (John Femiani) wrote:
Also, I already disagree about the idea that a point should have coordinates. CoordinatesConcept should be part of a separate concept. Of course a useful point class will model both the coordinates and the point concept. To me a point should really support transform, scale, rotate, shear, and distance operations as free functions (maybe more).
I don't know how to handle the idea of a 'space' or a 'kernel'.
s/transform/translate
Maybe there should also be a generic transform concept too, but I dunno.
Joel de Guzman wrote:
I am talking about generic programming and C++ concepts: http://www.research.att.com/~bs/popl06.pdf.
That paper uses syntax that is not yet part of _most_ C++ compilers, so I would like a proposed library to use the BCCL (http://www.boost.org/libs/concept_check ).
Also there is a nice video from Doug Gregor http://video.google.com/videoplay?docid=-1790714981047186825 that helped me understand this subject (to the extent that I do)
Yes, of course. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

John Femiani wrote:
Also there is a nice video from Doug Gregor http://video.google.com/videoplay?docid=-1790714981047186825 that helped me understand this subject (to the extent that I do)
That video was quite interesting! It's a shame we won't be able to actively use the "concepts" concept until after '09. I can see some nice uses of concepts when doing triangulations.

On Wed, Mar 26, 2008 at 4:36 PM, John Femiani <JOHN.FEMIANI@asu.edu> wrote:
IMHO the core need from a point library is the concepts, not the models, because there are so many different well-tuned geometry libraries that exist, and the idea is that boost _algorithms_ work on any "point", rather than that boost have a great point class (those are easy).
I agree, what do you think of this extremely simple starting point? The names definitely need work.

template <typename T>
struct CompileTimePointConcept
{
    void constraints()
    {
        typedef typename at_c<typename T::element_seq, 0>::type x_type;
        typedef typename at_c<typename T::element_seq, 1>::type y_type;
        x_type &x = get<0>(value);
        y_type &y = get<1>(value);
    }
private:
    T value;
};

template <typename T, int N>
struct CompileTimePointConcept<T[N]>
{
    void constraints()
    {
        T &x = get<0>(value);
        T &y = get<1>(value);
    }
private:
    T value[N];
};

template <typename T>
struct RuntimeIndexablePointConcept
{
    void constraints()
    {
        typedef typename at_c<typename T::element_seq, 0>::type x_type;
        typedef typename at_c<typename T::element_seq, 1>::type y_type;
        BOOST_MPL_ASSERT((is_same<x_type, y_type>));
        int i = 0;
        x_type &x = value[i];
        y_type &y = value[i + 1];
    }
private:
    T value;
};

template <typename T, int N>
struct RuntimeIndexablePointConcept<T[N]>
{
    void constraints()
    {
        int i = 0;
        T &x = value[i];
        T &y = value[i + 1];
    }
private:
    T value[N];
};

As you can see, one behaves very much like a tuple, the other like an array. T::element_seq could be an mpl::vector, or maybe Fusion could help adapt legacy structs more easily (I've not played with Fusion yet). What else would a PointConcept need? "Point" almost feels like a misnomer since it implies much more than most algorithms require... --Michael Fawcett

Michael Fawcett wrote:
I agree, what do you think of this extremely simple starting point? The names definitely need work.
template <typename T> struct CompileTimePointConcept { };
template <typename T, int N> struct CompileTimePointConcept<T[N]> { };
template <typename T> struct RuntimeIndexablePointConcept { };
template <typename T, int N> struct RuntimeIndexablePointConcept<T[N]> { };
Will there be operators for these types? or will they be externally defined? That CompileTimePointConcept<T[N]> seems quite interesting but how much use can it really be?
What else would a PointConcept need? "Point" almost feels like a misnomer since it implies much more than most algorithms require...
The problem is every algorithm has its own set of requirements for what a point must be able to do/provide. Taking the approach mentioned in the DG video from a previous thread (continually simplifying templated inputs into routines) leads me to believe the most general point concept would be an empty class. Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

On Fri, Mar 28, 2008 at 9:17 AM, Arash Partow <arash@partow.net> wrote:
Michael Fawcett wrote:
I agree, what do you think of this extremely simple starting point? The names definitely need work.
template <typename T> struct CompileTimePointConcept { };
template <typename T, int N> struct CompileTimePointConcept<T[N]> { };
template <typename T> struct RuntimeIndexablePointConcept { };
template <typename T, int N> struct RuntimeIndexablePointConcept<T[N]> { };
Will there be operators for these types? or will they be externally defined?
The operator required for a RuntimeIndexable point is operator[]. For a compile-time model, you must be indexable by boost::get<>() or provide your own get<>() that will be found by ADL.
That CompileTimePointConcept<T[N]> seems quite interesting but how much use can it really be?
dot_product() and cross_product() come to mind...normalize() as well. It is useful for any algorithm where it is known at compile-time which components you will be dealing with.
What else would a PointConcept need? "Point" almost feels like a misnomer since it implies much more than most algorithms require...
The problem is every algorithm has its own set of requirements for what a point must be able to do/provide. Taking the approach mentioned in the DG video from a previous thread (continually simplfying templated inputs into routines) leads me to believe the most general point concept would be any empty class.
I almost agree. Consider a 3 component dot_product() implementation (the N-dimensional case would of course be more complex to do at compile-time):

template <typename L, typename R>
/*typeof magic here*/ dot_product(const L &lhs, const R &rhs)
{
    using boost::get;
    return get<0>(lhs) * get<0>(rhs)
         + get<1>(lhs) * get<1>(rhs)
         + get<2>(lhs) * get<2>(rhs);
}

The actual minimal concept required is simply that every element of the sequences is able to be indexed into, is able to be multiplied, and that their result is able to be added. In this case, a boost::tuple would fulfill the model, so would a straight array with an appropriate get<> defined, so would boost::array...and with minimal work, so would any legacy point class. So I think I agree that there isn't really a "PointConcept", but as shown here, there are still concepts that the algorithms require. I suspect a lot of the algorithms dealing with points merely require that each element be indexable and also satisfy boost::is_arithmetic. --Michael Fawcett
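[To show the minimal requirements in action, a concrete, self-contained variant of the above - a double return type stands in for the elided typeof machinery - exercised with two tuples of different element types.]

#include <boost/tuple/tuple.hpp>

template <typename L, typename R>
double dot_product(const L &lhs, const R &rhs)
{
    using boost::get;
    return get<0>(lhs) * get<0>(rhs)
         + get<1>(lhs) * get<1>(rhs)
         + get<2>(lhs) * get<2>(rhs);
}

int main()
{
    boost::tuple<double, double, double> a(1.0, 2.0, 3.0);
    boost::tuple<float, float, float>   b(4.0f, 5.0f, 6.0f);
    double d = dot_product(a, b); // mixed element types interoperate
    (void)d;
    return 0;
}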

Michael Fawcett wrote:
The actual minimal concept required is simply that every element of the sequences is able to be indexed into, is able to be multiplied, and that their result is able to be added. In this case, a boost::tuple would fulfill the model, so would a straight array with an appropriate get<> defined, so would boost::array...and with minimal work, so would any legacy point class.
So I think I agree that there isn't really a "PointConcept", but as shown here, there are still concepts that the algorithms require. I suspect a lot of the algorithms dealing with points merely require that each element be indexable and also satisfy boost::is_arithmetic.
[John Femiani]
1. I don't think a generic point concept can avoid compile time indexing, because you would have to support existing point types which are structs or even encapsulated classes, and I don't know a portable, correct, and efficient way to do that. If my algorithm _could_ be implemented using CompileTimeIndexed instead of RunTimeIndexed then my algorithm _should_ be written to the CompileTimeIndexed concept in order to be more generic.
2. I think that one issue about 'points' is that they are often represented by coordinates. Coordinates are the things that should be indexable and have arithmetic ops. Points, IMO, should be described in terms of the geometric transformations (rotate, scale, translate) that can be applied to them. It might not hurt to even factor those out into TranslatableConcept, RotatableConcept, etc. I also think that points should have some metafunctions that serve the role of a Kernel, so that you can figure out which distance metric is used by default, what its result type is, and what the vector type is, or what the inner product result should be. Maybe that would be a flexible way to handle the robustness issues discussed in this thread. -- John Femiani

John Femiani wrote:
This is why I would like to see concepts spelled out like this: http://www.boost.org/libs/concept_check/reference.htm#basic-concepts as is done by other boost libraries which I like, such as gil http://stlab.adobe.com/gil/html/index.html, BGL http://www.boost.org/libs/graph/doc/graph_concepts.html , fusion, and others. They all clearly specify their concepts using the SGI style and provide concept checking classes. To me that makes them easy to understand.
IMHO the core need from a point library is the concepts, not the models, because there are so many different well-tuned geometry libraries that exist, and the idea is that boost _algorithms_ work on any "point", rather than that boost have a great point class (those are easy).
The question here is: should the concept(s) spell out each and every property of the object (a point, for example)? In the ongoing example of the distance routine, say I provide a point that is comprised of a my_rational type, but I have not defined a sqrt function for it - should stuff like that be definable as a concept as well? If so, where do we stop?
Also, I already disagree about the idea that a point should have coordinates.
I agree
CoordinatesConcept should be part of a separate concept. Of course a useful point class will model both the coordinates and the point concept. To me a point should really support transform, scale, rotate, shear, and distance operations as free functions (maybe more).
I don't know how to handle the idea of a 'space' or a 'kernel'.
Well, they occur (are needed) when you want to re/define various operators or truths. It would be good to somehow link type, object, routine and space together - with obvious/common defaults provided by the library. Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

Bruno Lalande wrote:
Indeed, things have to be clear on that point, and I'm a bit confused about what really constitutes the concept of a point. What I have understood until now is that a class is the point if it exposes: * the value<>() getter and setter * coordinate_type * coordinate_count
and that besides this, the .x and .y of the point_xy class were just some syntactic helpers provided for an easier external use of this class. But I see that almost all the algorithms use .x and .y, which means that they won't work with the basic concept I just described. Shouldn't all those algorithms be implemented in terms of value<>() accesses?
Some headers that use .x and .y: area.hpp strategies_point_xy.hpp centroid.hpp correct.hpp envelope.hpp intersection_linestring.hpp intersection_segment.hpp
Bruno, All right, thanks for examining the sources so closely. The .x and .y are not meant to be in most of those sources, but as it is a preview, and not final submission, they are still there. I didn't introduce strategies for the "side-geometries" box and circle until now. Therefore in area, centroid, correct and envelope there are still .x and .y coordinates. They are only used there, in the box and/or circle geometries. I left them more or less untouched after the first preview. This should be solved. As you will see, in point, linestring and polygon they are gone in the correct / envelope / centroid and area algorithms, as they are in simplify, length and distance. I also didn't yet introduce strategies for intersections. It is just a matter of time, available time, and effort. I made comments about that in the source and documentation. This will also be solved. The header strategies_point_xy.hpp uses .x and .y, and there it is intentional. I agree that the "point concept" should be made clearer and will work on that. Barend

Hi,
All right, thanks for examining the sources so closely. The .x and .y are not meant to be in most of those sources, but as it is a preview, and not final submission, they are still there.
OK, perfect :-) So finally, regarding the feedback from other people, it appears that implementing concept checking and changing the algorithms so that they actually work on them will be the main point for a 3rd preview.
The header strategies_point_xy.hpp uses .x and .y, and there it is intentional.
Oops sorry, I shouldn't have put this one into my list. Bruno

Bruno Lalande wrote:
Indeed, things have to be clear on that point, and I'm a bit confused about what really constitutes the concept of a point. What I have understood until now is that a class is the point if it exposes: * the value<>() getter and setter * coordinate_type * coordinate_count
This is somewhat correct, but might be an over-engineering of things to the point where it may be unusable to a group of potential users (people that have large data sets that need things done efficiently). I have 10^9 x,y values as doubles - give me the convex hull. Can you imagine the overhead if each of those pairs were to be converted to a point_concept/class/instance?
and that besides this, the .x and .y of the point_xy class were just some syntactic helpers provided for an easier external use of this class. But I see that almost all the algorithms use .x and .y, which means that they won't work with the basic concept I just described. Shouldn't all those algorithms be implemented in terms of value<>() accesses?
intersection_linestring.hpp intersection_segment.hpp
The other thing to get right now is terminology - I'm guessing linestring is meant to be line_segment_string? Actually, most algorithm implementations I've seen use index operators; that's possibly because the underlying point types were always nD structures, so it makes it easy to do something like the following:

template<typename T, unsigned int dimension>
struct point
{
    T data_[dimension];
};
.
.
.
point<double, 101> point101D;

These accessor differences can be taken care of with adaptors and such (though there may be a run-time penalty). Though it looks like you're on the right track! Rgds, Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

intersection_linestring.hpp intersection_segment.hpp The other thing to get right now is terminology - I'm guessing
(snip) linestring is meant to be line_segment_string?
Though it looks like you're on the right track!
Rgds,
Arash Partow
Arash, linestring is the term OGC introduced for it. It is used there and in many databases, KML, etc. The library-in-preview uses their terms. See also this: http://geometrylibrary.geodan.nl/ogc.html Barend Gehrels, Geodan, Amsterdam

I have 10^9 x,y values as doubles give me the convex hull - can you imagine in the overhead if each of those pairs were to be converted to a point_concept/class/instance?
If you speak about an algorithm like the convex hull, maybe you're right. If you speak about simpler ones like distance, so long as the compiler optimizes correctly and the programmer provides an inlinable point class, chances are that the compiled code will be quite the same. Example:

inline double dist(double x1, double y1, double x2, double y2)
{
    double dx = x2 - x1;
    double dy = y2 - y1;
    return sqrt(dx*dx + dy*dy);
}

struct point
{
    point(double x_, double y_): x(x_), y(y_) {}
    double x;
    double y;
};

inline double dist(const point& p1, const point& p2)
{
    double dx = p2.x - p1.x;
    double dy = p2.y - p1.y;
    return sqrt(dx*dx + dy*dy);
}

On my computer with GCC -O3, this:

dist(a, b, c, d);

runs exactly as fast as this:

dist(point(a, b), point(c, d));

and I'm pretty sure the compiled code is more or less the same. Usually I'm not really a fan of decomposing each function that acts on points into the same function acting on their components; it usually ends up in an endless series of overloads (even when sticking to 2D). So it should be reserved only for those functions for which the overhead is real. Bruno

AMDG Arash Partow wrote:
Bruno Lalande wrote:
Indeed, things have to be clear on that point, and I'm a bit confused about what really constitutes the concept of a point. What I have understood until now is that a class is the point if it exposes: * the value<>() getter and setter * coordinate_type * coordinate_count
This is somewhat correct, but might be an over-engineering of things to the point where it may be unusable to a group of potential users (people that have large data sets that need things done efficiently).
I have 10^9 x,y values as doubles - give me the convex hull. Can you imagine the overhead if each of those pairs were to be converted to a point_concept/class/instance?
If you have containers of x, y values you don't necessarily need to create a physical vector of points. You can use a view of the x, y values that has the same interface as a container of points, but creates them on demand. Hopefully, the compiler can optimize this into direct access. In Christ, Steven Watanabe
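[A minimal sketch of the kind of view Steven describes, assuming the coordinates live in two parallel arrays; the proxy and view names here are hypothetical.]

#include <cstddef>

// lightweight proxy presenting an (x, y) pair from parallel arrays
// as if it were a point object; nothing is copied or stored
struct point_ref
{
    double& x;
    double& y;
};

// view over parallel coordinate arrays; operator[] materializes a
// proxy on demand, so 10^9 doubles never get converted into points
class xy_view
{
public:
    xy_view(double* xs, double* ys, std::size_t n)
        : xs_(xs), ys_(ys), n_(n) {}

    std::size_t size() const { return n_; }

    point_ref operator[](std::size_t i) const
    {
        point_ref p = { xs_[i], ys_[i] };
        return p;
    }

private:
    double* xs_;
    double* ys_;
    std::size_t n_;
};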

Arash Partow wrote:
Bruno Lalande wrote:
Indeed, things have to be clear on that point, and I'm a bit confused about what really constitutes the concept of a point. What I have understood until now is that a class is the point if it exposes: * the value<>() getter and setter * coordinate_type * coordinate_count
This is somewhat correct, but might be an over-engineering of things to the point where it may be unusable to a group of potential users (people that have large data sets that need things done efficiently).
I have 10^9 x,y values as doubles - give me the convex hull. Can you imagine the overhead if each of those pairs were to be converted to a point_concept/class/instance?
There's not any overhead. Back in 2002, inspired by Blitz++, I wrote a template class that is entirely conceptual. The vector doesn't store any data; it merely defines a conceptual vector of N dimensions to the outside world, and an interface to get & set those elements for the subclasses that define storage layouts. In the case of vectors that are just a simple array of numbers, the code was just as optimal as if I'd hand-coded the arithmetic operators (+, -, dot product, cross product) inside a concrete class. Instead, the algorithms were written for the abstract class. To give an example of optimization, please examine the following code:

Matrix<float, 3, 3> m1, m2, m3;
m1 = 1.0f, 2.0f, 3.0f,
     4.0f, 5.0f, 6.0f,
     7.0f, 8.0f, 9.0f;
m2 = 3.0f, 5.0f, 2.0f,
     7.0f, 1.0f, 3.0f,
     9.0f, 6.0f, 4.0f;
m3 = m1 + m2;

If you look at how the comma-delimited assignment works, there is a 9-level recursively defined set of template class instances. The above code has two of those. Then there's the addition. MSVC6 would optimize that down to simply copying the 9 float values into m3 directly. There is no inherent penalty for programming to abstractions, at least in release mode. There is the possibility of expressivity and optimization.
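[The comma-delimited assignment Noah shows can be sketched in a few lines; this is a simplified reconstruction of the Blitz++-style technique, not his actual class.]

template <typename T, int Rows, int Cols>
class Matrix
{
public:
    // each comma writes one element and returns a loader for the next;
    // no bounds checking here, to keep the sketch short
    class Loader
    {
    public:
        explicit Loader(T* p) : p_(p) {}
        Loader operator,(T v) { *p_ = v; return Loader(p_ + 1); }
    private:
        T* p_;
    };

    // assignment of the first value starts the comma-delimited load
    Loader operator=(T v) { data_[0] = v; return Loader(data_ + 1); }

    T& operator()(int r, int c) { return data_[r * Cols + c]; }

private:
    T data_[Rows * Cols];
};

[With this, m1 = 1.0f, 2.0f, 3.0f, ... parses as ((m1 = 1.0f), 2.0f), 3.0f, ... - a chain of temporaries that an optimizer collapses into plain stores, which is the effect Noah describes.]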
Actually most algorithm implementations I've seen use index operators thats possibly because the underlying point types were always nD structures so it makes it easy to do something like the following:
This is what I'm used to also. I have toyed with using enums to tailor access to simulate domains. In game engines, you could have geometry vectors & points accessed using X, Y, & Z whereas you could have texture coordinates accessed with S, T and whatnot. -- Noah

Arash Partow wrote:
Joel de Guzman wrote:
That's beside the point. The point is that you should be able to use different models of the point concept interchangeably, much like STL algorithms can work on different container types. I understand what the original poster was trying to convey, and your opinion is valid, but they both need much more clarification than saying things should be interchangeable.
I think you are missing my point. I could very well rename "distance" to "foo" and my point still holds.
If we are talking high level abstraction principles in s/w design then this thread may never end :>
I am talking about generic programming and C++ concepts: http://www.research.att.com/~bs/popl06.pdf. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Arash Partow wrote:
Writing a geometry library is hard: look at CGAL - over 1 million euros in funding, both commercial and academic wings - and look at the monstrosity they have produced. If you can determine the intersection point of 2 line segments in under 15 lines of code using CGAL I take my hat off to you :>
As someone who has *tried* to use CGAL to do intersections and a variety of other operations in under 15 lines of code, I second this statement. My only advice: don't let a Boost geometry library turn into CGAL. :-) Cheers, Demian

On Wed, Mar 26, 2008 at 07:30:07PM -0400, Demian Nave wrote:
Arash Partow wrote:
Writing a geometry library is hard: look at CGAL - over 1 million euros in funding, both commercial and academic wings - and look at the monstrosity they have produced. If you can determine the intersection point of 2 line segments in under 15 lines of code using CGAL I take my hat off to you :>
There's an implied criticism that 15 lines is too much (?) Looking at the example intersection code [1], ignoring the typedefs and variable declarations, the intersection part is a function call and an "if" statement. I don't see how that can be simplified. Are you complaining about the typedefs? The fact that you have to take care of the case that segment intersection is itself a segment? [1] http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Kernel_23/Chapter_main.h...
As someone who has *tried* to use CGAL to do intersections and a variety of other operations in under 15 lines of code, I second this statement. My only advice: don't let a Boost geometry library turn into CGAL. :-)
CGAL is huge, in part, because it provides all the higher-level stuff like triangulations, meshing, surfaces, and the like. The focus here is much much smaller so I don't think anyone imagines it turning into CGAL. Much of the discussion here surrounds defining a Point concept, which is valuable, but you typically also need related concepts Vector, Ray, Line, and Segment, together with the predicates (like point orientation) and constructions (like segment intersection). CGAL bundles all these together in a geometric traits class called a Kernel. One of the attractive features of CGAL, to me, is that their algorithms are templated over the Kernel type. So you can use exact geometric computing if desired, or play fast and loose with doubles, or do something in between (exact predicates, but inexact constructions) just by choosing the appropriate Kernel. Superficially, the discussion here seems to be talking only about one tiny part of a geometry kernel, namely points and simple functions like distance. Is the end goal just that: a set of primitives with no regard to robustness? Alternatively, if the ultimate goal is larger, then how does it differ from a CGAL kernel? If the CGAL kernel concept is too complicated, what is going to be omitted? Is there some part of a kernel that is unnecessary? Are there CGAL design mistakes (from its 12-year history) that can be done better? -Steve
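[To make the kernel idea concrete, a minimal sketch of an algorithm templated over a kernel supplying its point type and an orientation predicate; all names are hypothetical, and this is far smaller than a real CGAL kernel.]

// a "fast and loose" kernel; an exact kernel would supply the same
// interface but evaluate the predicate with exact arithmetic
struct simple_kernel
{
    struct point { double x, y; };

    // > 0: left turn, < 0: right turn, 0: collinear
    static double orientation(const point& a, const point& b, const point& c)
    {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }
};

// the algorithm never touches the arithmetic directly, so swapping
// the Kernel changes robustness without changing the algorithm
template <typename Kernel>
bool left_turn(const typename Kernel::point& a,
               const typename Kernel::point& b,
               const typename Kernel::point& c)
{
    return Kernel::orientation(a, b, c) > 0;
}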

Steve Robbins wrote:
CGAL is huge, in part, because it provides all the higher-level stuff like triangulations, meshing, surfaces, and the like. The focus here is much much smaller so I don't think anyone imagines it turning into CGAL.
Much of the discussion here surrounds defining a Point concept, which is valuable, but you typically also need related concepts Vector, Ray, Line, and Segment, together with the predicates (like point orientation) and constructions (like segment intersection). CGAL bundles all these together in a geometric traits class called a Kernel.
If all of these traits are bundled together, then the CGAL Kernel concept really can't be very minimal, right? This means that algorithms written to this concept would require more than they should, which would make them harder to use.
One of the attractive features of CGAL, to me, is that their algorithms are templated over the Kernel type. So you can use exact geometric computing if desired, or play fast and loose with doubles, or do something in between (exact predicates, but inexact constructions) just by choosing the appropriate Kernel.
I think the GTL submission used traits as well, but I prefer type generators and metafunctions like iterator_value<It>::type instead of std::iterator_traits<It>::value_type. That way you can provide or require only the traits that matter.
Superficially, the discussion here seems to be talking only about one tiny part of a geometry kernel, namely points and simple functions like distance. Is the end goal just that: a set of primitives with no regard to robustness?
Alternatively, if the ultimate goal is larger, then how does it differ from a CGAL kernel? If the CGAL kernel concept is too complicated, what is going to be omitted? Is there some part of a kernel that is unnecessary? Are there CGAL design mistakes (from its 12-year history) that can be done better?
For myself, I want to know what interfaces to use in my code so that it is more generic and easy to use. I want concepts to check with function_requires<> and I want archetypes to test that my code does not require more than is absolutely necessary. An algorithm written to the boost concepts should work on CGAL as well. In fact I propose interoperability with CGAL primitives should be a requirement for any boost geometry library. I think the answer to "what is going to be omitted" is that each algorithm should be able to omit some parts; the boost concepts & traits should be granular. Also, I think it is against the boost rules for a boost library to depend on CGAL. (Is that right?) -- John Femiani

Steve M. Robbins wrote:
There's an implied criticism that 15 lines is too much (?)
Looking at the example intersection code [1], ignoring the typedefs and variable declarations, the intersection part is a function call and an "if" statement. I don't see how that can be simplified. Are you complaining about the typedefs? The fact that you have to take care of the case that segment intersection is itself a segment?
[1] http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Kernel_23/Chapter_main.h...
Steve, I misspoke; not including your intersections library contributions (which I've been using for some time now - great stuff!), that would be the case. In fact, all one needs to do is look at the insides of your routines to see how complex it "was"... Though to be honest, the flexibility CGAL provides wrt types/kernels is terrific and I believe unparalleled (when not including Matlab and Mathematica)
Much of the discussion here surrounds defining a Point concept, which is valuable, but you typically also need related concepts Vector, Ray, Line, and Segment, together with the predicates (like point orientation) and constructions (like segment intersection).
That is a very correct and astute observation - I couldn't agree more. Predicates specifically, and their definitions, should be done concurrently with the point concept definitions; specifically, the responsibility of use and precision should be mentioned.
One of the attractive features of CGAL, to me, is that their algorithms are templated over the Kernel type.
It's also one of the aspects which makes its learning curve so steep, and the effort required for people to develop for and around CGAL so great.
So you can use exact geometric computing if desired, or play fast and loose with doubles, or do something in between (exact predicates, but inexact constructions) just by choosing the appropriate Kernel.
I also agree with you here, but would like to propose that the EGC (Yap and co.) related stuff be done within the object's value type, not the algorithm. But then you'll say: well, I can get better than normal precision from certain predicates and PODs using Shewchuk's methods. To that I would say let's define a series of operators and predicates and then develop algorithms based on those, i.e. a distance that was:

return std::sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));

now becomes:

return sqrt(add(sqr(sub(x1, x2)), sqr(sub(y1, y2))));

where the routine making the call will have a computation_trait that will define add, sub, mul, div etc. for that type, and a policy oriented approach would easily allow the user to pick and choose between EGC and "fast and loose" computational precision/speed models.
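[A sketch of the computation_trait idea, with hypothetical names: the traits class supplies the primitive operations, so a user can specialize it for an exact number type without touching the distance routine. Shown for floating point coordinate types.]

#include <cmath>

// default "fast and loose" operations; an EGC number type would
// specialize this template with exact versions
template <typename T>
struct computation_trait
{
    static T add(T a, T b) { return a + b; }
    static T sub(T a, T b) { return a - b; }
    static T sqr(T a)      { return a * a; }
    static T sqrt(T a)     { return std::sqrt(a); }
};

// distance written purely in terms of the traits' operations
template <typename T>
T distance(T x1, T y1, T x2, T y2)
{
    typedef computation_trait<T> ops;
    return ops::sqrt(ops::add(ops::sqr(ops::sub(x1, x2)),
                              ops::sqr(ops::sub(y1, y2))));
}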
Superficially, the discussion here seems to be talking only about one tiny part of a geometry kernel, namely points and simple functions like distance. Is the end goal just that: a set of primitives with no regard to robustness?
Damn, you're on a roll today - I haven't agreed with anyone in an e-mail as much as I have with you!
Alternatively, if the ultimate goal is larger, then how does it differ from a CGAL kernel? If the CGAL kernel concept is too complicated, what is going to be omitted? Is there some part of a kernel that is unnecessary? Are there CGAL design mistakes (from its 12-year history) that can be done better?
What can be done better:
1. Better/easier access for simple users - simple things: I've got 4 doubles, give me the distance.
2. Better use of new paradigms such as metaprogramming, and not just templates in the form of generics.
3. A generally acceptable license for both open and commercial uses.
4. Syntax that, when used with code such as the STL, doesn't look out of place (more cosmetic, not priority).
5. Something that can hopefully be used in part as a future TR (TR101 perhaps? :D)
Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

I also agree with you here, but would like to propose that the EGC (Yap and Co.) related stuff be done within the object's value type, not the algorithm. But then you'll say: well, I can get better than normal precision from certain predicates and PODs using Shewchuk's methods. To that I would say let's define a series of operators and predicates and then develop algorithms based on those:
ie: distance that was:
return std::sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
now becomes:
return sqrt(add(sqr(sub(x1,x2)),sqr(sub(y1,y2))));
where the routine making the call will have a computation_trait that will define add,sub,mul,div etc for that type and a policy oriented approach would easily allow the user to pick and choose between EGC and "fast and loose" computational precision/speed models.
I would like to retract this part, due to the fact that if robustness is properly taken into account, then a holistic solution is required which concerns itself not only with the operation level (add, sub, etc.) but also with the routine level, as the greatest errors come from the combination of the results of the operators, not the operators themselves. At the end of the day one may end up with a CGAL-style naming convention where the specifics of the routine are spelt out in its name - not a good idea though. Arash Partow __________________________________________________ Be one who knows what they don't know, Instead of being one who knows not what they don't know, Thinking they know everything about all things. http://www.partow.net

Demian Nave wrote:
Arash Partow wrote:
Writing a geometry library is hard: look at CGAL - over 1 million euros in funding, both commercial and academic wings - and look at the monstrosity they have produced. If you can determine the intersection point of 2 line segments in under 15 lines of code using CGAL I take my hat off to you :>
As someone who has *tried* to use CGAL to do intersections and a variety of other operations in under 15 lines of code, I second this statement. My only advice: don't let a Boost geometry library turn into CGAL. :-)
I happen to be one of the CGAL developers. So, given this (implicit) intersection example, what *exactly* would you prefer it to look like? Maybe you are just not using the library correctly? If not, how would you simplify it? P.S: We (the CGAL developers) are well aware of a couple of pending improvements to ease the user experience, but I get the feeling that some users just don't understand the inherent complexity that comes from our fundamental goal of robustness. Best, Fernando Cacciola GeometryFactory

Fernando Cacciola wrote:
P.S: We (the CGAL developers) are well aware of a couple of pending improvements to ease the user experience, but I get the feeling that some users just don't understand the inherent complexity that comes from our fundamental goal of robustness.
Some users do, but don't see why it has to be so apparent. Think of it in this manner: there is a street where every shop is a restaurant, and the restaurants are ordered from, say, a McDonald's/fish'n'chips shop up to some upscale "we'll scratch your butt if it needs scratching" style restaurant.
Now from a user's POV, I (and most other users I know) want to always start from the McDonald's end of the street, and as I determine what I want to eat, gradually walk down the street to the other end. Some days I'd be content with McDonald's; other days I may prefer something more edible/healthy; and maybe once in a while I will go to the upscale place at the end of the street.
I don't have any exact evidence, but in my experience both in academia and industry, "high end robustness" in problems requiring computational geometry is the exception, not the norm. I'm not saying it is not important and should not be provided, just that it would make more sense from a user experience/marketability POV to provide a cleaner, simpler interface.
I digress - this is a Boost ML and Boost.Geometry is the topic at hand, not issues relating to CGAL.
Arash Partow
http://www.partow.net

Hi Arash,
Fernando Cacciola wrote:
P.S: We (the CGAL developers) are well aware of a couple of pending improvements to ease the user experience, but I get the feeling that some users just don't understand the inherent complexity that comes from our fundamental goal of robustness.
Some users do, but don't see why it has to be so apparent. Think of it in this manner:
There is a street where every shop is a restaurant, and the restaurants are ordered from, say, a McDonald's/fish'n'chips shop up to some upscale "we'll scratch your butt if it needs scratching" style restaurant.
Now from a user's POV, I (and most other users I know) want to always start from the McDonald's end of the street, and as I determine what I want to eat, gradually walk down the street to the other end. Some days I'd be content with McDonald's; other days I may prefer something more edible/healthy; and maybe once in a while I will go to the upscale place at the end of the street.
Indeed, I understand the parable (or is it a metaphor?) and I totally agree with the concept. Coming up with the simplest deal for the simplest needs and the real thing for the real needs within the same library is the holy grail of framework design. But granted, this should be a stated goal, even if it is really hard to meet.
I don't have any exact evidence, but in my experience both in academia and industry, "high end robustness" in problems requiring computational geometry is the exception, not the norm
Unfortunately, yes.
I'm not saying it is not important and should not be provided, just that it would make more sense from a user experience/marketability POV to provide a cleaner, simpler interface.
Absolutely... FWIW in CGAL we are trying to move towards a cleaner interface for simple needs.
I digress - this is a Boost ML and Boost.Geometry is the topic at hand, not issues relating to CGAL.
Right, but some degree of constructive criticism of CGAL is right on topic as food for thought for the new library. My request was intended to set up a case that could be used to distinguish simple from simplistic since, at the end of the day, I'd choose a complex but powerful library over a simplistic one that pretends to be simple.
Best
--
Fernando Cacciola
SciSoft
http://fcacciola.50webs.com
http://groups.google.com/group/cppba

One other thing I forgot to mention when looking at the provided link http://geometrylibrary.geodan.nl/points.html: the distance definition takes 2 templated point types but always returns a double. The lesson here is that the precision required for the result may be greater than the precision of the inputs. In this case the best strategy is, at a minimum, to return the same numeric type as the points, or better. The other issue is that the points should be of the same type, not differing types. This strategy_traits approach which assumes there will be an implementation for the combination of types is not adequate: what if one of the types can be easily promoted to the other? E.g. double converting to my_super_duper_arbitrary_precision_floating_point_type via a copy constructor? Does that mean I have to provide a wrapper around the explicit call to my implementation of distance with my specific type?
Arash Partow
http://www.partow.net
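A hedged sketch of the promotion Arash asks for: derive the result type of distance from the two coordinate types instead of hard-coding double. select_type_traits mimics the library trait of that name; the promotion rules shown are assumptions for illustration only:

template <typename T1, typename T2>
struct select_type_traits { typedef T1 type; };          // fallback

template <typename T>
struct select_type_traits<T, T> { typedef T type; };     // same type: keep it

template <> struct select_type_traits<float, double> { typedef double type; };
template <> struct select_type_traits<double, float> { typedef double type; };

template <typename P1, typename P2>
typename select_type_traits
    <
        typename P1::coordinate_type,
        typename P2::coordinate_type
    >::type
distance(const P1& p1, const P2& p2);   // result at least as wide as the inputs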

Max, Thanks for your comments. Max Motovilov wrote:
(snip)( Nothing prevents me from collaborating on an existing open source project though :)
You're welcome!
(snip) - I very much like the traits-based approach to determining the type of calculation results. However a good extensible facility for deriving the result type from argument types appears to be generally useful and not at all specific to geometry. Unless there is one already in Boost (I am not aware of it and quick check of the docs didn't help), perhaps it may be introduced as a separate library?
I think you refer to the "select_type_traits". That might indeed be a separate library. I glanced through the promotion library, which is in the 1.35 version. That covers a part of it, but it is not exactly the way it is used here.
- While generalized point types (2D vs. 3D and XY vs. lat/lon) are OK, it is often extremely useful in graphics code to make your objects fit the underlying representation of an externally defined structure (such as POINT type in Win32) -- for example by publicly or privately inheriting from it. This lets you use e.g. vectors of points as arguments for system APIs. Can be done by adding an extra template argument (with a default) to point<> and moving most of the implementation details into a traits class.
Sounds good. It is designed so that those constructs are possible. See also Bruno Lalande's comment on this.
- There is a box type but not a (1D) range type (I don't think you allow 1D points, or do you?). It is often convenient to define operations on bounding rectangles in terms of ranges. The comment about making point types compatible with externally defined structure applies to boxes too.
The generic point type might be 1D, but I didn't provide any algorithms or strategies for it. The box uses the point as a template parameter, so a 1D point could indeed be used as a template argument to create a range type. So I think that will work. For the externally defined structure, in most cases called "rectangle", it is probably more difficult. The code refers to min() and max(), and that should then be resolved somehow.
- I didn't find any facilites for linear transformations in your list. Am I not looking in the right place or they aren't there? If the latter, it is probably the biggest omission. Or do you think this need is best served with uBLAS? I have no idea if this can be done using uBLAS in both efficient and convenient way so would like to hear everyone's thoughts. I am very much used to writing things like:
pt = shift( dst_offs ) * rotate( angle ) * shift( -src_offs ) * pt;
or when it is a repeated operation:
transform2D xf = shift(dst_offs) * rotate(angle) * shift(-src_offs); ....... pt = xf * pt;
or maybe even
for_each( pts.begin(), pts.end(), _1 = xf * _1 ); // using lambda
Linear transformations are intentionally left out because they are already there, in uBLAS indeed; I didn't want to write vector/matrix calculations again. A sample could be useful to show this.
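A minimal sketch of such a sample, assuming plain uBLAS: a 2D point transformed by a hand-built 3x3 homogeneous rotation matrix (the matrix construction here is ours, not a library facility):

#include <cmath>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

int main()
{
    double a = 3.14159265358979 / 4.0;        // rotate by 45 degrees

    ublas::matrix<double> m(3, 3);            // homogeneous 2D transform
    m(0,0) = std::cos(a); m(0,1) = -std::sin(a); m(0,2) = 0.0;
    m(1,0) = std::sin(a); m(1,1) =  std::cos(a); m(1,2) = 0.0;
    m(2,0) = 0.0;         m(2,1) =  0.0;         m(2,2) = 1.0;

    ublas::vector<double> p(3);
    p(0) = 1.0; p(1) = 0.0; p(2) = 1.0;       // the point (1,0)

    ublas::vector<double> q = ublas::prod(m, p);   // the rotated point
    (void)q;
    return 0;
}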
- In 2D geometry, very many computations can be most efficiently expressed in terms of dot product and cross product, so it wouldn't hurt to have these operations implemented as part of the library. It probably makes sense (I haven't done that before) to have a type for offset vectors in geometrical sense separate from points: vec = pt - pt; pt = pt + vec; et. al. The difference is that the offset part of a linear transformation should apply to point but not to vector. Again, is this something best done with uBLAS?
Yes, in fact those products are already used. Because they're so simple I didn't replace them yet by uBLAS. But it is a good idea; they can probably be replaced without any problem. I'm also curious about the opinions of the list on this. Barend

Linear transformations are intentionally left out because they are already there, in uBLAS indeed; I didn't want to write vector/matrix calculations again.
Regarding uBLAS: it doesn't look like it is the best match for the job. While the abstraction penalty there is probably very small or nonexistent for the 2x2, 2x3, 3x3 or 3x4 matrices that are likely to be useful for geometrical transformations (provided static arrays are used as the underlying container), it just does not offer enough functionality:
- No general matrix inversion, only triangular solvers. While in the general case this may well be out of scope for a foundation library, the formulas for small matrices of known size are rather simple. And inverted matrices are extremely useful (at least as long as numeric robustness is not the top priority).
- No special support for scale+shift matrices (2x3 or 3x4: diagonal + last column). Not that big of a deal, but formulas with them are simpler. More importantly, recognizing this special case as such at compile time helps write good code (e.g. you don't want to start rotating raster images unless it is unavoidable...).
- No syntactic sugar: constructors of shift/scale/rotation matrices, classification checks (is it a shift+scale matrix? a rotational matrix? are the X and Y scales identical? is it an invertible matrix? etc...).
None of the above is a real showstopper: missing functionality may well be implemented on top of uBLAS. However I don't really see what non-trivial benefits uBLAS provides for this specific case -- after all, products are really simple for small matrices of fixed size. Admittedly, I'm as far from a uBLAS expert as can be, so perhaps someone can point out important things I have missed.
...Max...
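As an illustration of "formulas for small matrices of known size are rather simple": a closed-form 2x2 inverse takes a handful of lines, something uBLAS does not offer directly. Plain invented structs, a sketch only:

struct mat2 { double a, b, c, d; };   // row-major [[a b], [c d]]

bool invert(const mat2& m, mat2& out)
{
    double det = m.a * m.d - m.b * m.c;
    if (det == 0.0) return false;     // singular: no inverse exists
    double r = 1.0 / det;
    out.a =  m.d * r;  out.b = -m.b * r;
    out.c = -m.c * r;  out.d =  m.a * r;
    return true;
}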

Max Motovilov wrote:
Linear transformations are intentionally left out because they are already there, in uBLAS indeed; I didn't want to write vector/matrix calculations again.
Regarding uBLAS: it doesn't look like it is the best match for the job. While the abstraction penalty there is probably very small or nonexistent for the 2x2, 2x3, 3x3 or 3x4 matrices that are likely to be useful for geometrical transformations (provided static arrays are used as the underlying container), it just does not offer enough functionality:
- No general matrix inversion, only triangular solvers. While in the general case this may well be out of scope for a foundation library, the formulas for small matrices of known size are rather simple. And inverted matrices are extremely useful (at least as long as numeric robustness is not the top priority).
Agree that matrix inversion should be there; it is necessary. In an earlier sample I used lu_substitute and lu_factorize to invert a uBLAS matrix. I don't know the background of those calculations, but it worked well.
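For reference, the lu_factorize/lu_substitute combination mentioned can be used for inversion roughly as below (a sketch under the assumption of a square input; the sample from the earlier mail itself is not shown in the thread):

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/lu.hpp>

namespace ublas = boost::numeric::ublas;

bool invert(const ublas::matrix<double>& input, ublas::matrix<double>& inverse)
{
    ublas::matrix<double> a(input);                        // working copy
    ublas::permutation_matrix<std::size_t> pm(a.size1());
    if (ublas::lu_factorize(a, pm) != 0)
        return false;                                      // singular
    inverse = ublas::identity_matrix<double>(a.size1());
    ublas::lu_substitute(a, pm, inverse);                  // solve A * X = I
    return true;
}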
- No special support for scale+shift matrices (2x3 or 3x4: diagonal + last column). Not that big of a deal, but formulas with them are simpler. More importantly, recognizing this special case as such at compile time helps write good code (e.g. you don't want to start rotating raster images unless it is unavoidable...).
- No syntactic sugar: constructors of shift/scale/rotation matrices, classification checks (is it a shift+scale matrix? a rotational matrix? are the X and Y scales identical? is it an invertible matrix? etc...)
None of the above is a real showstopper: missing functionality may well be implemented on top of uBLAS. However I don't really see what non-trivial benefits uBLAS provides for this specific case -- after all, products are really simple for small matrices of fixed size. Admittedly, I'm as far from a uBLAS expert as can be, so perhaps someone can point out important things I have missed.
If the general feeling is that uBLAS is not good for this job (I get this impression), then it might be time for a (separate) lightweight vector/matrix library in Boost. I'm still of the opinion that it should be separate from a geometry library. Any matrix library can be used for geometry transformations. For some simple calculations, which are now "inline", the geometry library needs a lightweight solution. The matrix listed in Noah Stein's mail, partly pasted here:
Matrix<float, 3, 3> m1, m2, m3;
m1 = 1.0f, 2.0f, 3.0f,
     4.0f, 5.0f, 6.0f,
     7.0f, 8.0f, 9.0f;
is looking very good. Barend Gehrels, Amsterdam, the Netherlands
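For what it's worth, the comma-initialization syntax shown above can be implemented with a small helper whose operator, appends the next value; a minimal sketch with a fixed 3x3 double matrix, all names invented here:

#include <cstddef>

struct matrix3
{
    double v[9];

    struct loader
    {
        matrix3* m;
        std::size_t i;
        loader operator,(double x) { m->v[i] = x; return loader{m, i + 1}; }
    };

    // m = 1.0 starts the sequence; each following comma appends one value
    loader operator=(double x) { v[0] = x; return loader{this, 1}; }
};

int main()
{
    matrix3 m;
    m = 1.0, 2.0, 3.0,
        4.0, 5.0, 6.0,
        7.0, 8.0, 9.0;    // fills v[0..8] left to right
    return 0;
}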

Herewith the URL to the second preview of Geodan's Geometry Library, including documentation and examples.
Hello,
Thanks for this submission. I have compiled and run successfully all the examples provided, and had a quick look at the documentation and implementation. What I've seen until now is quite convincing :-) Here are a few remarks...
* I think a lot of people will most often choose to initialize their points to 0. Maybe it would be better if the default behavior was init_zero, and to have an additional "init_none" value in the "init" enum, that could be used explicitly when needed? If non-init is the default, I'm already afraid of all the "uninitialized value" bugs that programmers will mistakenly produce...
* Maybe the parameterized constructor of point should be declared explicit, since implicitly converting a scalar to a point is likely to be an unintended error.
* You could take advantage of the "dimension-agnosticness" of your point concept more than you're currently doing in your algorithms. For example, I have rewritten the pythagoras algorithm to make it dimension-agnostic, this way:

template <typename P1, typename P2, size_t I, typename T>
struct compute_pythagoras
{
    static T result(const P1& p1, const P2& p2)
    {
        T d = p2.template value<I-1>() - p1.template value<I-1>();
        return d*d + compute_pythagoras<P1, P2, I-1, T>::result(p1, p2);
    }
};

template <typename P1, typename P2, typename T>
struct compute_pythagoras<P1, P2, 0, T>
{
    static T result(P1, P2) { return 0; }
};

template <typename P1, typename P2 = P1>
struct pythagoras
{
    static inline bool squared() { return true; }

    inline distance_result operator()(const P1& p1, const P2& p2) const
    {
        typedef typename select_type_traits
            <
                typename P1::coordinate_type,
                typename P2::coordinate_type
            >::type T;
        return distance_result(
            compute_pythagoras<P1, P2, P1::coordinate_count, T>::result(p1, p2),
            true);
    }
};

This way you can use pythagoras either with a point_xy or with other point classes of higher dimensions. I've quickly made up a point_xyz class and it worked perfectly after having defined a strategy for it. Not every algorithm has to be that generic; sometimes it just doesn't make sense. But for the simplest ones, and if it's not too much effort, I think it's worth it.
Regards
Bruno

Bruno, Bruno Lalande wrote:
Herewith the URL to the second preview of Geodan's Geometry Library, including documentation and examples.
Hello,
Thanks for this submission. I have compiled and run successfully all the examples provided, and had a quick look at the documentation and implementation. What I've seen until now is quite convincing :-)
Good to hear!
Here are a few remarks...
* I think a lot of people will most often choose to initialize their points to 0. Maybe it would be better if the default behavior was init_zero, and to have an additional "init_none" value in the "init" enum, that could be used explicitly when needed? If non-init is the default, I'm already afraid of all the "uninitialized value" bugs that programmers will mistakenly produce...
OK, it is no problem to change that of course.
* Maybe the parameterized constructor of point should be declared explicit, since implicitly converting a scalar to a point is likely to be an unintended error.
Good idea.
* You could take advantage of the "dimension-agnosticness" of your point concept more than you're currently doing in your algorithms. For example, I have rewritten the pythagoras algorithm to make it dimension-agnostic, this way:
template <typename P1, typename P2, size_t I, typename T>
struct compute_pythagoras
{
    static T result(const P1& p1, const P2& p2)
    {
        T d = p2.template value<I-1>() - p1.template value<I-1>();
        return d*d + compute_pythagoras<P1, P2, I-1, T>::result(p1, p2);
    }
};

template <typename P1, typename P2, typename T>
struct compute_pythagoras<P1, P2, 0, T>
{
    static T result(P1, P2) { return 0; }
};

template <typename P1, typename P2 = P1>
struct pythagoras
{
    static inline bool squared() { return true; }

    inline distance_result operator()(const P1& p1, const P2& p2) const
    {
        typedef typename select_type_traits
            <
                typename P1::coordinate_type,
                typename P2::coordinate_type
            >::type T;
        return distance_result(
            compute_pythagoras<P1, P2, P1::coordinate_count, T>::result(p1, p2),
            true);
    }
};
This way you can use pythagoras either with a point_xy or with other point classes of higher dimensions. I've quickly made up a point_xyz class and it worked perfectly after having defined a strategy for it. Not every algorithm has to be that generic; sometimes it just doesn't make sense. But for the simplest ones, and if it's not too much effort, I think it's worth it.
Thanks for your close look and this nice elaboration! I will replace it. To be honest, I've done some experiments, also with a polygon with a compile-time number of vertices. I called it the template <size_t D> gon, so a gon<3> for a triangle, etc. It disappointed me a bit because the compile-time area calculation routine which I drafted turned out to be slower than the runtime version... So it is interesting to test this with Pythagoras as well. However, it's looking beautiful like this. Regards, Barend
Regards Bruno
-- Barend Gehrels, Geodan Holding b.v., Amsterdam, The Netherlands, www.geodan.nl

Barend Gehrels wrote:
To be honest I've done some experiments, also with a polygon with a compile-time number of vertices. Called it the template <size_t D> gon, so a gon<3> for a triangle, etc. It disappointed me a bit because the compile-time area calculation routine which I drafted turned out to be slower than the runtime version...
Do you mean that the version generated using a compile-time area calculation turns out to be slower at runtime than the native runtime version? I don't understand how this might be possible. What are you trying to achieve at compile time? Regards Hartmut

Hartmut Kaiser wrote:
To be honest, I've done some experiments, also with a polygon with a compile-time number of vertices. I called it the template <size_t D> gon, so a gon<3> for a triangle, etc. It disappointed me a bit because the compile-time area calculation routine which I drafted turned out to be slower than the runtime version...
Do you mean that the version generated using a compile-time area calculation turns out to be slower at runtime than the native runtime version? I don't understand how this might be possible. What are you trying to achieve at compile time?
In the post, the compile-time version unrolls the loop during compilation; it doesn't compute a result based on values specified in the source that are defined at compilation. Thus, the compile-time function could have a higher cost. It will almost certainly have a higher cost in a debug build, since many compilers do not inline in that situation. I'd have to think recursive function calls are going to be much more expensive than a simple for loop. In release, it really depends on how well a compiler inlines. I haven't done extensive testing on any current compiler, but I know MSVC 6.0 optimized very well when __forceinline was used judiciously, but was quite a bit more hit-or-miss when relying on its inlining decisions. -- Noah

Hello,
To be honest, I've done some experiments, also with a polygon with a compile-time number of vertices. I called it the template <size_t D> gon, so a gon<3> for a triangle, etc. It disappointed me a bit because the compile-time area calculation routine which I drafted turned out to be slower than the runtime version...
For the pythagoras version I've given above, my tests show that the performance is the same as with the original version with GCC 4, which is not surprising. This is true with or without optimizations (but I don't care about the "without" version, anyway).
Do you mean that the version generated using a compile-time area calculation turns out to be slower at runtime than the native runtime version? I don't understand how this might be possible. What are you trying to achieve at compile time?
Yep, it would be interesting to see your compile-time version, because the fact that it's slower sounds very surprising, indeed.
In the post, the compile-time version unrolls the loop during compilation; it doesn't compute a result based on values specified in the source that are defined at compilation. Thus, the compile-time function could have a higher cost. It will almost certainly have a higher cost in a debug build, since many compilers do not inline in that situation. I'd have to think recursive function calls are going to be much more expensive than a simple for loop. In release, it really depends on how well a compiler inlines. I haven't done extensive testing on any current compiler, but I know MSVC 6.0 optimized very well when __forceinline was used judiciously, but was quite a bit more hit-or-miss when relying on its inlining decisions.
When I meta-program, I usually don't care about what will happen with a non-optimizing compiler, since meta-programming relies entirely on the compiler's inlining and optimizing skills anyway. Thus, when it comes to comparing performance, I do it both with optimizations turned on and off, but I only care about the results of the "on" version (see above). Bruno
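The two alternatives being compared here, in miniature and with invented names: a template-recursive unrolled accumulation versus a plain runtime loop. With optimization on, both typically compile to the same code; without inlining, the recursive calls can indeed be slower:

#include <cstddef>

template <std::size_t I>
struct unrolled_sum
{
    static double apply(const double* a)
    {
        // one addition per template instantiation: the "unrolled loop"
        return a[I - 1] + unrolled_sum<I - 1>::apply(a);
    }
};

template <>
struct unrolled_sum<0>
{
    static double apply(const double*) { return 0.0; }
};

double loop_sum(const double* a, std::size_t n)
{
    double s = 0.0;
    for (std::size_t i = 0; i != n; ++i) s += a[i];
    return s;
}

// usage: double s1 = unrolled_sum<4>::apply(a);  double s2 = loop_sum(a, 4);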

A few more notes after looking through the documentation at length:
- I agree that it makes sense to make Point a concept. But it looks like the other geometries need it even more. For example, geometry::polygon<> encapsulates a fairly complicated data structure but adds little functionality on top of it; linestring and linear_ring provide even less. On the other hand, I imagine the algorithms don't need full container functionality from the geometries in most cases and could get by with a sequence (pair of iterators). Perhaps formulating concepts around those entities would help generalize the algorithms even further. I realize that at this point I should be suggesting an alternative design :) but to do that I'll have to look at the algorithm implementations first, not just the interfaces... and I'll try to get to it later. My own generic algorithms usually dealt with sequences alone, STL-style, but I've never parameterized them as extensively as you do. (A sketch of the iterator-pair idea follows this message.)
- Arithmetic with points and vectors (operator overloading and simple functions): it is mostly syntactic sugar, but it tends to make code shorter and more readable. I'm sure you have some, maybe a lot of it, but the docs are a bit hard to navigate in that respect.
- Another 2 types of geometries I found useful in certain algorithms (e.g. computing straight skeletons or line widening) are infinite lines and rays, along with the segments that you already have. The key operations for them are of course intersections.
- The point-to-linestring distance algorithm should be able to provide the information (an iterator?) as to which segment (or vertex) of the line turned out to be the closest.
- Something I was thinking about but have never done yet... which of the geometrical algorithms could be effectively implemented on sequences of polynomial curve segments (quadratic and cubic Beziers are my natural favorites)? If feasible, this would be a very powerful tool for 2D graphics.
More to come.
...Max...
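The iterator-pair idea from the first note, as a hedged sketch; the point type and metric below are stand-ins invented here, not the library's:

#include <cmath>

struct pt { double x, y; };                 // stand-in point type

double distance(const pt& a, const pt& b)   // stand-in point-point metric
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// The algorithm needs only a sequence of points, not a full
// linestring/polygon class: any container, or a plain array, works.
template <typename PointIterator>
double length(PointIterator first, PointIterator last)
{
    double sum = 0.0;
    if (first == last) return sum;
    PointIterator prev = first;
    for (++first; first != last; ++prev, ++first)
        sum += distance(*prev, *first);
    return sum;
}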

- Another 2 types of geometries I found useful in certain algorithms (e.g. computing straight skeletons or line widening) are infinite lines and rays, along with the segments that you already have. The key operations for them are of course intersections.
- Point to linestring distance algorithm should be able to provide the information (iterator?) as to which segment (or vertex) of the line turned out to be the closest.
- Something I was thinking about but have never done yet... which of the geometrical algorithms could be effectively implemented on sequences of polynomial curve segments (quadratic and cubic Beziers are my natural favorites)? If feasible, this would be a very powerful tool for 2D graphics.
More to come.
...Max...
I am extremely interested in a boost affine geometry library, but given the history on this mailing list I suspect the best approach might be to start with less rather than more. I don't know if this works with boost libraries but I think I would like it if a minimal point/vector/coordinates library, which basically just provided concepts and models for proof of concept, was accepted and additional algorithms on top of them came later. -- John C. Femiani

I am extremely interested in a boost affine geometry library, but given the history on this mailing list I suspect the best approach might be to start with less rather than more.
Absolutely agree; some of my suggestions are definitely well out there. The reason I am airing them here is (a) to get thoughts on the subject from interested people, which might help me implement some of it, and (b) to see if I might be able to work the prospective implementations into an open-source library that may perhaps be accepted into Boost. I've grown rather tired of re-writing the same libraries (granted, with improvements) over and over for each new project :) ...Max...

Max Motovilov wrote:
I am extremely interested in a boost affine geometry library, but given the history on this mailing list I suspect the best approach might be to start with less rather than more.
Absolutely agree; some of my suggestions are definitely well out there. The reason I am airing them here is (a) to get thoughts on the subject from interested people, which might help me implement some of it, and (b) to see if I might be able to work the prospective implementations into an open-source library that may perhaps be accepted into Boost. I've grown rather tired of re-writing the same libraries (granted, with improvements) over and over for each new project :)
I've written vector, point, and matrix classes many, many times. I'm in the middle of writing another one at the moment to try out proto. I think a great geometry library would be an enormous asset. Having said that, I've been around for a couple of the discussions here about such a library. There's not a lot of consensus. My apologies, but to illustrate, I'll show my lack of consensus:
* I like that the default point constructor does nothing, as I hate the potential inefficiency if I don't have my initial data ready at construction time. I'm not a big fan of the init_option constructor. Isn't the init_option almost always going to be defined at compile time, making the switch-case wasteful?
* Why don't you have a vector class? It would conceptually be used in situations such as computing the area of a polygon. Instead, I see that you've written out the subtraction by hand.
* I'm not a fan of a point_xy class. Scaling up to 3D, you now need a separate point_xyz class. There's going to be a lot of similar code that need only be written once if the class were templated on dimension as well as the internal element type. In addition, the X() and Y() accessor functions aren't as readable, to me, as operator[], e.g. v3.Y(v1.Y()+v2.Y()) vs. v3[Y]=v1[Y]+v2[Y], assuming Y is something along the lines of const int Y = 1.
I'm not a GIS developer. What you have might be very standard in the world of GIS. It's a bit different to the things I see every day. There are probably a few other ways of doing it too. I'd be rather happy if the community could find consensus, and I'm willing to discuss it. Good luck, Noah

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Noah Stein Sent: Tuesday, March 25, 2008 11:17 PM
I am extremely interested in a boost affine geometry library, but given the history on this mailing list I suspect the best approach might be to start with less rather than more.
Absolutely agree; some of my suggestions are definitely well out there. The reason I am airing them here is (a) to get thoughts on the subject from interested people, which might help me implement some of it, and (b) to see if I might be able to work the prospective implementations into an open-source library that may perhaps be accepted into Boost. I've grown rather tired of re-writing the same libraries (granted, with improvements) over and over for each new project :)
I've written vector, point, and matrix classes many, many times. I'm in the middle of writing another one at the moment to try out proto. [John Femiani]
I definitely agree that expression templates will be a great benefit to a point library. Basically, for me to be happy I need:
1) Documentation and code which follows the patterns in Boost.Concept, explicitly providing concepts that I can check in code, and also providing archetypes and one or two reasonable models (models are less important to me).
2) A coordinates concept, which defines all of the typical operators much like a std::valarray, but is fixed-sized like a Fusion sequence. This generalizes the ideas of an affine point and vector, and it is really what many people on this list seem to mean by a "point". I am not sure whether points should be homogeneous vs. heterogeneous in terms of their element types, or whether run-time indexing is important enough to be handled by a separate concept in addition to compile-time indexing.
3) A point concept, with an archetype, and at least a 2d and 3d model. The point concept should be coordinate-free; the coordinates concept should be separate. Affine transforms therefore should be treated like function objects rather than C++ operators: you say scale(s)(point) rather than p *= s, unless your point also satisfies CoordinateConcept (a rough sketch follows this message).
4) A vector concept, with an archetype, and at least a 2d and 3d model.
5) A concept for an affine transform, and models for the basic ones: translate, rotate, and scale. A transform should be applicable to another transform as well as to a point or a vector. The result of a transform should be implementation-defined, so it may or may not be an expression type (a la proto?). The archetype needs to take this into account.
6) Conversion functions, make_vector, make_point, and make_coordinates that make adaptors or something so that one type can be forced to another. The adaptors should have zero runtime overhead.
I have been saying "affine" geometry because I think that other geometries (i.e. projective), while very useful, should be handled after the affine case is nailed down. There are some behavioral aspects of points & vectors that may need runtime tests. The geometry library should have well-documented invariants that can be checked for any geometric type. Anything beyond points, vectors, coordinates, and transforms is probably overkill in my opinion (so far); I would really just like this one thing nailed down. -- John C. Femiani
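A rough sketch of requirements 3 and 5, transforms as function objects with composition; every type and name here is invented for illustration:

struct point2 { double x, y; };

struct scale
{
    double s;
    explicit scale(double s_) : s(s_) {}
    point2 operator()(point2 p) const { return point2{p.x * s, p.y * s}; }
};

struct translate
{
    double dx, dy;
    translate(double dx_, double dy_) : dx(dx_), dy(dy_) {}
    point2 operator()(point2 p) const { return point2{p.x + dx, p.y + dy}; }
};

// Applying a transform to a transform yields a composite transform:
template <typename F, typename G>
struct composed
{
    F f; G g;
    point2 operator()(point2 p) const { return f(g(p)); }
};

template <typename F, typename G>
composed<F, G> compose(F f, G g) { return composed<F, G>{f, g}; }

// usage: point2 q = compose(translate(1.0, 0.0), scale(2.0))(p);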

Hi,
I'm not a big fan of the init_option constructor. Isn't the init_option almost always going to be defined at compile time, making the switch-case wasteful?
I had also noticed this when reading the code, but I told myself that the compiler can figure this out by itself and optimize. Just to verify this, I've replaced the enum stuff by policies. On the way, those policies also replace runtime indexing by compile-time indexing:

template <class P, int I>
struct initialize_point
{
    static void init(P& p, typename P::coordinate_type value)
    {
        p.template value<I-1>(value);
        initialize_point<P, I-1>::init(p, value);
    }
};

template <class P>
struct initialize_point<P, 0>
{
    static void init(P&, typename P::coordinate_type) {}
};

struct init_zero_t
{
    template <class P>
    static void init(P& p)
    {
        initialize_point<P, P::coordinate_count>::init(
            p, typename P::coordinate_type());
    }
};
init_zero_t init_zero;

struct init_min_infinite_t
{
    template <class P>
    static void init(P& p)
    {
        initialize_point<P, P::coordinate_count>::init(
            p, boost::numeric::bounds<typename P::coordinate_type>::lowest());
    }
};
init_min_infinite_t init_min_infinite;

struct init_max_infinite_t
{
    template <class P>
    static void init(P& p)
    {
        initialize_point<P, P::coordinate_count>::init(
            p, boost::numeric::bounds<typename P::coordinate_type>::highest());
    }
};
init_max_infinite_t init_max_infinite;

struct init_inverse_t {};
init_inverse_t init_inverse;

The constructor of point that used to do a switch on the enum is now a template that can handle any of the policies defined (except for init_inverse). This obviously makes it much shorter:

template <class init_policy>
inline point(init_policy policy)
{
    policy.init(*this);
}

init_inverse is empty since it's handled in the box class itself:

template <class init_policy>
inline box(init_policy policy)
    : m_min(policy)
    , m_max(policy)
{}

inline box(init_inverse_t)
    : m_min(init_max_infinite)
    , m_max(init_min_infinite)
{}

The point_ll and point_xy classes' constructors are also adapted:

template <class init_policy>
inline point_ll(init_policy policy)
    : point<T, 2>(policy)
{}

template <class init_policy>
inline point_xy(init_policy policy)
    : point<T, 2>(policy)
{}

Tests show that the performance obtained is about the same. The compile-time version is not really faster, since the for loops were probably already unrolled by the compiler, and the switch statement was also precomputed since it was based on a compile-time value. So the choice between one version or the other is really a matter of taste... Bruno

I am extremely interested in a boost affine geometry library, but given the history on this mailing list I suspect the best approach might be to start with less rather than more.
Absolutely agree; some of my suggestions are definitely well out there.
Yep, I strongly agree too. We really should focus on having a basic and widely accepted geometry toolbox for now. Bruno

Barend, something I forgot to mention in my first post: while integrating my point_xyz class I faced a slight issue. In distance.hpp, the overload of distance() at line 111 calls the one at line 132, which is thus not yet defined at that point. I had to swap their respective positions to get it compiling. I don't know why it works with the original point_xy class, but maybe it would be better to actually make this modification in your code? Bruno

Bruno,
Barend, something I forgot to mention in my first post: while integrating my point_xyz class I faced a slight issue. In distance.hpp, the overload of distance() at line 111 calls the one at line 132, which is thus not yet defined at that point. I had to swap their respective positions to get it compiling. I don't know why it works with the original point_xy class, but maybe it would be better to actually make this modification in your code?
Bruno
Thanks, it will be changed. Which compiler do you use, by the way? Barend

I am extremely interested in a boost affine geometry library, but given the history on this mailing list I suspect the best approach might be to start with less rather than more. I don't know if this works with boost libraries but I think I would like it if a minimal point/vector/coordinates library, which basically just provided concepts and models for proof of concept, was accepted and additional algorithms on top of them came later.
-- John C. Femiani
This has been well discussed many times: make Point a concept and allow many models of the Point concept. It is a mistake to have a single do-it-all point class. What if I want a 2-dimensional point with a short for x and a long for y? Your class can never satisfy all possibilities. The best strategy is to allow multiple point models to be usable in your algorithms:

cout << distance(p1, p2) << endl;
cout << distance(p2, p3) << endl;

Joel de Guzman
This is how it is implemented. More points are possible, two points are
Thanks for all the answers and comments on my preview. I knew that the subject is difficult and lends itself to many discussions. I will not answer all mails individually, so I paste some snippets to cover most things.
First about the vector/matrix calculations. They are intentionally left out of the library, because they are a separate thing. You can do geometry without (much) vector/matrix calculation, and you can use vector/matrix without doing geometry. Besides that, it is already in Boost: uBLAS contains vector/matrix calculations. So I decided not to do it again. To make this clearer I reworked a sample and added it to the preview: http://geometrylibrary.geodan.nl/examples.html, see "6 transformation example". Besides transformations, there are small vector calculations necessary inside any geometry library. They are now coded inside, but another library could be used. This is also explained on the example page.
Then about the point class: provided strategies can be defined for any point type, the distance calculation can be done exactly as stated above.
About the "point concept": I've looked at the concepts, downloaded and tried the concept compiler. However, in this geometry library more concepts are possible. As long as you attach the right strategies to the points, you can use any point; at least, that is the intention. I provided a basic point with an array and neutral names such as "value", but even that is not really necessary. In the documentation I added notes about the concepts and requirements, with some classes, but not yet with all of them.
About the point_xyz: I intentionally left that out so as not to give the impression that the library is a 3D library. 3D could be added, but at this moment it is not. Algorithms are for 2D points, and after the first preview I added the latlong points as well (but I didn't implement all strategies for them).
Finally, about the compile-time area calculation: sorry that I mentioned it, it was really a small experiment. Because there is interest I will look at it again, but it should not distract the list from the focus: the preview, without vector/matrix calculations, with affine geometry and latlong geometry, and whether that might be a starting point for a Boost geometry library candidate.
Best regards, Barend
Barend Gehrels, Geodan, Amsterdam.

Barend Gehrels wrote:
About the "point concept". I've looked at the concepts, downloaded and tried the concept compiler.
Hi Barend, Other people have expressed a desire that you describe your library in terms of concepts; I agree with that, see e.g. John Femiani's messages. Based on the comment quoted above I wonder if perhaps you have misunderstood what is being asked for. I don't think anyone is asking for code that will run on ConceptGCC. I would be quite happy for you to simply write your _documentation_ in terms of concepts.
If we take your point class as an archetype of your point concept, I think this means that algorithm implementations have to use the operator[] notation while "user" code can choose to use the .x()/.y() notation. For what it's worth, as a potential algorithm author I'm not at all enthusiastic about that: but I know that [] is other people's preferred style. No doubt once we look at the concepts for everything else there will be myriad other similar choices. I fear what it comes down to is this: if you present a minimalistic library, people will focus on and disagree about these details; neither side of the ".x vs [0]" argument is "right", and I don't think there's a solution that keeps both happy. Only by presenting a library that has compelling merit of its own (for example within its algorithms) will you get people to set aside their stylistic preferences and follow your concepts. Regards, Phil.

About the "point concept". I've looked at the concepts, downloaded and tried the concept compiler.
Based on the comment quoted above I wonder if perhaps you have misunderstood what is being asked for. I don't think anyone is asking for code that will run on ConceptGCC. I would be quite happy for you to simply write your _documentation_ in terms of concepts.
If we take your point class as an archetype of your point concept, I think this means that algorithm implementations have to use the operator[] notation while "user" code can choose to use the .x()/.y() notation. For what it's worth, as a potential algorithm author I'm not at all enthusiastic about that: but I know that [] is other people's preferred style. No doubt once we look at the concepts for everything else there will be myriad other similar choices. I fear what it comes down to is this: if you present a minimalistic library, people will focus on and disagree about these details; neither side of the ".x vs [0]" argument is "right", and I don't think there's a solution that keeps both happy. Only by presenting a library that has compelling merit of its own (for example within its algorithms) will you get people to set aside their stylistic preferences and follow your concepts.
Hi Phil, Thanks for your comments, and indeed, I was somewhat confused about the concepts. I think Boost.Concepts plus documentation will be the right way. In the documentation there are some comments about it, but that's obviously not enough. Will work on it. The library is more or less designed as you describe: the points are as small as possible, minimalistic, with algorithms working on them. There is not yet an overwhelming number of algorithms, I admit; many other libraries have much more. But that will grow, and it can also be positive: things can be changed at this stage. Barend

Phil Endecott wrote:
Barend Gehrels wrote:
About the "point concept". I've looked at the concepts, downloaded and tried the concept compiler.
Hi Barend,
Other people have expressed a desire that you describe your library in terms of concepts; I agree with that, see e.g. John Femiani's messages.
Based on the comment quoted above I wonder if perhaps you have misunderstood what is being asked for. I don't think anyone is asking for code that will run on ConceptGCC. I would be quite happy for you to simply write your _documentation_ in terms of concepts.
Exactly!
If we take your point class as an archetype of your point concept, I think that this means that algorithm implementations have to use the operator[] notation while "user" code can choose to use .x()/.y() notation. For what it's worth, as a potential algorithm author I'm not at all enthusiastic about that:
Me too.
but I know that [] is other people's preferred style.
[integer] implies that the elements have the same type.
No doubt once we look at the concepts for everything else there will be myriad other similar choices.
Yes. Fusion style at_c<N>(p) comes to mind (see the sketch after this message).
I fear what it comes down to is this: if you present a minimalistic library, people will focus on and disagree about these details; neither side of the ".x vs [0]" argument is "right", and I don't think there's a solution that keeps both happy.
Yes.
Only by presenting a library that has compelling merit of its own (for example within its algorithms) will you get people to set aside their stylistic preferences and follow your concepts.
Adaptability with existing and future point classes and geometry/graphics libraries is a compelling point, and one that should not be taken lightly. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net
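A sketch of the Fusion-style access mentioned above: a free get<N>() as the concept's single access point, so both an .x()/.y() class and an operator[]-style class can conform without changing. All types here are invented:

#include <cstddef>

struct point_members            // a "named accessors" style point
{
    double x_, y_;
    double x() const { return x_; }
    double y() const { return y_; }
};

template <std::size_t N> double get(const point_members& p);
template <> inline double get<0>(const point_members& p) { return p.x(); }
template <> inline double get<1>(const point_members& p) { return p.y(); }

struct point_array { double c[2]; };   // an "indexed" style point

template <std::size_t N>
double get(const point_array& p) { return p.c[N]; }

// A generic algorithm only ever uses get<N>, never .x() or []:
template <typename P1, typename P2>
double dot2(const P1& a, const P2& b)
{
    return get<0>(a) * get<0>(b) + get<1>(a) * get<1>(b);
}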

Hi Barend, I see that your distance functions now return a (double,bool) pair, with the bool indicating whether the double is the actual distance or the square of the distance. This does solve the problem of avoiding expensive sqrt()s, but it doesn't look very appealing. How about this instead:

double greatcircle_distance(point a, point b)
{
    // return the actual distance
    return some_trigonometry(a, b);
}

squared<double> pythag_distance(point a, point b)
{
    // return the squared distance, to avoid the sqrt() when it's not needed:
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    return squared<double>(dx*dx + dy*dy);
}

where squared<double> is convertible to double:

template <typename T>
class squared
{
    T sq;
public:
    explicit squared(T sq_): sq(sq_) {}
    operator T() { return sqrt(sq); }
};

so if I write

double d = pythag_distance(p1, p2);

I get the actual distance, with the sqrt() done in the conversion from squared<double> to double. Then, if I want to be efficient in code like this:

if (pythag_distance(p1,p2) > pythag_distance(p3,p4)) { ... }

I just need to define operator> on squared<T>, and the comparison is done without the expensive sqrt():

bool operator>(squared other) { return sq > other.sq; }

You can go further. If I'm searching through a long list of points to find the closest to P, then I'll avoid the squaring (and the conversion to double if my co-ordinates are integers) whenever possible. You can achieve this with a more complex distance proxy:

class distance_proxy
{
    double dx;
    double dy;
    distance_proxy(double dx_, double dy_): dx(dx_), dy(dy_) {}
    friend distance_proxy pythag_distance(point, point);
public:
    operator double() { return sqrt(dx*dx + dy*dy); }
    bool operator>(double d) { return dx>d || dy>d || (dx*dx + dy*dy > d*d); }
};

So this is convertible to double, but can be compared to a distance without any need for sqrt(), and with only multiplication in some cases. Further refinement is possible. I hope this is interesting, and invite readers to find the no-doubt numerous errors in my code.... Phil.

Hi Barend,
I see that your distance functions now return a (double,bool) pair, with the bool indicating whether the double is the actual distance or the square of the distance. This does solve the problem of avoiding expensive sqrt()s, but it doesn't look very appealing. How about this instead:
double greatcircle_distance(point a, point b)
{
    // return the actual distance
    return some_trigonometry(a, b);
}

squared<double> pythag_distance(point a, point b)
{
    // return the squared distance, to avoid the sqrt() when it's not needed:
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    return squared<double>(dx*dx + dy*dy);
}

where squared<double> is convertible to double:

template <typename T>
class squared
{
    T sq;
public:
    explicit squared(T sq_): sq(sq_) {}
    operator T() { return sqrt(sq); }
};
Hi Phil, Thanks for this. I made all "distance strategies" return the same type. The "simplify" algorithm, for example, uses this to compare distances without taking the square root. This works both for latlong and for xy-points, and will work for other points. This is the reason for this design. The difference from your approach, up to here, is not so large.
so if I write

double d = pythag_distance(p1, p2);

I get the actual distance, with the sqrt() done in the conversion from squared<double> to double.
Then, if I want to be efficient in code like this:
if (pythag_distance(p1,p2) > pythag_distance(p3,p4)) { ... }
I just need to define operator> on squared<T>, and the comparison is done without the expensive sqrt():
bool operator>(squared other) { return sq > other.sq; }
Good idea.
You can go further. If I'm searching through a long list of points to find the closest to P then I'll avoid the squaring (and conversion to double if my co-ordinates are integers) whenever possible. You can achieve this with a more complex distance proxy:
class distance_proxy
{
    double dx;
    double dy;
    distance_proxy(double dx_, double dy_): dx(dx_), dy(dy_) {}
    friend distance_proxy pythag_distance(point, point);
public:
    operator double() { return sqrt(dx*dx + dy*dy); }
    bool operator>(double d) { return dx>d || dy>d || (dx*dx + dy*dy > d*d); }
};
So this is convertible to double, but can be compared to a distance without any need for sqrt(), and with only multiplication in some cases. Further refinement is possible.
I hope this is interesting, and invite readers to find the no-doubt numerous errors in my code....
It is an interesting alternative, somewhat directed to the xy-case. Searching for the nearest (or farthest) can be done like this, but also with the "distance_result" variant. In fact I do that in the simplify routine. There I also avoid the square root, and that is why I introduced the "squared()" requirement in the concept of the distance strategy. This "squared()" can be avoided, because the tolerance can be squared at comparison, but then it has to be squared each time... or the previous choice has to be remembered (not so good). Besides this "distance_result" (another name is also OK if that is the problem, or maybe I shouldn't have derived it from std::pair) I also have an "intersection_result" somewhere. Barend Gehrels Geodan, Amsterdam

AMDG I noticed the equals function in math.hpp. I'm not sure about it. If a and b are both greater than 1, then if their difference is non-zero, it cannot be less than epsilon (which is defined as "the difference between 1 and the least value greater than 1 that is representable", 18.2.1.2). Thus, the test is equivalent to a == b. Also, the include guard is _GEOMETRY_MATH_HPP; "Each name that [...] begins with an underscore followed by an uppercase letter (2.11) is reserved to the implementation for any use." (17.4.3.1.2) In Christ, Steven Watanabe

AMDG
I noticed the equals function in math.hpp. I'm not sure about it. If a and b are both greater than 1, then if their difference is non-zero, it cannot be less than epsilon (which is defined as "the difference between 1 and the least value greater than 1 that is representable", 18.2.1.2). Thus, the test is equivalent to a == b.
Also, the include guard is _GEOMETRY_MATH_HPP; "Each name that [...] begins with an underscore followed by an uppercase letter (2.11) is reserved to the implementation for any use." (17.4.3.1.2)
In Christ, Steven Watanabe
Steven, The math.hpp contains things that should not be in this library. I would like to have them replaced by something in Boost, or maybe they are already in Boost but I overlooked them. I put them there to make them easy to change later on, and they probably do indeed need to change. Thanks. The include guards were meant to be preceded by something else... No problem to change them. Barend Gehrels Geodan, Amsterdam
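One common repair for the epsilon problem Steven describes is to make the tolerance relative by scaling epsilon with the operands' magnitude; a sketch only, the library's eventual equals() may well differ:

#include <algorithm>
#include <cmath>
#include <limits>

template <typename T>
bool almost_equal(T a, T b)
{
    // tolerance grows with the magnitude of the larger operand
    T scale = (std::max)(std::abs(a), std::abs(b));
    return std::abs(a - b) <= std::numeric_limits<T>::epsilon() * scale;
}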

Hi Barend, There was so much already discussed that I don't know where to start, so I decided to just sort of top-post, touching a bit of everything.
1) First of all, please pay close attention to Joel's suggestion and reconsider your library's design from a truly generic programming POV, especially the separation of models (data structures) from algorithms and properties (coordinates, etc.). In particular, try to understand the free-function approach portrayed by Joel's version of "distance".
2) As someone said (sorry, I couldn't keep track of who said what just yet), a monolithic point concept is almost as bad as a monolithic point class. Recall the coordinate-free suggestion from Hervé? Then a point is a point regardless of its coordinates, just as a line is a line regardless of the two points that can be used to define it. Orthogonal requirements imply separate concepts, so I would start from the requirements, that is, the algorithms: that will tell you what set of concepts collectively correspond to a point.
3) Don't forget arithmetic; that is, you'll need numeric concepts as well, and (as you can see from what is done in CGAL) a super number concept is too much and too little at the same time. Again, work from the algorithms, backwards, to discover the numeric concepts you need.
4) IMO you definitely need to bring Vectors into the game, along with their two most important operators: dot and cross products. For instance, the squared distance between two points should be defined AND implemented as the squared length of the vector from one point to the other. This removes any requirement that points have coordinates. And vectors don't need coordinates either (from a concept POV), since the squared length of a vector v is its dot product with itself, so all you need to define and implement a squared distance between two points is a dot product operation between vectors. As long as vector models (which know about their components) supply that, you don't need any mention of coordinates at all anywhere (to support that operation generically). Granted, some *users* may need to get their hands at the coordinates of a point, but the more coordinate-free the library itself is, the better. (A sketch of this follows this message.)
5) There is a reason why CGAL doesn't provide a distance function at all but only a squared_distance, and I agree with that reason: sqrt() kills robustness, and distances are usually used for comparisons, so you seldom need it.
6) There is a reason why CGAL is parametrized on a Kernel and not a coordinate type, and I agree with that reason: effective robustness is the result of the proper encapsulation of predicates and constructions, and a Kernel type provides a mechanism to *choose* that easily. That is, I can call an algorithm taking JUST a point class with double coordinates but have it balance accuracy vs. speed as I need it, at compile time, without wrapping the point class at all. That means I need to give the algorithm additional compile-time information. In CGAL that information is given by the Kernel type exposed by the Point type. It doesn't have to be like that exactly, but it is important to use a sufficiently useful type to tailor the algorithm. Parametrization on just the coordinate types is not enough.
.. more to come...
Best
--
Fernando Cacciola
SciSoft
http://fcacciola.50webs.com
http://groups.google.com/group/cppba
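The sketch promised in point 4: squared distance written purely in terms of vector subtraction and dot product, with the algorithm itself never touching coordinates. The concrete types are invented stand-ins for whatever models would supply:

struct vec2 { double x, y; };
struct pnt2 { double x, y; };

vec2 operator-(const pnt2& a, const pnt2& b) { return vec2{a.x - b.x, a.y - b.y}; }
double dot(const vec2& a, const vec2& b)     { return a.x * b.x + a.y * b.y; }

// The generic part: only point difference and dot() are required.
template <typename Point>
double squared_distance(const Point& a, const Point& b)
{
    return dot(a - b, a - b);
}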

Fernando Cacciola wrote:
5) There is a reason why CGAL doesn't provide a distance function at all but only a squared_distance, and I agree with that reason: sqrt() kills robustness and distances are usually used for comparisons, so you seldom need that.
This is one of the things I think proto can help with. I'm playing around with a simple vector/point/matrix library just to get a feel for it. proto should be able to transform expressions of the sort "length(a) > length(b)" into "dot(a,a) > dot(b,b)". I think proper use of proto can lead to high levels of expressivity combined with good optimization. -- Noah

Noah Stein wrote:
Fernando Cacciola wrote:
5) There is a reason why CGAL doesn't provide a distance function at all but only a squared_distance, and I agree with that reason: sqrt() kills robustness and distances are usually used for comparisons, so you seldom need that.
This is one of the things I think proto can help with. I'm playing around with a simple vector/point/matrix library just to get a feel for it. proto should be able to transform expressions of the sort "length(a) > length(b)" into "dot(a,a) > dot(b,b)". I think proper use of proto can lead to high levels of expressivity combined with good optimization.
Indeed, see below.

#include <iostream>
#include <boost/xpressive/proto/proto.hpp>
#include <boost/xpressive/proto/debug.hpp>
#include <boost/xpressive/proto/transform.hpp>
using namespace boost;
using namespace proto;

struct length_impl
{
    friend std::ostream &operator<<(std::ostream &sout, length_impl)
    {
        return sout << "length_impl";
    }
};

struct dot_impl
{
    // dot implementation here
    friend std::ostream &operator<<(std::ostream &sout, dot_impl)
    {
        return sout << "dot_impl";
    }
};

terminal<length_impl>::type const length = {{}};
terminal<dot_impl>::type const dot = {{}};

// work around msvc bugs...
#if BOOST_WORKAROUND(BOOST_MSVC, BOOST_TESTED_AT(1500))
#define dot_impl() make<dot_impl>
#define _arg1(a) call<_arg1(a)>
#define _make_function(a,b,c) call<_make_function(a,b,c)>
#define _make_terminal(a) call<_make_terminal(a)>
#endif

// convert length(a) < length(b) to dot(a,a) < dot(b,b)
struct Convert
  : when<
        less<
            function<terminal<length_impl>, _>
          , function<terminal<length_impl>, _>
        >
      , _make_less(
            _make_function(
                _make_terminal(dot_impl())
              , _arg1(_arg0)
              , _arg1(_arg0)
            )
          , _make_function(
                _make_terminal(dot_impl())
              , _arg1(_arg1)
              , _arg1(_arg1)
            )
        )
    >
{};

int main()
{
    int i = 0;
    display_expr(length(1) < length(2));
    display_expr(Convert()(length(1) < length(2), i, i));
}

This program prints:

less(
    function(
        terminal(length_impl)
      , terminal(1)
    )
  , function(
        terminal(length_impl)
      , terminal(2)
    )
)
less(
    function(
        terminal(dot_impl)
      , terminal(1)
      , terminal(1)
    )
  , function(
        terminal(dot_impl)
      , terminal(2)
      , terminal(2)
    )
)

In addition to demonstrating how to transform expressions, it's also a neat demonstration of how buggy msvc is wrt function types. :-/
--
Eric Niebler
Boost Consulting
www.boost-consulting.com

Noah, Fernando: Apart from the simple known expressions (such as eliminating a sqrt from the comparison), symbolic manipulation won't be able to do that much, although it does have real benefits for automatic code generation. Here is what I mean: I had that idea early on, and advised a BSc thesis on the topic [1]. I was trying to see if I could, from the simplest computations, compute (complicated) expressions and then use a CAS (computer algebra system, in this case Maple, which has a function to_C() and functions factor()/simplify() to reduce expressions and output them as a C program). Specifically, I wanted to bootstrap and derive algebraic expressions for complex primitives in terms of a small core of very simple ones (e.g., line through two points, line intersection, perpendicular line passing through a point).

I created a CGAL "symbolic" kernel, with a number type that created an expression tree. I would then implement and use the simplest constructions, e.g.:

squared_point_line_distance(x, y, a, b, c)
{
    Point p(x, y);
    Line l(a, b, c); // ax + by + c = 0
    Line m = construct_perp_line_passing_through(p, l);
    Point q = construct_intersection(l, m);
    return point_point_squared_distance(p, q);
}

Similar geometric constructions give circumcenter(p, q, r) as the intersection of perp_bisector(p, q) and perp_bisector(p, r). You can generalize this to 3D, polar coordinates, etc., as long as you've implemented your small core of simple functions.

squared_point_line_distance as above results in a "naive" expression using 9 additions, 20 multiplications, and 2 divisions. After Maple is done, you get back the well-known (x * a + y * b + c)^2 / (a^2 + b^2), which is 3 additions, 5 multiplications, and 1 division (written out directly as code after this message). The circumcenter(p, q, r) construction as above gives 18 adds, 16 mults, 1 div, and after simplification you get the well-known formula with 12 adds, 12 mults, 1 div.

The approach isn't without benefits and can automate the construction of a suite of C++ code for computing common geometric primitives. But beyond those that were already well-known, such as the above, we didn't witness that much improvement. Where we thought we would gain more was in lesser-studied representations (lat-lon, polar coordinates), where getting efficient symbolic expressions demands more computation. The approach doesn't really pan out for predicates, due to the branching (if statements, mostly on signs of algebraic expressions), which is usually not well handled by a CAS. It worked up to a point, but after that, not much.

The final (efficient) code is also unreadable, but its correctness stems from the correctness/simplicity of the original construction and of the CAS expression manipulation. We could compute equivalent expressions and reduce the number of primitive operations by some quantity. However, there was also no guarantee that the generated expressions would be numerically stable. See Allen's thesis for details, and his conclusion for why this system might be interesting and what its limitations are. Maybe some specialized CAS engine, with simplification tailored to the application domain, can help. I played around with Yacas, but with no results...

--
Hervé Brönnimann
hervebronnimann@mac.com

[1] Allen Clement, "MGK: A geometric meta kernel", May 1, 2000, Princeton University, Dept. of CS. Available at http://photon.poly.edu/~hbr/publi/allen_bsthesis.pdf

On Mar 28, 2008, at 3:04 AM, Noah Stein wrote:
Fernando Cacciola wrote:
5) There is a reason why CGAL doesn't provide a distance function at all but only a squared_distance, and I agree with that reason: sqrt() kills robustness and distances are usually used for comparisons, so you seldom need that.
This is one of the things I think proto can help with. I'm playing around with a simple vector/point/matrix library just to get a feel for it. proto should be able to transform expressions of the sort "length(a) > length(b)" into "dot(a,a) > dot(b,b)". I think proper use of proto can lead to high levels of expressivity combined with good optimization.
-- Noah
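For reference, the simplified expression Hervé gets back from Maple, written out directly as C++; this is just the textbook formula for the squared distance from a point to the line ax + by + c = 0, and its operation counts match the ones quoted above:

// Squared distance from point (x, y) to the line ax + by + c = 0:
// the simplified form costs 3 additions, 5 multiplications and
// 1 division, versus the naive construction's 9 / 20 / 2.
inline double squared_point_line_distance(double x, double y,
                                          double a, double b, double c)
{
    double const n = a * x + b * y + c; // signed, scaled distance
    return (n * n) / (a * a + b * b);
}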

Fernando Cacciola wrote:
Hi Barend,
There was so much already discussed that I don't know where to start, so I decided to just sort of top-post touching a bit of everything.
1) First of all, please pay close attention to Joel's suggestion and reconsider your library's design from a truly generic programming POV, especially to separate models (data structures) from algorithms and properties (coordinates, etc.).
In particular, try to understand the free-member approach portrayed by Joel's version of "distance".
Hi Fernando, I wrote a reaction to point 1) already, but as this thread is long and there is much discussion, I will repeat it in other words. Joel wrote this:
liba::point p1; libb::point<int> p2; POINT p3; // MFC
cout << distance(p1, p2) << endl; cout << distance(p2, p3) << endl;
and it is implemented in the 2nd preview exactly like this. The library supports different point types now. For any point type a distance strategy can be built. Of course there are built-in strategies for built-in points (at this moment point_xy for 2D Euclidean, point_ll for latlong). Below is a small piece of the distance_example I wrote, from http://geometrylibrary.geodan.nl/5__distance__example_8cpp-example.html#a6

std::cout << "latlong: " << 0.001 * distance(a, r) << " km" << std::endl;
std::cout << "RD: " << 0.001 * distance(a_rd, r_rd) << " km" << std::endl;

where a, r are instances of the point_ll class, and a_rd, r_rd are instances of the point_xy class (forget RD, it is just a specific projection). Point classes might be completely different. The classes above are derived from a common ancestor, but that is *not* required.
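As an illustration of the strategy idea, here is a guess at the shape of such a mechanism; the trait and strategy names below are invented for illustration and are not the library's actual API:

#include <cmath>

// Hypothetical sketch: a trait maps each point type to the algorithm
// used for it, so one generic distance() works for any registered
// point type. All names are invented.
struct point_xy { double x, y; };

struct pythagoras_strategy
{
    static double apply(point_xy const& a, point_xy const& b)
    {
        double const dx = b.x - a.x;
        double const dy = b.y - a.y;
        return std::sqrt(dx * dx + dy * dy);
    }
};

// primary template left undefined: a point type without a registered
// strategy fails at compile time
template <class Point> struct distance_strategy;

template <> struct distance_strategy<point_xy>
{
    typedef pythagoras_strategy type;
};

// a latlong point type would register its trigonometric strategy in
// exactly the same way

template <class Point>
double distance(Point const& a, Point const& b)
{
    return distance_strategy<Point>::type::apply(a, b);
}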
2) As someone said (sorry I couldn't keep track of who said what just yet), a monolithic point concept is almost as bad as a monolithic point class. Recall the coordinate-free suggestion from Hervé? Then a point is a point regardless of its coordinates, just as a line is a line regardless of the two points that can be used to define it. Orthogonal requirements imply separate concepts, so I would start from the requirements, that is, the algorithms: that will tell you what set of concepts collectively correspond to a point.
I recall Hervé's suggestion, and I followed it literally. It is the largest difference between Preview 1, which indeed fell short in this respect, and Preview 2, where points are coordinate-free.
3) Don't forget arithmetic; that is, you'll need numeric concepts as well, and (as you can see from what is done in CGAL) a super number concept is too much and too little at the same time. Again, work from the algorithms, backwards, to discover the numeric concepts you need.
4) IMO you definitely need to bring vectors into the game, along with their two most important operations: dot and cross products. For instance, the squared distance between two points should be defined AND implemented as the squared length of the vector from one point to the other. This removes any requirement that points have coordinates. And vectors don't need coordinates either (from a concept POV), since the squared length of a vector v is its dot product with itself. So all you need to define and implement a squared distance between two points is a dot product operation between vectors. As long as vector models (which know about their components) supply that, you don't need any mention of coordinates at all anywhere (to support that operation generically). Granted, some *users* may need to get their hands on the coordinates of a point, but the more coordinate-free the library itself is, the better.
I'm not a CGAL expert, and did not plan to rebuild CGAL. Your suggestion is interesting. However, it is directed at the xy-case or xyz-case, not at the latlong case. With the library in this preview, distances between points on Earth, or on any sphere, can be calculated. Those calculations cannot be done using vectors but need trigonometry. The essential point of the library is: totally different point types, with accompanying strategies. And of course you can use vector calculations in those strategies. I implemented the distance without them, using just the "good old" Pythagorean rule, delivering a squared distance... But point-segment distance needs, and has, vector calculations.
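For the latlong case, the kind of trigonometric strategy meant here could look like this minimal sketch: the well-known haversine great-circle formula on a sphere. The function name and the mean Earth radius default are assumptions for illustration, not the library's actual code:

#include <cmath>

// Great-circle distance between two (lat, lon) points given in
// radians, via the haversine formula on a sphere. The default
// radius is a mean Earth radius in metres (an assumption).
double haversine_distance(double lat1, double lon1,
                          double lat2, double lon2,
                          double radius = 6371000.0)
{
    double const dlat = lat2 - lat1;
    double const dlon = lon2 - lon1;
    double const h = std::sin(dlat / 2) * std::sin(dlat / 2)
                   + std::cos(lat1) * std::cos(lat2)
                     * std::sin(dlon / 2) * std::sin(dlon / 2);
    return radius * 2.0 * std::asin(std::sqrt(h));
}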
5) There is a reason why CGAL doesn't provide a distance function at all but only a squared_distance, and I agree with that reason: sqrt() kills robustness and distances are usually used for comparisons, so you seldom need that.
Agreed, and it is implemented like that for xy-points. For latlong points it doesn't make sense.
6) There is a reason why CGAL is parametrized on a Kernel and not a coordinate type, and I agree with that reason: effective robustness is the result of the proper encapsulation of predicates and constructions, and a Kernel type provides a mechanism to *choose* that easily. That is, I can call an algorithm taking JUST a point class with double coordinates but have it balance accuracy vs. speed as I need, at compile time, without wrapping the point class at all. That means I need to give the algorithm additional compile-time information. In CGAL that information is given by the Kernel type exposed by the Point type. It doesn't have to be exactly like that, but it is important to use a sufficiently rich type to tailor the algorithm. Parametrization on just the coordinate type is not enough.
I think this is solved by the strategy approach. I'm not a CGAL expert; for me the problem with CGAL is the license. I cannot use it, and therefore I'm not that interested in it. Most of what I know about it comes from this mailing list.
.. more to come...
OK, thanks for your comments! Barend
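For comparison with the strategy approach, here is a bare-bones sketch of the kernel parametrization Fernando describes in point 6: the kernel bundles the predicates an algorithm needs, so accuracy vs. speed is chosen at compile time without wrapping the point type. All names are invented for illustration:

// Hypothetical sketch of kernel parametrization. All names invented.
struct point { double x, y; };

struct fast_kernel
{
    // plain floating-point orientation test: fast, not robust
    static double orientation(point const& a, point const& b, point const& c)
    {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }
};

struct robust_kernel
{
    // in a real library this would be an exact or filtered predicate
    // (e.g. Shewchuk's orient2d); the naive form merely stands in here
    static double orientation(point const& a, point const& b, point const& c)
    {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }
};

// the algorithm takes the kernel as an extra compile-time parameter;
// the point class itself is untouched:
//     left_turn<fast_kernel>(a, b, c);
//     left_turn<robust_kernel>(a, b, c);
template <class Kernel>
bool left_turn(point const& a, point const& b, point const& c)
{
    return Kernel::orientation(a, b, c) > 0.0;
}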
participants (16)

- Arash Partow
- Barend Gehrels
- Bruno Lalande
- Demian Nave
- Eric Niebler
- Fernando Cacciola
- Hartmut Kaiser
- Hervé Brönnimann
- Joel de Guzman
- John Femiani
- Max Motovilov
- Michael Fawcett
- Noah Stein
- Phil Endecott
- Steve M. Robbins
- Steven Watanabe