[GTL] redesign checked into sandbox

I checked a gtl project into the sandbox so that I can keep postings to the list brief while allowing people to see the up-to-date and correctly compiling state of the re-design and re-implementation of the library interfaces. The main header file is gtl.h, which includes the others and defines free functions that may rely on any of the geometry concepts when instantiating the template function. A recent change to the design is that, rather than using SFINAE to overload a free function on return type, I now call a meta-function that looks up the related concept and provides the correct return type based upon the args given (see get() in gtl.h). This eliminates the need to register the return type in a specialized traits class, since the concept can provide it for the given type generically.

Log: initial checkin
Added:
  sandbox/gtl/
  sandbox/gtl/boost/
  sandbox/gtl/gtl/
  sandbox/gtl/gtl/geometry_traits.h (contents, props changed)
  sandbox/gtl/gtl/gtl.h (contents, props changed)
  sandbox/gtl/gtl/interval_concept.h (contents, props changed)
  sandbox/gtl/gtl/interval_data.h (contents, props changed)
  sandbox/gtl/gtl/interval_traits.h (contents, props changed)
  sandbox/gtl/gtl/isotropy.h (contents, props changed)
  sandbox/gtl/gtl/point_3d_concept.h (contents, props changed)
  sandbox/gtl/gtl/point_3d_data.h (contents, props changed)
  sandbox/gtl/gtl/point_3d_traits.h (contents, props changed)
  sandbox/gtl/gtl/point_concept.h (contents, props changed)
  sandbox/gtl/gtl/point_data.h (contents, props changed)
  sandbox/gtl/gtl/point_traits.h (contents, props changed)
  sandbox/gtl/gtl/polygon_concept.h (contents, props changed)
  sandbox/gtl/gtl/polygon_data.h (contents, props changed)
  sandbox/gtl/gtl/polygon_traits.h (contents, props changed)
  sandbox/gtl/gtl/post_concept_definitions.h (contents, props changed)
  sandbox/gtl/gtl/post_geometry_traits_definitions.h (contents, props changed)
  sandbox/gtl/gtl/rectangle_concept.h (contents, props changed)
  sandbox/gtl/gtl/rectangle_data.h (contents, props changed)
  sandbox/gtl/gtl/rectangle_traits.h (contents, props changed)

The code in the sandbox is intended to conform to boost standards. Feel free to point out instances where I may have failed to do so. Thanks, Luke
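A minimal sketch of the metafunction-based return-type lookup described above, with hypothetical names standing in for the actual sandbox interface (the real code is in gtl.h):

// Hypothetical sketch: a metafunction maps each user type to its concept,
// and the concept supplies the return type of get(), so nothing has to be
// registered in a separate return-type traits class.
struct point_concept;

template <typename T> struct point_traits;        // specialized per user type

struct my_point { int x, y; };                    // a stand-in user type

template <> struct point_traits<my_point> {
    typedef int coordinate_type;
    static coordinate_type get(const my_point& p, int axis)
    { return axis ? p.y : p.x; }
};

template <typename T> struct geometry_concept;    // user type -> concept
template <> struct geometry_concept<my_point> { typedef point_concept type; };

struct point_concept {
    // the concept computes the return type for any model T
    template <typename T> struct coordinate_type {
        typedef typename point_traits<T>::coordinate_type type;
    };
};

// the free function's return type is deduced through the looked-up concept
template <typename T>
typename geometry_concept<T>::type::template coordinate_type<T>::type
get(const T& t, int axis) { return point_traits<T>::get(t, axis); }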

Simonson, Lucanus J wrote:
...
I am still a bit unsure about the 'concept' word that is being thrown around. When I hear it I immediately think of BCCL and things like this video by Douglas Gregor (http://video.google.com/videoplay?docid=-1790714981047186825&hl=en). The reason I mention this is that the point_concept.h file has no constraints, so it can't be used with BCCL, as far as I know. I am thinking of concepts as described here: http://www.boost.org/community/generic_programming.html#concept and in the BCCL http://www.boost.org/doc/libs/1_35_0/libs/concept_check/creating_concepts.htm or even just documentation for concepts in SGI format http://www.sgi.com/tech/stl/table_of_contents.html I don't know if I have a narrow opinion here, but this is what I am interested in even more than algorithms or actual models of 'PointConcept' or whatever. -- John

John, First, let me thank you for taking a look at the code in the sandbox. I have made great strides so far with the community's help. Your input is greatly appreciated. John wrote:
I think your interest is the same as that of the rest of the boost community. Whatever the library does, it has to do it the right way to be considered for acceptance into boost. I'm not entirely sure I understand what you mean by point_concept.h having no constraints on the point data type. It references a point_traits<T>::coordinate_type. Any data type that does not provide a typedef for coordinate_type, or specialize point_traits<> with a typedef for coordinate_type, will not compile with the point concept. Similarly, there are the point_traits<T>::get() and set() functions, which must either work with the given data type or be specialized to work with it. Are these not constraints on T? I suppose I could have named point_traits as point_concept_map instead to be a little more explicit about it. Would that name change make things more clear? I am restricting myself from making explicit requirements of the user data type (except that it have a default constructor) and instead use the concept mapping traits struct for all interaction with the data type. In this way, all data types defined before the library was written can still be adapted to work with it without being modified to conform to its expectations. You do raise a valid point: there is currently no concept checking included in the design and implementation I checked into the sandbox. Is it valid to call something a concept if there is no concept checking? I think so, but I could be taking too broad a view. Once the design is finalized and the implementation is mostly complete I was planning on adding some concept checking, but in fact the only benefit of such checking is earlier and less verbose compiler errors when incorrectly using a template. This is of little benefit to me while authoring the library, and only marginally beneficial to the library user. The error they will get if they fail to specialize the traits for their legacy point type is that their point doesn't provide a get() function, which they can fix by providing one. I also plan to translate the library to use the new C++0x language features once they are available, so it is my intention to use concepts eventually, even if "concept" taken narrowly implies compile-time concept checking and I am not technically doing that now. Thanks, Luke
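As a concrete illustration of the adaptation path described above (signatures are guessed from the description, not copied from the sandbox): a pre-existing point type can be made to work with the library purely by specializing the traits, without touching the type itself.

// purely illustrative; the real interface lives in point_traits.h
struct legacy_point { double coords[2]; };   // hypothetical pre-existing type

template <typename T>
struct point_traits {                        // primary template: intrusive default
    typedef typename T::coordinate_type coordinate_type;
};

template <>
struct point_traits<legacy_point> {          // non-intrusive specialization
    typedef double coordinate_type;
    static coordinate_type get(const legacy_point& p, int axis)
    { return p.coords[axis]; }
    static void set(legacy_point& p, int axis, coordinate_type value)
    { p.coords[axis] = value; }
};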

Hi Luke,
Yes it is. But in my opinion, showing a concept class has the advantage of clearly communicating the intention to the community in order to validate / invalidate the design. Moreover, since two geometry-related libraries are currently being developed for Boost (even if they don't do the same things), I think comparing the concepts used would allow us to see exactly what the convergences / divergences are, and see what can or cannot be done to make them as close as possible. A certain level of consistency between those libraries would surely be appreciated. Barend and I have written the following point concept (it's obviously open to criticism):

template <class X>
struct Point
{
    typedef typename point_traits<X>::coordinate_type ctype;
    enum { ccount = point_traits<X>::coordinate_count };

    template <class P, int I, int Count>
    struct dimension_checker
    {
        static void check()
        {
            const P* point;
            ctype coord = point_traits<X>::template get<I>(*point);
            P* point2;
            point_traits<X>::template get<I>(*point2) = coord;
            dimension_checker<P, I+1, Count>::check();
        }
    };

    template <class P, int Count>
    struct dimension_checker<P, Count, Count>
    {
        static void check() {}
    };

    BOOST_CONCEPT_USAGE(Point)
    {
        dimension_checker<X, 0, ccount>::check();
    }
};

It actually forwards everything to the point_traits class in order to check that everything needed is accessible through point_traits for the point type being checked. Since you rely on point_traits too, I suppose you would have the same approach? If you don't want to do that right now because you're afraid of a profusion of concept checking macros in your code, I think that putting a BOOST_CONCEPT_ASSERT in the point_traits class is sufficient with this approach, since any access to a point should be performed only through this class. This way, you don't have to rewrite the check in every algorithm, and the only job you have to do is writing the concept class. However, as I'm not yet used to the BCCL, I may be wrong...? Regards Bruno

Bruno, I'm happy to hear from you again! Luke wrote:
Bruno wrote:
In fact I am thinking much more along these lines than you may realize. Given my proposed operator syntax for so-called Boolean operations I could fold the arbitrary-angle geometry provided by Barend's library into mine (or vice versa):

  manhattan + manhattan => manhattan result type, by way of the manhattan algorithm
  manhattan + 45-degree => 45-degree result type, by way of the 45-degree algorithm
  manhattan/45-degree + arbitrary angle => arbitrary-angle result, by way of the arbitrary-angle algorithm

with all the concept checking and tag dispatching taken care of by the API to select the correct algorithms and return types for the given input arguments and operations (a sketch of one way such a promotion metafunction could look appears at the end of this message). Whether that arbitrary-angle algorithm comes from Barend's library or the geos library (C++ port of the famous JTS java geometry library) seems like an open question to me. Geos is attractive as an alternative to cgal for those of us who cannot stomach cgal's QPL license. geos' LGPL is more restrictive than the boost license, but I think we could provide a generic framework for plugging in LGPL'd algorithms to my API at link time to integrate the two. It could then plug in cgal algorithms for that matter, provided that the person doing the linking was allowed to use them (paid for the privilege, or is academic, or whatever). This whole issue becomes complex since there are many libraries that do similar things to Barend's. The last open source library that did manhattan geometry operations was the fang library written in 1980 in C. It is next to worthless now that we've come so far with algorithms and data structures (and C++). Bruno wrote:
Yes, my own is very similar to what you show, except that I think n-dimensional point classes are a pointless exercise, if you'll excuse the bad pun. What exactly is the benefit of making the order of the point data type generic? Are we saving typing in the library by merging the 2D and 3D point concepts? Do we pay a penalty for that? In your case you may not be paying a penalty because you are using a compile time integer to index the point. You also provide compile time checking to make sure the index is valid, I see, which is good. In my case, I have the rich language of isotropy that performs runtime indexing. There are separate index types for 2D and 3D geometry, and type safety prevents an invalid 3D index from being used with a 2D type. The language of isotropy provides a different kind of abstraction than what you are doing in abstracting away the order of the point. The question that leaves us with is which abstraction is more valuable to the user, since they are mutually exclusive. Isotropy is quite valuable to the user, and we get very good results (in terms of code quality and productivity) using it. I doubt the user will parameterize the order of their own code, so the benefit of doing so in your library will be confined to the internal implementation of the library itself, unless I am mistaken. Bruno wrote:
I don't know. I was deferring adding the checks until the design is finished, since they would change as the design changes. Also, my point_concept is a little more overloaded than yours might be. I use the concept type as a tag for tag dispatching purposes. I also use it as a sort-of namespace for holding all the functions related to the point concept. I pass it as a template parameter to select which concept to apply, and I return it as the result of a metafunction call to deduce the concept type for a given user type. It is therefore much more than a traditional "minimal" concept such as in the stl. Moreover, I want to allow partial modeling of the concept: for instance, if a data type doesn't provide a default constructor, or doesn't even allow modification of its data through the traits, it should still be allowed to work as a read-only geometry type when those services are not needed. It is, therefore, improper to check that the data type conforms to the full requirements of the concept when, in fact, that is not required for the specific concept function being instantiated. For this the compiler errors themselves provide the right level of protection, and the concept check would simply provide a more concise error (which doesn't benefit me since I have no problem reading the verbose template errors). I would add it only at the end for the benefit of the library user. Thanks, Luke
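For illustration, here is one hypothetical way the result-type promotion sketched earlier in this message (manhattan + 45-degree => 45-degree) could be expressed; all names are invented, not the sandbox code:

// rank-based promotion: the result of a Boolean op is the "coarsest"
// of the two argument geometries (manhattan < 45-degree < arbitrary)
struct manhattan_tag  { enum { rank = 0 }; };
struct fortyfive_tag  { enum { rank = 1 }; };
struct arbitrary_tag  { enum { rank = 2 }; };

template <bool B, typename T, typename F> struct if_c { typedef T type; };
template <typename T, typename F> struct if_c<false, T, F> { typedef F type; };

template <typename T1, typename T2>
struct boolean_result
    : if_c<(T1::rank >= T2::rank), T1, T2> {};

// boolean_result<manhattan_tag, fortyfive_tag>::type is fortyfive_tag,
// so manhattan + 45-degree would dispatch to the 45-degree algorithm.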

Oh, maybe my point of view was too narrow then :) In _this_ context I thought checking was a requirement; otherwise why make a class? Without checking it is just documentation, right?
So what I am interested in is making those assumptions explicit, and providing the tools to check that my _own_ private code is going to work with whatever makes it into boost. That is why I say I care more about concepts (meaning checkable concepts & archetypes) than I do about the algorithms in a boost geometry library.
My _personal_ opinion is that I prefer type generators or metafunctions to monolithic traits classes. When I looked at the CGAL kernel types (although it's been a while since I did) I realized how many typedefs etc. can pile up in a traits class. Frankly I think that makes traits harder to understand or work with. In the case of CGAL I did not know what a lot of the stuff meant because it dealt with aspects of geometry I was not dealing with at the time. Since I think any boost geometry library should be CGAL compatible, I would vote that the boost approach should be to decompose point traits into a bunch of metafunctions in their own tiny header files. You only include the header files with the metafunctions that apply to your task. For my two cents I recommend:

1. Prefer metafunctions in the point concept's requirements over traits classes, or I'm afraid the traits will get huge.
2. Put your concept mapping functions in a namespace rather than a class (do it like Fusion does). Namespaces seem to provide a lot more flexibility, and the syntax is a LOT cleaner.
3. Provide separate headers for each metafunction. This allows users to focus on the metafunctions that matter.
4. Make all requirements explicit in the concept class. This way I can look at the concept code and see what they are.
But what if somebody specializes your traits class? Then don't you lose the concept checks? I think explicit is better, so I prefer to place the concept check at the top of the algorithm that needs it. Better yet, apparently BCCL now has a BOOST_CONCEPT_REQUIRES macro (I haven't used it). A concept check is a kind of documentation as well :) -- John

John wrote:
Um, not in my case. I am sort of stretching the concept class to serve additional roles, for instance, making it also the scope within which functions related to that concept are defined. When the free function of the library is called on a user type the related concept is looked up by metafunction and redirects the call to the function defined within that concept class. Also, I'm making no requirements on the user type except that it provide a default constructor and have an associated specialization of the traits class to provide accessor functions. I can check these requirements with concept checks on each function in the concept class, of course. John wrote:
If your own point type has a default constructor and an API sufficient to get and set the x and y values in any way you like, it will work with my library. You would simply specialize the point_traits for your type and provide a typedef for your coordinate_type and two functions, one to get a coordinate and one to set a coordinate. John wrote:
If the traits are huge, the abstraction is being made in the wrong place. A good abstraction is a min-cut in a graph of dependencies. A huge number of metafunctions seems equally bad to me. Instead the goal should be to minimize the number of traits/metafunctions the user is required to understand/specialize. That said, I was already leaning in the direction of making the coordinate_type typedef a metafunction and moving it out of the traits.
I am considering the implications of your suggestion. It could be that it can be made to work, but I'm not clear on how overloading functions in a namespace is preferable to specializing a struct. It seems that it is more prone to compiler errors related to overloading ambiguity caused by bad user code and unfortunate combinations of user code. With the traits there is no chance for ambiguity.
3. Provide separate headers for each metafunction. This allows users to focus on the metafunctions that matter.
I am doing this already, though there is only one metafunction that matters right now: the one that maps the user type to its related concept.
4. Make all requirements explicit in the concept class. This way I can look at the concept code and see what they are.
It isn't really my intention for the user to look at the concept code. I don't need the user to fully model the concept such that all functions can be called on it. If the user only uses the read-only functions that don't modify or construct their data type, they can partially model the concept if they have to. (Some programming styles require dynamic allocation of objects and prohibit default construction; in other cases the user may never want the library to modify an object of a given type.) Thanks, Luke

Well, I hope you do go with metafunctions, I just think they are more flexible. CGAL has been around for a long time, and it is a good library that has a more restrictive license situation than boost. I personally don't use it (it is forbidden to me) but my colleagues love it, and I think any boost geometry library has to work well with it. It is old, but quite generic, and it is an extremely active project. It is also peer reviewed (by editors), so the stuff that is in it is vetted and important for real world problems. http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Kernel_23_ref/Concept_Kernel.html#Cross_link_anchor_0 Also, traits bundle together a lot of values that don't _need_ to be coupled. I can let somebody else make this point: http://www.ubookcase.com/book/Addison.Wesley/CPP.Template.Metaprogramming/0321227255/ch02lev1sec2.html "The traits templates in the standard library all follow the "multiple return values" model. We refer to this kind of traits template as a "blob," because it's as though a handful of separate and loosely related metafunctions were mashed together into a single unit. We will avoid this idiom at all costs, because it creates major problems." This is a good book BTW.

Sorry for the long links, I'll get used to newslist posting eventually...
That is: http://tinyurl.com/5ulzdh
That is: http://tinyurl.com/5jnfph -- John

I know the principle of avoiding blob traits, as explained in Abrahams and Gurtovoy's book. But I think it doesn't apply here, just because the traits class in question is *way* short: a type, a value, an accessor. And most algorithms need all of them. Does it really make sense to scatter them into several metafunctions?
I agree with Luke on this point; I'm afraid of the nightmares that overloading ambiguities could bring to the user. However, I will consider doing a few tests to see the actual consequences of what you propose.
Same remark as above: one metafunction and one separate header for each of the 3 properties needed; I wonder if it's not a bit overkill...
4. Make all requirements explicit in the concept class. This way I can look at the concept code and see what they are.
Aren't the requirements explicit enough in the concept class I've shown? If not, could you be more precise about what you'd like to see more clearly specified? Bruno

Bruno wrote:
Well, I wrote / suggested that because I have in mind a very generic set of concepts associated with points that would be compatible with libraries like CGAL. I am worried that the traits will explode because there are so many uses for a point class that have subtly different requirements. The number of associated types etc. in the CGAL Kernel seems to indicate that in a sturdy geometry library that might be the case. E.g., it looks a little bit like a point concept will require a 'space' concept that will end up involving tons of associated types for compatible geometric primitives (as in the CGAL Kernel).
I am not talking about requiring user code to depend on ADL; I mean make a special 'adapted' namespace like fusion does. I foresee fewer problems with ::point::adapted::at_c<0>(point); than I do with point_traits<MyPoint>::template at_c<0>(point). This involves 2 parts:

1. _If_ the traits get to be huge, it is possible to split namespaces across header files.
2. The annoying 'template' keyword can be a source of problems, since it has been my experience that some compilers require it and others don't. I am also concerned about the 'typename' keyword (for the same reason).

Some traits will also probably apply to multiple concepts, and since you can't partially implement a traits class, you will have to mix them by inheritance (I think) if you want to share some traits. Then you end up with an issue about dependent names inherited from template base classes that happens on g++ and not microsoft compilers.
I have only seen what look like the very beginnings of the development of these concepts, and I made those comments anticipating an explosion of traits.
I was worried because somebody was talking about using the traits class to add additional constraints. In your posted code, I don't see the actual definition of a traits class (I see a 'dimension_checker', and I see a point_traits template being used...) -- John

OK, I agree with you that if the number of traits needed grows, scattering them will be much better than having a blob traits class. I don't know much about the CGAL kernel as a whole; I will take a closer look.
This is what I wanted to do to get rid of the template keyword and take advantage of the template parameter deduction on the point type. I wanted to merely implement it by forwarding things to the point_traits written by the user, but maybe this approach will be problematic (not tried yet). If it is, your approach will be a better option, indeed. This request had already been made in another thread so it will be done anyway.
Yep, precisely because the concept literally says "there must exist a specialization of point_traits with X such that this code is valid". It's finally as much a "point traits concept" as a "point concept". Maybe it would be better to have X being the specialized point_traits? It would require an additional point_type typedef in the point traits and would give something like:

template <class X> // X is a point_traits
struct PointTraits
{
    typedef typename X::coordinate_type ctype;
    typedef typename X::point_type point_type;
    enum { ccount = X::coordinate_count };

    template <int I, int Count>
    struct dimension_checker
    {
        static void check()
        {
            const point_type* point;
            ctype coord = X::template get<I>(*point);
            point_type* point2;
            X::template get<I>(*point2) = coord;
            dimension_checker<I+1, Count>::check();
        }
    };

    template <int Count>
    struct dimension_checker<Count, Count>
    {
        static void check() {}
    };

    BOOST_CONCEPT_USAGE(PointTraits)
    {
        dimension_checker<0, ccount>::check();
    }
};

Then, requirements inside algorithms would be done this way:

template <class P>
BOOST_CONCEPT_REQUIRES(
    ((PointTraits<point_traits<P> >)),
(void))
function_requiring_a_point(const P& p)
{}

It doesn't sound clearer to me though... If it's not what you were expecting, could you propose another concept definition? Bruno

Bruno wrote: ...
Well, if we agree that traits at least have the _potential_ to cause an issue, then why make the approach _more_ traits-based?
It doesn't sound clearer to me though... If it's not what you were expecting, could you propose another concept definition?
I like most of what you have, and I am trying to get you guys to do the thinking so I don't have to ;) I'll work something up and send a link when I get a chance, but I think that it will take some work and testing. -- John

Luke wrote:
By order you mean the number of coordinates, right? This is actually really important to me. Coordinates in 2D and 3D are important, but 4D coordinates are too (e.g., homogeneous or projective coordinates). Quaternions are 4D (well, 4 coordinates), and Plücker coordinates have six coordinates. Those are cases I came up with off the top of my head, but I know there are more examples I did not think of yet. There are very valid reasons that the size of a coordinate set should be specified at compile time for cases other than 2 or 3.
Huh? I think 'Isotropic' may need some clarification; I thought you meant just that distances in each coordinate's direction have the same unit of measurement. Your comments make me wonder if there is something more...?
I often try to ... if I correctly understand what "isotropy" and "order" mean here. -- John

Luke wrote:
John wrote:
By order you mean the number of coordinates right?
Right. John wrote:
You would not be using my library with these types, or if you did, you would need to project it into a plane to provide a 2D view of it, or similarly reduce it to 3D. In fact, my library is primarily concerned with planar geometry, so it might be ill suited to your math. If you want a 4D quaternion, wouldn't you just define your own class for that instead of using an n-dimensional hyper point that doesn't encapsulate any of the semantics of a quaternion except that it has 4 coordinate values? Better still, you could create your own generic library for these geometric concepts? Luke wrote:
John wrote:
I do, of course, mean that distances in each coordinate's direction have the same unit of measurement, but I mean more than that. Because the geometry is symmetric, user code can be refactored to be parameterized on abstract concepts such as horizontal/vertical orientation, positive/negative direction on the number line, up, down, left, right, positive X, negative Y directions, etc. In this way, poor quality code full of flow control used to call the differently named functions for accessing the x, y and z values of a data type can instead be parameterized by the runtime orientation/direction value. This is why compile-time accessors (implied by a generic order for the point) are mutually exclusive with the isotropic style: they force flow control to choose between symmetric behaviors. Thanks, Luke
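A tiny self-contained sketch of what the isotropic style buys (simplified and with invented names; the real vocabulary lives in isotropy.h and is richer than this):

enum orientation_2d_enum { HORIZONTAL = 0, VERTICAL = 1 };

class orientation_2d {
    int val_;
public:
    orientation_2d(orientation_2d_enum v) : val_(v) {}
    orientation_2d& turn_90() { val_ ^= 1; return *this; } // flip H <-> V
    int to_int() const { return val_; }
};

struct pt {
    int coords[2];
    int get(orientation_2d o) const { return coords[o.to_int()]; }
};

// one function covers both axes; without isotropy this tends to become
// "if (horizontal) use x else use y", duplicated throughout a code base
int span(const pt& lo, const pt& hi, orientation_2d orient) {
    return hi.get(orient) - lo.get(orient);
}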

Well then maybe I should back out of this discussion. I am interested in a generic geometry library that will provide me with a framework to write algorithms that work with any (or many) external open source, commercial, or legacy geometry libraries. If that is not an aim of GTL then I apologize for the noise :-) -- John

Hi Luke, I have followed the GTL thread, and I think that you want to preserve your initial scope as much as possible, don't you? A lot of people are expecting much more of a Geometry Template Library than you are ready to provide, so why not state the intent/scope of the library clearly and give it a more adequate name, something like Isotropic 2D Geometry Template Library, or I2DGTL if you prefer, or whatever defines your library's intent more precisely? Nobody can dispute the intent of your library, but once it is stated clearly maybe there will be fewer interested people. I think that this would limit eternal discussions that do not produce much result. What do you think? Best regards Vicente _____________________ Vicente Juan Botet Escriba ----- Original Message ----- From: "John Femiani" <JOHN.FEMIANI@asu.edu> To: <boost@lists.boost.org> Sent: Saturday, May 10, 2008 9:10 AM Subject: Re: [boost] [GTL] redesign checked into sandbox

John wrote:
That is the aim. I don't want you to back out of the discussion. Instead of an n-d point concept, which provides you relatively little, why not extend the library to have explicit quaternion and homogeneous point concepts and algorithms specific to such geometry? Wouldn't that serve you better? It isn't my aim to provide those, because they are not used in my domain, but I do want to provide a framework that can be extended to include geometry that I don't personally need. Another example of compatibility is graph representations. I have an algorithm that computes the connectivity graph of polygons. Right now it models the graph it populates in a fairly rudimentary and restrictive way. It would be better to make it compatible with boost graph's graph concept (if not use it directly). I am looking at these compatibility issues and I want people to bring them to my attention so I don't overlook something just because it isn't relevant to my own domain. Thanks, Luke

Luke wrote:
So in my imagination, CoordinatesConcept and its refinements are what we are talking about. There should be related concepts of PointConcept, VectorConcept, LineConcept, OrientationConcept, RayConcept, IntervalConcept, SegmentConcept, and the _real_ killers are probably GeometryConcept and/or CoordinateSystemConcept. Most of the discussion here has been about coordinate access etc., so here is a rough (uncompiled) sketch of how I think a CoordinatesConcept might look. I am under the gun at work, so if you guys tear into this I may be slow to respond, but if this approach is appreciated I will eventually put it into an svn repo and actually try to compile it :)
----------------------------------------------------------------
//coordinates_concept.hpp....
namespace geometry {

template<class Coord>
struct Coordinates
{
    Coord coord;
    typedef typename scalar_type<Coord>::type scalar;

    //// supporting checks I need to do...
    //BOOST_CONCEPT_ASSERT((Addable<scalar, scalar>));
    //BOOST_CONCEPT_ASSERT((Multipliable<scalar, scalar>));
    //BOOST_CONCEPT_ASSERT((Subtractable<scalar, scalar>));
    //BOOST_CONCEPT_ASSERT((Dividable<scalar, scalar>));

    BOOST_CONCEPT_USAGE(Coordinates)
    {
        bool bval;
        bval = ::geometry::is_runtime_indexable<Coord>();
        bval = ::geometry::is_runtime_indexable<Coord>::value;
        size_t dim;
        dim = ::geometry::dimension<Coord>();
        dim = ::geometry::dimension<Coord>::value;
        static_at_checker< geometry::dimension<Coord> >::apply();
    }

    template<class Dim>
    struct static_at_checker
    {
        Coord coord;
        typedef void result_type;
        typedef typename mpl::prior<Dim>::type static_index;
        typedef coordinate_accessor<Coord, static_index> accessor_type;
        typedef typename accessor_type::result_type raw_type;

        BOOST_CONCEPT_ASSERT((Convertible<raw_type, scalar>));
        BOOST_CONCEPT_ASSERT((UnaryFunction<accessor_type>));
        //BOOST_CONCEPT_ASSERT((Addable<scalar, raw_type>));
        //BOOST_CONCEPT_ASSERT((Multipliable<scalar, raw_type>));
        //BOOST_CONCEPT_ASSERT((Subtractable<scalar, raw_type>));
        //BOOST_CONCEPT_ASSERT((Dividable<scalar, raw_type>));
        //....

        static void apply()
        {
            raw_type rt = at(coord, static_index());
            scalar st = at(coord, static_index());
            raw_type rt2 = at<static_index>(coord);
            scalar st2 = at<static_index>(coord);
            static_at_checker<static_index>::apply();
        }
    };

    template<>
    struct static_at_checker< mpl::int_<0> >
    {
        static void apply() {}
    };
};

} //geometry
--------------------------------------------------------------
Actually I can already see some things to do better in there, but that is _close_ to how I would do it. There would probably be another CoordinateArray concept.
In order to adapt I would do something like this perhaps:
--------------------------------------------------------------
namespace geometry {

// named axes
namespace axis {
    struct X : mpl::int_<0> {};
    struct Y : mpl::int_<1> {};
    struct Z : mpl::int_<2> {};
    struct W : mpl::int_<3> {};
}

// Namespace for nonintrusive customization, not to be called directly
namespace adapted {
    template<class Coord>
    struct is_runtime_indexable : Coord::is_runtime_indexable {};

    template<class Coord>
    struct dimension : Coord::dimension {};

    template<class Coord>
    struct scalar_type : Coord::scalar_type {};

    template<class Coord, class Index>
    struct coordinate_accessor : Coord::template coordinate_accessor<Index> {};
}

// Public interface
template<class Coord>
struct is_runtime_indexable : geometry::adapted::is_runtime_indexable<Coord> {};

template<class Coord>
struct dimension : geometry::adapted::dimension<Coord> {};

template<class Coord>
struct scalar_type : geometry::adapted::scalar_type<Coord> {};

// Static or dynamic indexing
template<class Index, class Coord>
typename adapted::coordinate_accessor<Coord, Index>::result_type
at(Coord& coord, Index const& index)
{
    return adapted::coordinate_accessor<Coord, Index>::apply(coord, index);
}

// Static indexing
template<class Index, class Coord>
typename enable_if_c<
    (Index::value < dimension<Coord>::value),
    typename adapted::coordinate_accessor<Coord, Index>::result_type
>::type
at(Coord& coord)
{
    return adapted::coordinate_accessor<Coord, Index>::apply(coord);
}

} //geometry
------------------------------------------------------------
I would also add some other functions (at_c, size, etc) but they would be implemented in terms of what is here. I think it is a lot like Fusion, but with a requirement that the element types have to be convertible to a common scalar type that supports the right operations (+-*/=). And then there is the metafunction to determine whether it is runtime indexable or not. Since ::geometry::adapted is a namespace, it is easy to add adapted syntax for other concepts (like Fusion does). Since the syntax uses free functions in the ::geometry namespace, you can include files with only the functions & metafunctions you need. (There will be more files in the future; that is bound to happen, so we should plan on it.) Requiring adapted syntax in the ::geometry::adapted namespace is intended to help avoid ADL issues etc. MPL integral constants are used for coordinate access because they are convertible to integers, so static indexing should always work whenever runtime indexing would. To be complete there should be header files that adapt Fusion sequences, tuples, c-arrays and boost arrays, and CGAL points. -- John

John wrote:
Hmmm, interesting. I've been thinking about a coordinate_traits class similar to the following to inform the library of various related types:

template <typename T>
struct coordinate_traits {};

template <>
struct coordinate_traits<int>
{
    typedef int coordinate_type;
    typedef unsigned int manhattan_distance_type;
    typedef double euclidean_distance_type;
    typedef unsigned long long area_type;
    ...
};

or alternately metafunctions for the same:

template <typename T> struct area_type {}; // will not compile unless specialized
template <> struct area_type<int> { typedef unsigned long long type; };
template <> struct area_type<double> { typedef double type; };

since I need to infer them somehow from the basic coordinate type. That is working under the assumption that all coordinates have the same type, whereas your proposal looks like you want coordinates to be convertible to a scalar type, but allow each Dim to define a different raw_type. What exactly is the value of having different raw types? What I'm doing with my library is basically saying the scalar type is the coordinate type, and you can use whatever data type you want to store the values in your object, but it should convert to and from the scalar type to be processed by the library. Is that unreasonable? Thanks, Luke

As you know, I prefer the metafunctions :)
I want the flexibility to allow 'at' to return a reference. The references would then be to the underlying type's internal members, which could be of any type. I have applications in mind where coordinates may be references to unsigned chars, so the type of the reference can not be the same type used for, say, an inner product. I required only a 'scalar' type because I figure that is the minimal need. Coordinate elements, regardless of their raw types, need to be mixed, added, divided, etc. Presumably things like euclidean norms are callable objects, and something like result_of<euclidean_norm(Coord)>::type can be used to get or control the result. In my concept a coordinate_accessor is a UnaryFunction object, and I should make sure ::boost::result_of works, huh..? Also maybe there need to be Metric or Norm concepts too, eh? -- John

John wrote:
Why? For a small type like a single coordinate value, return by value will be efficient. Do you want the convenience of being able to use it to get a modifiable reference? What if the data type doesn't have an explicit data member, but computes it on demand and updates the values it is computed from when it is modified through the accessor?
You want such functions to be functor objects? Why not just let it be a function and create a functor from it when (if) you need one?
In my concept a coordinate_accessor is a UnaryFunction object, and I should make sure ::boost::result_of works, huh..?
I guess it all depends on how far we want to go and how complicated we are willing to let it get. I'm driving at simplicity, elegance and minimal dependencies. I want user code to be as dead simple and straightforward to write as possible. I like the current idea of providing a library of free functions that infer types from their arguments using metafunctions and call the appropriate algorithm associated with the concept. Making functions into functors leads to a lot of extra boilerplate in both the library and the user code. I'd need to hear a good rationale for that.
Also maybe there need to be Metric or Norm concepts too, eh?
I guess that depends on whether they are functions or objects. We would only want concepts for template parameters that may be supplied with user types. To that end, we need to be intentional about what is useful to be made generic rather than just making everything generic because we can. For me the concepts in the library serve two purposes. 1. They allow user types to interface with my heavy algorithms, easing and improving the quality of integration of those capabilities into an application. 2. They provide the user with a set of useful behaviors for geometry objects that are superior to what they are likely to have on their own types. By superior I mean completeness, consistency and ease of use. The only things that need to be made concepts, in my view, are geometry data types, not algorithms or behaviors. Thanks, Luke

Luke wrote:
I was thinking along the lines of std::vector::at, or Fusion. Certainly I was not worrying about performance; I just don't want to impose unnecessary constraints, and I want to stick close to the semantics of Fusion or MPL sequences. In the case of a computed coordinate, the raw type would not be an lvalue and the coordinate would be immutable. That's fine, but we will also need to modify some points. If you want to use proxies, then I think the way to make sure proxies are allowed is to use a proxy as a coordinate type in the archetype. Anyhow I am not dead set on that point -- it just seems like the concept should impose minimum constraints on the coordinate type, and allowing heterogeneous coordinates seems less restrictive.
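A minimal sketch of the kind of proxy being discussed, for a hypothetical point type that stores polar coordinates but exposes a mutable cartesian x through its accessor (reads compute the value on demand; writes update the underlying representation):

#include <cmath>

struct polar_point { double r, theta; };   // hypothetical stored representation

struct x_proxy {
    polar_point& p;
    explicit x_proxy(polar_point& pt) : p(pt) {}
    operator double() const { return p.r * std::cos(p.theta); }  // read
    x_proxy& operator=(double x) {                               // write
        double y = p.r * std::sin(p.theta);
        p.r = std::sqrt(x * x + y * y);
        p.theta = std::atan2(y, x);
        return *this;
    }
};

An archetype whose coordinate type behaves like x_proxy (convertible to the scalar, assignable from the scalar, but not a true reference) would keep the concept from over-constraining accessors to return plain references.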
Functions are callable... and result_of should work on a function unless it is overloaded, I think (I have to check). What I am saying is that I think I would prefer result_of instead of a new metafunction for every operation.
http://www.boost.org/doc/libs/1_35_0/libs/utility/utility.htm "The implementation permits the type F to be a function pointer, function reference, member function pointer, or class type." I have been using result_of and I love its simplicity. IMO a dependency on libraries that will become part of the standard is tolerable. We could also follow the fusion example again and have a result_of namespace, and in there have metafunctions named after the corresponding regular functions.
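A small example of the protocol referred to here: for a class-type function object, exposing a result_type (or a result<> member template, for overloaded cases) is all boost::result_of needs. The accessor and point type are hypothetical:

#include <boost/utility/result_of.hpp>

struct pt { double x, y; };

struct get_x {
    typedef double result_type;            // TR1 result_of protocol
    double operator()(const pt& p) const { return p.x; }
};

// callers recover the return type generically, without a bespoke metafunction
typedef boost::result_of<get_x(const pt&)>::type x_type;   // double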
Developers are users too! Hopefully this library will inspire a lot of new extensions & contributions over time, right? Plus there may be some benefit to passing a metric around. For instance, we might want to have a way to find out if it is ... isotropic ....? Anyhow I think the thing to do is focus just on coordinates at the moment, and I think that whatever we do should stay very close to the Fusion approach. -- John

Hi John, Although I'm not completely aware of the Boost.Fusion design and the techniques you describe, I think I basically got the point about the direction you wish us to take. We'll study this. An issue I've been facing with the point_traits class makes me think that you could be right with your approach of avoiding blob traits. However, I have a question about that. If I foresee what this will look like in C++0x, I can see something very close to the traits approach. If you take a look at Doug's presentation about concepts (http://tinyurl.com/5gbykj), the new-style for loop he talks about (near 36:00) relies on a For concept that allows it to find the begin and end iterators of a sequence and iterate on it. Here is the concept_map that Doug shows to map native arrays to this concept:

template <typename T, size_t N>
concept_map For<T[N]>
{
    typedef T* iterator;
    T* begin(T array[N]) { return array; }
    T* end(T array[N]) { return array + N; }
}

Structurally, this strongly resembles the point_traits class, for a good reason: I had concept maps in mind when writing it. For example, here is the current specialization of the point_traits class for arrays:

template <class T, int N>
struct point_traits<T[N]>
{
    typedef T coordinate_type;
    enum { coordinate_count = N };

    template <int I>
    static T& get(T p[N]) { return p[I]; }
};
Wouldn't it be better to simply state that a do-it-all concept is bad because it will generate blob traits, and should rather be separated into several smaller concepts resulting in several small traits? Bruno

Bruno Lalande
I am using Fusion in my code, and I like its design. I am no expert at it but I enjoy what I have been exposed to through Fusion.
Frankly, I don't know. My sense is that the pre-C++0x (is it 09?) way of dealing with the absence of concept maps is to express concepts in terms of free functions and metafunctions. It looks like this is how boost libraries like boost Graph or Fusion deal with the issue. Still, I am curious about this approach to faking a concept_map; it looks like it could turn out to be something I like. I just don't want the "point_traits" approach used for adapting (or faking a concept map) to spill over into the concept itself. I continue to think that the concept checking class should require only metafunctions and free functions if we want to support a nonintrusive way to adapt existing point types. I changed the subject line of the thread to match the discussion (we are talking about how to design point concepts, and concept_maps). -- John

John wrote:
My thinking was similar to Bruno's in that I had a concept map in mind. I was also thinking about how the library is used and what is easier for the user. Is it easier for the user to specialize one traits class or several metafunctions? A user is more likely to understand that they need to provide a complete specialization of a traits class than that they need to specialize all relevant meta-functions. This ease of use argument breaks down as we start pouring extra stuff into the traits. This is similar to the kind of OO code bloat that happens when people fail to maintain objects well and allow the scope of an object to grow instead of breaking it down into separate objects. From that standpoint I also agree with Bruno that the solution is to have more, smaller concept_map/traits classes instead of uber-generic, all-encompassing ones that try to cover too much ground and end up confusing users with a bunch of stuff they don't need or understand, because they are only interested in some small portion of what's there. So, we can envision two different tacks to take with this point concept. One is to make one point concept for all types of points (2D, 3D, homogeneous points, 11D simplified string theory points, etc.), or we can have a separate concept for each conceptual type of point and separate traits classes for each. We can have inheritance between concepts; a 3D point concept could inherit from the 2D point concept, for instance. Having free functions and meta functions instead of traits might lead us in the direction of good design, but I don't think it is a prerequisite. Good OO design is probably achievable. Thanks, Luke
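A sketch of the refinement-by-inheritance idea with the BCCL, using hypothetical traits (checking Point3D also runs the inherited Point2D usage body):

#include <boost/concept_check.hpp>
#include <boost/concept/assert.hpp>

template <typename P> struct point_traits;   // assumed, specialized per type

template <typename P>
struct Point2D {
    typedef typename point_traits<P>::coordinate_type coord;
    BOOST_CONCEPT_USAGE(Point2D) {
        coord x = point_traits<P>::get(p, 0);
        coord y = point_traits<P>::get(p, 1);
        (void)x; (void)y;
    }
    P p;
};

template <typename P>
struct Point3D : Point2D<P> {                // refinement of Point2D
    typedef typename point_traits<P>::coordinate_type coord;
    BOOST_CONCEPT_USAGE(Point3D) {
        coord z = point_traits<P>::get(this->p, 2);
        (void)z;
    }
};

struct my_point3 { double c[3]; };           // hypothetical model
template <> struct point_traits<my_point3> {
    typedef double coordinate_type;
    static double get(const my_point3& p, int i) { return p.c[i]; }
};

BOOST_CONCEPT_ASSERT((Point3D<my_point3>));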

Hi,
I'd be inclined to prefer the first solution, because I just don't see any advantage in the second one. Do you have some strong arguments in favor of the second approach?
Yep, I agree. I think we must consider metafunctions and free functions not as a goal or proof of good design but as a more powerful tool that we can use if things get too complicated. Anyway, I'll give it a try to compare the resulting code, on the library side and (most important) on the user side. Regards Bruno

I'm not completely sure what you're talking about exactly, but maybe it is what I had thought about as "mapping coordinates". Let's say I have an algorithm that applies to 2D points and I want to apply it to the 2nd and 3rd coordinates of my 3D point. I have already thought about providing some compile-time mappers to make such things possible. Are you talking about this kind of manipulation? (sorry if I misunderstood) Bruno
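One possible shape for such a mapper, purely as a guess at what is being described (names invented; point_traits assumed to follow the earlier array specialization, with a static get<I>()):

// a 2D view of a 3D point, exposing coordinates I0 and I1 as 0 and 1
template <typename Point3, int I0, int I1>
struct point_view_2d {
    const Point3& p;
    explicit point_view_2d(const Point3& pt) : p(pt) {}
};

template <typename P> struct point_traits;   // primary template, assumed

template <typename Point3, int I0, int I1>
struct point_traits< point_view_2d<Point3, I0, I1> > {
    typedef typename point_traits<Point3>::coordinate_type coordinate_type;
    enum { coordinate_count = 2 };
    template <int I>
    static coordinate_type get(const point_view_2d<Point3, I0, I1>& v) {
        // remap index 0 -> I0 and index 1 -> I1 at compile time
        return point_traits<Point3>::template get<(I == 0 ? I0 : I1)>(v.p);
    }
};

A 2D algorithm could then be applied to the 2nd and 3rd coordinates of a 3D point p as algo(point_view_2d<P3, 1, 2>(p)).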

Hi,
Yep, this is my point of view too. We're not talking about "merging 2D and 3D" but about making generic algorithms. Some very simple algorithms like distance or dot product, or even addition and subtraction, have to be implemented in a dimension-agnostic manner. BTW, that was exactly the purpose of my first submission to Barend, when I extrapolated his pythagoras algorithm to make it work with any number of dimensions. As a potential user, it's my first requirement. Some algorithms only make sense for a precise number of dimensions. I guess those ones can just statically check that the point has the right coordinate_count (or "at least enough" coordinates, I don't know yet). If your library is 2D-oriented, I'm obviously not asking you to put the coordinate_count requirement into your concept since you wouldn't use it anyway. It's always the same principle: the concept represents the minimal set of requirements, so it's OK for me like that. Regards Bruno
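For illustration, a dimension-agnostic dot product in that spirit, recursing on the coordinate index at compile time (point_traits is assumed as in the earlier posts; the helper names are invented):

template <typename P> struct point_traits;   // specialized per point type

template <typename P, int I, int Count>
struct dot_product_impl {
    typedef typename point_traits<P>::coordinate_type coord;
    static coord apply(const P& a, const P& b) {
        return point_traits<P>::template get<I>(a)
             * point_traits<P>::template get<I>(b)
             + dot_product_impl<P, I + 1, Count>::apply(a, b);
    }
};

template <typename P, int Count>
struct dot_product_impl<P, Count, Count> {   // recursion terminator
    typedef typename point_traits<P>::coordinate_type coord;
    static coord apply(const P&, const P&) { return coord(); }
};

template <typename P>
typename point_traits<P>::coordinate_type
dot_product(const P& a, const P& b) {
    return dot_product_impl<P, 0, point_traits<P>::coordinate_count>::apply(a, b);
}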

Sorry for the multiple mails; there have been a lot of things said since yesterday evening and I have trouble synthesizing my answers. This is the last one :-) On Sat, May 10, 2008 at 12:50 AM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
If you face this kind of problem, it means in my opinion that your concept should be separated into several smaller ones. For example, if you want some algorithms to only require read access and some others to only require write access, then you should write a read concept, a write concept, and a read-write concept. I'm not very knowledgeable yet about concepts, but if I have understood well, a concept is meant to be checked as a whole. A type either satisfies it or doesn't. C++ gurus will correct me if I'm wrong, but I think that's how they will work in C++0x. Bruno

Simonson, Lucanus J wrote:
As I recall, with Barend's library if you ask for the intersection of two unit squares (0,0,1,1) and (1,0,2,1) you'll get a line (1,0,1,1), since it considers all points on the perimeter of a polygon to be included in the polygon:

+-+-+
| | |
+-+-+

Luke, what does your library do in this case? I mention this because, as well as the C++ language style issues, which I'm pleased to see are being discussed by smarter people than me, there are no doubt very many geometry-related design choices here. If your library were stand-alone, your choices wouldn't matter much; but if as you suggest it can integrate with other (future) libraries, then consistent choices are needed. Phil.

Simonson, Lucanus J wrote:
Phil wrote:
...consistent choices are needed.
I've been waiting for this to come up. It is the old open/closed semantics issue. In the specification for what I implemented, the boundary is neither "in" nor "out", but somewhere in between. My implementation is consistent with the behavior of the EDA vendor tools. My choices did matter, and in fact I didn't get to choose; the choice was made decades ago. If performed as a Boolean operation on polygons that happened to be rectangles, an intersection that produces a zero area polygon is a degeneracy and will not be output. Only positive area regions are output. However, my library also provides a rich language on rectangle types, which includes intersection of two rectangles that will return the degenerate intersection as a line modeled as a rectangle with zero area. Let's say that instead of intersection it is the subtraction operation being performed (equivalent to (A & !B)) on the same two rectangles as your example above. My library would leave A unchanged; would Barend's library clip the boundary off of A and back it away by some epsilon in service of the boundaries-are-"in" semantic? It may be that Barend only implements intersection and union, in which case this issue would never come up, because without inverting operations the semantic is never problematic. You can generally model "in" semantics by adding an additional bit of precision to your (integer) coordinates (shifting coordinate values left by one) and inflating your polygons by one, resulting in all-odd coordinates. Shapes own the even coordinates near their boundaries, and your example above would produce a non-zero intersection containing one column of even coordinates. We do things this way at the application level when we need to worry about the boundary condition. You could model "out" semantics by deflating by one instead. I can't imagine how much of a pain all this becomes when using floating point coordinates due to numerical error. Does Barend's library consider a boundary to be shared if it is within some epsilon? What about if it is within some epsilon of equal angle? It seems it would get awfully messy in a hurry. Even if I were to implement arbitrary angle geometry, I would do it on fixed point coordinates at the interfaces and probably use variable precision numerical types for robustness of the algorithms themselves. Given that people seem very concerned about compatibility with cgal, it seems like we should be figuring out what it does in these cases. The boost license allows cgal to ship my code with their library if they choose. I think it makes a lot of sense to think about compatibility issues. Thanks, Luke
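A worked illustration of the shift-and-inflate trick on Phil's example, assuming (xlow, ylow, xhigh, yhigh) rectangles:

  shift left by one:  (0,0,1,1) -> (0,0,2,2)     (1,0,2,1) -> (2,0,4,2)
  inflate by one:     (0,0,2,2) -> (-1,-1,3,3)   (2,0,4,2) -> (1,-1,5,3)
  intersection:       (1,-1,3,3)

The result is a width-2 column of area containing the even coordinate x = 2, i.e. the shared boundary registers as "in"; deflating by one instead would leave the two rectangles disjoint, giving the "out" semantic.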
participants (5)
- Bruno Lalande
- John Femiani
- Phil Endecott
- Simonson, Lucanus J
- vicente.botet