
Some of you may remember the discussion related to a potential submission of a computational geometry library from Intel last year. Interest in the library was expressed, and some interesting conversation about its technical specifics ensued. The most common feedback was "show us the code". At that time, I said that it would take several months to get approval to release the code outside of Intel, and I decided to put further discussion of my library in Boost on hold until I could release the code to you. I have now gotten approval to release the code to Boost, licensed it under the Boost license, and uploaded it to the vault under Math Geometry. Please feel free to take a look and discuss the library on the list or directly with me. See the README file for more specific information about the submission.

I realize that there has been a recent submission of a GIS data processing library, targeting mostly cartography applications, from Barend Gehrels. I agree with him that the overlap between his library and mine is minimal. He is using floating point arithmetic and works with arbitrary-angle polygons. My library is integer based and works with polygons that are restricted to right angles and 45-degree angles. Since last year I have added support for resizing sets of polygons that may have 45-degree edges. While there are some minimal numerical adjustments that need to be made to ensure integer coordinates on the output of such resize operations, that is still a far cry from what CGAL or a good GIS data crunching library is doing. The two capabilities are more complementary than competing.

I expect that many changes to my library will be recommended (or required) by the Boost community before it might be considered for acceptance into Boost. Also, I recognize that some synthesis between multiple related geometry submissions (and ideas) is likely to take place. I'm fine with that; it is what I want to happen and why I came to Boost.
Now that I am able to share the original source code with you, it is possible for fruitful discussion in that vein to resume. Luke

Simonson, Lucanus J wrote:
Now I have gotten approval to release the code to boost, licensed it under the boost license and uploaded it to the vault under Math Geometry.
Hi Luke,

That's good news. I feared that you had been frightened off by the previous discussions. Although it's good to have the code, and no doubt some people who can scan C++ faster than I can will really appreciate it, what I'd love to see is more in the way of rationale and concept documentation. For example:

- My recollection of the last part of the discussions the first time around was that they focused on the "nasty" way in which you made it possible to adapt a legacy struct to work with your library, and in particular how you added methods to the class by casting from a base class to a subclass. It would be great to see a write-up of the rationale for that compared with the alternatives. Perhaps this could just be distilled out of the previous discussions. My feeling is that it may come down to this: what you've done is the most pragmatic solution for your environment, but it isn't something that could ever make it into the C++ standard library (since it uses casts in a non-standards-compliant way). So, should Boost only accept libraries that could be acceptable for C++, or could Boost have a more liberal policy? Also, how much weight should be put on the "legacy" benefits of your approach? My feeling is that the standard library, and Boost, typically prefer to "do it right as if you could start all over again", rather than fitting in with legacy problems.

- Your library has a limited scope: 2D orthogonal and 45-degree lines. (And its name ought to include some indication of that.) I would like to see some exploration of the ways in which your interface (as opposed to your algorithms) is tied to this domain, i.e. to what extent your interface could be re-used for a more general or differently-focused library. For example, could you have a Point concept that could be common with Barend's library, allowing Federico's spatial indexes to be used with both? Or do you require (e.g. for algorithmic efficiency reasons) a point concept that is inherently incompatible?

- There are plenty of application domains for computational geometry. Presumably you're processing chip layouts. The other case that I can think of for orthogonal geometry is in GUIs (until you have windows with rounded corners). Everything else that I can think of (GIS, games, mechanical CAD) needs arbitrary angles or 3D. You may be proposing something that no-one else here has any use for, except as a starting point for more geometry libraries in Boost - in which case your concepts and other interface choices will be given a lot more attention than your algorithms.

Regards, Phil.

That's good news. I feared that you had been frightened-off by the previous discussions.
Although it's good to have the code, and no doubt some people who can scan C++ faster than I can will really appreciate it, what I'd love to see is more in the way of rationale and concept-documentation. For example:
- My recollection of the last part of the discussions the first time around was that they focused on the "nasty" way in which you made it possible to adapt a legacy struct to work with your library, and in particular how you added methods to the class by casting from a base class to a subclass. It would be great to see a write up of the rationale for that compared with the alternatives. Perhaps this could just be distilled out of the previous discussions. My feeling is that it may come down to this: what you've done is the most pragmatic solution for your environment, but it isn't something that could ever make it into the C++ standard library (since it used casts in a non-standards-compliant way). So, should Boost only accept libraries that could be acceptable for C++, or could Boost have a more liberal policy? Also, how much weight should be put on the "legacy" benefits of your approach? My feeling is that the standard library, and Boost, typically prefer to "do it right as if you could start all over again", rather than fitting in with legacy problems.
Even more so, as it is possible to write the algorithms in a way that makes them agnostic of the concrete point data type used (as long as adaptors are available, allowing this point type to be made compatible with the expected point concept). Joel explicitly alluded to that in the first discussion and I think there is no other way forward. Remember, Boost is a collection of libraries with a major emphasis on generic interfaces, which clearly is one reason for its acceptance by the community. And anything not conformant to the Standard shouldn't even be considered as a Boost library, IMHO. Regards, Hartmut

In response to Phil,
Although it's good to have the code, and no doubt some people who can scan C++ faster than I can will really appreciate it, what I'd love to see is more in the way of rationale and concept-documentation. For example:
I uploaded 23KLOC. I don't expect people to give up their day jobs just to read my code, so your request seems quite sensible.
- My recollection of the last part of the discussions the first time around was that they focused on the "nasty" way in which you made it possible to adapt a legacy struct to work with your library, and in particular how you added methods to the class by casting from a base class to a subclass. It would be great to see a write up of the rationale for that compared with the alternatives. Perhaps this could just be distilled out of the previous discussions. My feeling is that it may come down to this: what you've done is the most pragmatic solution for your environment, but it isn't something that could ever make it into the C++ standard library (since it used casts in a non-standards-compliant way). So, should Boost only accept libraries that could be acceptable for C++, or could Boost have a more liberal policy? Also, how much weight should be put on the "legacy" benefits of your approach? My feeling is that the standard library, and Boost, typically prefer to "do it right as if you could start all over again",
rather than fitting in with legacy problems.
The rationale was basically that it satisfied everyone's requirements at the time. Boost community input wasn't gathered, so strict compliance with what the standard says is safe wasn't a requirement. I can't answer your question on what policy Boost should have. Compatibility with legacy code is really the crux of the issue. Were my design goals the right ones? For internal development, I think they were. For Boost development, probably not, and I'm willing to change the design to reflect the change in goals. I'm hoping for dialogue on what the new design should be, to prevent unnecessary iterations.
- Your library has a limited scope: 2D orthogonal and 45-degree lines.
(And its name ought to include some indication of that.) I would like to see some exploration of in what way your interface (as opposed to your algorithms) is tied to this domain, i.e. to what extent your interface could be re-used for a more general or differently-focused library. For example, could you have a Point concept that could be common with Barend's library, allowing Federico's spatial indexes to be
used with both? Or do you require (e.g. for algorithmic efficiency reasons) a point concept that is inherently incompatible?
For me, requiring a point concept that has x() and y() member functions is unnecessary and restricts the usefulness of the library. I could obviously make such a requirement, but I would prefer to have adaptor functions such as: coordinate_type point_interface<T>::getX(const T& point); which allow compatibility with anyone's point concept, rather than requiring that everyone have syntactically compatible point concepts. Federico's spatial indexes should be compatible with both libraries already, provided the inputs are conceptually 2D points, regardless of what API they provide or concept they model. Even if I took out the inheritance/casting, I still wouldn't require a specific API on the user type. Shouldn't generic code ideally work with any type that is conceptually a point, rather than only with types that model the point concept it sets forth?
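To make the adaptor-function idea concrete, here is a minimal sketch of how a point_interface<T> specialization could let a legacy type, with its own member names, work with a generic algorithm. The LegacyPoint type, its xc/yc members, and manhattan_distance are all hypothetical illustrations, not part of the actual submission.

```cpp
#include <cassert>

// Hypothetical legacy point with its own naming convention.
struct LegacyPoint {
    int xc;
    int yc;
};

// Adaptor template in the spirit of the point_interface<T> idea: the
// library calls these static functions instead of requiring x()/y()
// members on the user's type.
template <typename T>
struct point_interface;  // primary template left undefined

template <>
struct point_interface<LegacyPoint> {
    typedef int coordinate_type;
    static coordinate_type getX(const LegacyPoint& p) { return p.xc; }
    static coordinate_type getY(const LegacyPoint& p) { return p.yc; }
};

// A generic algorithm written only against the adaptor, never against
// the concrete type's own API.
template <typename T>
typename point_interface<T>::coordinate_type
manhattan_distance(const T& a, const T& b) {
    typedef point_interface<T> pi;
    typename pi::coordinate_type dx = pi::getX(a) - pi::getX(b);
    typename pi::coordinate_type dy = pi::getY(a) - pi::getY(b);
    return (dx < 0 ? -dx : dx) + (dy < 0 ? -dy : dy);
}
```

Any type with a point_interface specialization can be passed to manhattan_distance unchanged, which is the non-intrusiveness being argued for here.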
- There are plenty of application domains for computational geometry. Presumably you're processing chip layouts. The other case that I can think of for orthogonal geometry is in GUIs (until you have windows with rounded corners). Everything else that I can think of (GIS, games, mechanical CAD) needs arbitrary angles or 3D. You may be proposing something that no-one else here has any use for, except as a starting point for more geometry libraries in Boost - in which case your
concepts and other interface choices will be given a lot more attention
than your algorithms.
This is why I suggested synthesis with other library proposals. We have an adaptive scheme in one of our applications that uses the rectilinear algorithms when the input is purely rectilinear, the 45-degree algorithms when the input contains 45-degree edges, and legacy general-polygon algorithms (not included in my submission) for general polygon inputs. A good set of general polygon algorithms (including numerical robustness) would complement what I am providing.

I'm not sure your assessment of restricted applicability is entirely true. I recently interviewed a PhD student whose thesis was on networking and who wrote a suboptimal algorithm for computing the connectivity graph on a set of axis-parallel rectangles to model network connectivity requirements between nodes. He managed to turn it into 7 different publications at mostly DARPA-sponsored networking conferences and workshops, on the strength that it was at least better than the previously published algorithm in the field. I asked him why he didn't use an R*Tree and he had never heard of it. I asked him about scanline and got a blank look. He had zero knowledge of computational geometry. I think the applications do exist and that people are building their own instead of using what is already out there. That's why I think Boost is a good place for it. Luke

Simonson, Lucanus J wrote:
For me, requiring a point concept that has x() and y() member functions is unnecessary and restricts the usefulness of the library. I could obviously make such a requirement, but I would prefer to have adaptor functions such as: coordinate_type point_interface<T>::getX(const T& point); which allow compatibility with anyone's point concept
Shouldn't generic code ideally work with any type that is conceptually a point, rather than only types that model the point concept they set forth?
You should be able to express this something like "Type T models the Point Concept if there exists a specialisation point_interface<T> having members ....". Can you write a definition like that? I think it would be an interesting thought-experiment to consider how the standard library would look if re-written in this style, e.g. not using std::pair in maps. There are three groups of people affected:
- Users with legacy code, who benefit.
- Users without legacy code, who have a (perhaps small) extra step on their learning curve.
- People writing libraries, who have more work to do.
Phil.

Phil wrote:
You should be able to express this something like "Type T models the Point Concept if there exists a specialisation point_interface<T> having members ....". Can you write a definition like that?
As it stands right now: Type T models the Point concept if it is default constructible, copy constructible, and there exists a specialization of PointInterface<T> having the members:

static Unit pointGet(const T& t, Orientation2D orient);
static void pointSet(T& t, Orientation2D orient, Unit value);
static T pointConstruct(Unit x, Unit y);

Except for capitalization, I would keep this portion of the library the same. (I like runtime resolution of getting the x or y value, and depend on compiler constant propagation and optimization to provide fast access if orient is known at compile time.) Orientation2D is a class-encapsulated enum, to enforce compile-time checking that index values are legal. Phil wrote:
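A sketch of what a user-side specialization of that interface might look like. The Unit typedef, the Orientation2D enum, and the UserPoint type are simplified stand-ins for illustration, not GTL's actual definitions (in particular, the real Orientation2D is a class-encapsulated enum rather than a bare one).

```cpp
#include <cassert>

// Simplified stand-ins for the library's types (assumptions):
// Unit is the integer coordinate type, Orientation2D selects an axis.
typedef int Unit;
enum Orientation2D { HORIZONTAL = 0, VERTICAL = 1 };

// A user's point type storing coordinates in an array.
struct UserPoint { Unit coords[2]; };

template <typename T>
struct PointInterface;  // primary template: user types must specialize

// The three members the Point concept requires, mapped onto UserPoint.
template <>
struct PointInterface<UserPoint> {
    static Unit pointGet(const UserPoint& t, Orientation2D orient) {
        return t.coords[orient];
    }
    static void pointSet(UserPoint& t, Orientation2D orient, Unit value) {
        t.coords[orient] = value;
    }
    static UserPoint pointConstruct(Unit x, Unit y) {
        UserPoint p = { { x, y } };
        return p;
    }
};
```

The library would then address UserPoint only through PointInterface<UserPoint>, never through the type's own members.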
I think it would be an interesting thought-experiment to consider how the standard library would look if re-written in this style, e.g. not using std::pair in maps. There are three groups of people affected: - Users with legacy code, who benefit. - Users without legacy code, who have a (perhaps small) extra step on their learning curve. - People writing libraries, who have more work to do.
Users without legacy code have the option of either using the geometry types provided by the library or defining their own that match the expectations of the default (non-specialized) version of the interface. This should offset the added complexity of the extra layer of abstraction to a large degree. I have found new users starting from scratch to be generally happier using GTL than the users with legacy code. I always consider doing work as a library author to save work for the library user to be a win. I don't see how I create work for people writing other libraries, since they have the option of conforming to the default interface. Bruno wrote:
The point concept of Barend's library doesn't require any x() or y() member functions. The fact that the library proposes 2 predefined point classes, point_xy and point_ll, can be a bit confusing, but don't be fooled: they do *not* represent the point concept, they are only 2 classes that satisfy it. The point concept uses accessors of this kind: template <int I> value() const;
That looks much better to me. I didn't follow Barend's thread closely enough; I was unsubscribed at the time. Bruno wrote:
In the second preview of the library, there was also runtime indexing accessors, but they should disappear to only have compile-time accessors as shown above (I'm currently working on this with Barend).
There is no need to have both run-time and compile-time accessors, and you can do things with run-time accessors that you cannot do with compile-time accessors. The reverse is not true. I think you made the wrong decision and should consider making the accessors runtime only instead. The way my accessors work is that I pass a class-encapsulated enum value which, if the data type is particularly well formed, can be used to index directly into an array of member data. This turns out to be just as fast (when optimized) as direct access to a data member when the index is known at compile time, and faster if the index is a runtime variable. Consider the difference between:

int get(array<int, 2> point, int index) { return point[index]; }

and

int get(const tuple<int, int>& point, int index) { if(index==0) return get<0>(point); else return get<1>(point); }

If you want to pass the index around as data, you don't want it to be a compile-time parameter. I want to pass it around as data. I want my point to be a class-encapsulated array.
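A sketch of the "class-encapsulated enum indexing into a class-encapsulated array" design described above. The class names and the perpendicular() helper are illustrative assumptions; the point being demonstrated is that when the orientation is ordinary data it can be computed, stored, and passed around at run time, while a compile-time-constant orientation still collapses to a direct member access under optimization.

```cpp
#include <cassert>

// Class-encapsulated enum: only legal axis values can be constructed,
// so index validity is checked at compile time.
class Orientation2D {
public:
    enum Value { HORIZONTAL = 0, VERTICAL = 1 };
    Orientation2D(Value v) : value_(v) {}
    int to_int() const { return value_; }
    // Computing a new orientation from an old one is the kind of thing
    // you can only do when the index is data, not a template parameter.
    Orientation2D perpendicular() const {
        return Orientation2D(value_ == HORIZONTAL ? VERTICAL : HORIZONTAL);
    }
private:
    Value value_;
};

// Class-encapsulated array: the enum value indexes member data directly.
class Point {
public:
    Point(int x, int y) { coords_[0] = x; coords_[1] = y; }
    int get(Orientation2D orient) const { return coords_[orient.to_int()]; }
    void set(Orientation2D orient, int v) { coords_[orient.to_int()] = v; }
private:
    int coords_[2];
};
```

An algorithm that scans in one direction and then restarts in the perpendicular direction can simply flip the orientation variable, with no code duplication per axis.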
Also, the point concept as currently defined might be replaced by a point_traits class usable the same way as what you propose with your point_interface<T>. It gives something like : point_traits<T>::value<0>(p) to have the first coordinate of a point p. The point concept would then follow the exact definition just given by Phil. Not sure yet this will be done, but it brings a lot of advantages (non-intrusiveness, possibility of using native arrays as points, ...).
I have thought about changing the name to traits. For more complex types, such as polygons, there are typedefs in the interface for iterators, and sometimes entire classes defined to adapt a polygon data type to iterator-range semantics. The new pattern might look like:

template <class T>
class point_traits {
public:
    typedef typename T::coordinate_type coordinate_type;
    static inline coordinate_type get(const T& t, Orientation2D orient) { return t.get(orient); }
    static inline void set(T& t, Orientation2D orient, coordinate_type value) { t.set(orient, value); }
    static inline T construct(coordinate_type x, coordinate_type y) { return T(x, y); }
};

which can be partially specialized to allow legacy types to work with the library. Hartmut wrote:
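For concreteness, here is a sketch of how such a traits template could be specialized for a legacy struct that has none of the expected members or typedefs. The legacy_point type is hypothetical; a full specialization is shown, but a partial specialization would work the same way for a family of legacy templates.

```cpp
#include <cassert>

enum Orientation2D { HORIZONTAL = 0, VERTICAL = 1 };

// Default traits in the pattern described above: assumes the type has
// get/set members and a coordinate_type typedef.
template <class T>
class point_traits {
public:
    typedef typename T::coordinate_type coordinate_type;
    static inline coordinate_type get(const T& t, Orientation2D orient) { return t.get(orient); }
    static inline void set(T& t, Orientation2D orient, coordinate_type value) { t.set(orient, value); }
    static inline T construct(coordinate_type x, coordinate_type y) { return T(x, y); }
};

// A legacy struct with none of those members...
struct legacy_point { long x; long y; };

// ...adapted non-intrusively by specializing the traits.
template <>
class point_traits<legacy_point> {
public:
    typedef long coordinate_type;
    static inline coordinate_type get(const legacy_point& t, Orientation2D orient) {
        return orient == HORIZONTAL ? t.x : t.y;
    }
    static inline void set(legacy_point& t, Orientation2D orient, coordinate_type v) {
        (orient == HORIZONTAL ? t.x : t.y) = v;
    }
    static inline legacy_point construct(coordinate_type x, coordinate_type y) {
        legacy_point p = { x, y };
        return p;
    }
};
```

The legacy struct itself is untouched; only the specialization knows how its members map onto the concept.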
Joel explicitly alluded to that in the first discussion and I think there is no other way forward. ... And - anything not conformant to >the Standard shouldn't even be considered as a Boost library, IMHO
Yes, Joel did suggest compile-time accessors, and tuples, in the first discussion. A point can be a tuple; it doesn't have to be. Heterogeneous coordinate types for different axes' coordinate values are reasonable, and provided that they all convert to and from the library's coordinate type they work fine. I have been trying to figure out how to make templating the coordinate data type (everywhere) work; it provides some advantages, but adds a lot of complexity. It would be even worse if I tried to have separate x_coordinate_type and y_coordinate_type. I'm fine with changing the library (even big changes) to make it conformant to the Standard. I said so when I first posted my code to the vault. Some of my internal install base is even willing to port their code to a new "boost-ified" version, if it comes to that. I think doing so would constitute an improvement. Luke

After this morning's discussion I implemented what I've had in mind to make the implementation of my library standard generic programming. It is just a demonstration of the GP interfaces I have in mind, and is pretty simple. I define point, interval, point_3d and rectangle concepts and use them with each other. For example, the point_3d is also a model of the point concept, and I use it that way, and the rectangle interface is implemented in terms of the interval. The coordinate type is templated everywhere, and I use that fact to demonstrate the compatibility that enables between geometry objects with different coordinate data types. Near the end I show how a made-up layout object data type, which has the four coordinates of a rectangle as part of its member data, can be made to model the rectangle concept by providing a specialization of rectangle_traits for it, and then I use it with another rectangle type with a different coordinate type. Please feel free to critique the code. I want to make sure I've got everything right before I rewrite the library. Thanks, Luke

Please feel free to critique the code. I want to make sure I've got everything right before I rewrite the library.
I am a Boost user who would be interested in a geometry lib. I only had a chance to glance at the code & thread, but I like the idea of wrapping a generic type in order to create an adapted point type. This way one can reuse, say, an array of float and still use the type system to keep vectors and points separate. I DO think that when I am writing code that needs points I don't want to depend on the fact that a wrapper was used. I would rather have concepts for the wrapper's interface, including the member functions, free functions, and metafunctions that must be available. But I have to say that compile-time indexing is useful for more than just performance; it is also good for genericity. If I have a struct, for instance, I cannot (safely) index the coordinates. If I have an encapsulated class, I definitely cannot. Wouldn't it be possible to use a _different_ type for each orientation, and provide multiple overloads of the get/set functions? For example:

struct some_legacy_point {
    double x_;
    double z_;
    string something_else_;
    double y_; // Don't ask why it is in this order, it's legacy, right!
};

typedef boost::mpl::int_<0> X;
typedef boost::mpl::int_<1> Y;

// Just provides access to x and y
class adapted_point_reference {
private:
    some_legacy_point& base_; // This is a reference, so it is not DefaultConstructible
public:
    // Construct as a reference
    adapted_point_reference(some_legacy_point& base) : base_(base) {}
    // CopyConstructible (refers to the same some_legacy_point)
    adapted_point_reference(adapted_point_reference& other) : base_(other.base_) {}
    // I imagine these are part of AffinePointConcept or something
    typedef double coordinate;
    coordinate get(X const&) { return base_.x_; }
    coordinate get(Y const&) { return base_.y_; }
    // I imagine these are part of MutableAffinePointConcept or something
    void put(X const&, double value) { base_.x_ = value; }
    void put(Y const&, double value) { base_.y_ = value; }
};

I think the same approach would work if some_legacy_point keeps coordinates in an array. -- John Femiani

Hi Luke,
Please feel free to critique the code. I want to make sure I've got everything right before I rewrite the library.
In an earlier post you mentioned that your interface was better than requiring a point model to have member functions x() and y(). That's absolutely true, but the conclusion is that such a concept definition is not generic, so it's the concept itself which is wrong, not the use of concepts (you know this of course). We would never propose such a concept, precisely because you can't use concrete types *as is*. The fundamental idea of generic programming is the use of "external" adaptation. To that effect, your concept definition is way better than one requiring members like x() or y(), and I also noticed in the attached sample code that there are no wrappers: this is very good and fundamental, since having to create an instance of a point_traits<T> wrapping a concrete point type wouldn't be in the spirit of good generic programming and we would complain. Still, there are some points to discuss. Take this for instance:

template <typename T>
class point_traits {
public:
    typedef typename T::coordinate_type coordinate_type;
    static inline coordinate_type get(const T& point, orientation_2d orient) { return point.get(orient); }
    static inline void set(T& point, orientation_2d orient, coordinate_type value) { point.set(orient, value); }
    static inline T construct(coordinate_type x_value, coordinate_type y_value) { return T(x_value, y_value); }
};

No doubt this is generic as intended, but IMO it is a bit monolithic and forces an unnecessarily verbose syntax. I would rather have point_traits define types but not functions:

template <typename T>
struct point_traits {
    typedef typename T::coordinate_type coordinate_type;
};

because this separation gives you more latitude to decide how to define the functions. For example, you can have:

struct point_concept {
    template <class T>
    static inline typename point_traits<T>::coordinate_type get(T const& point, orientation_2d orient) { return point.get(orient); }
};

which users can specialize for concrete types just as in your case.
The difference becomes apparent in the user-side syntax: you can type

point_concept::get(point1, HORIZONTAL)

instead of:

point_traits<T>::get(point1, HORIZONTAL)

which can be quite significant when T is syntactically much more than just 'T'. Now, of course, I would have free functions instead:

template <class T>
inline typename point_traits<T>::coordinate_type get_component(T const& point, orientation_2d orient) { return point.get(orient); }

because free functions allow you to refactor your concept hierarchy with minimal impact on the end user. Let me explain this: in an upcoming discussion we would have to argue about the concepts themselves (in the abstract, regardless of the specific C++ interface). Questions like: Does a point have coordinates, or is there a coordinate-free point concept separate from the one with coordinates? Does the library include vectors as well? Since, from a certain POV, both vectors and points have coordinates, should there be a "cartesian" concept which is nothing but a tuple of components? What about dimensions? Is a certain point concept for 2D? Is there a separate point concept for 3D? Such a discussion would affect the very definition of the existing concepts and would result in a refactoring of the concepts design. Now, say you haven't yet considered any of the above, so you just define your point concept as you do now. This fixes the following interface:

point_concept::get(point1, HORIZONTAL)

which ties the operation of extracting a component of a point to a *specific* concept, via the nesting of the function in a class that corresponds to that particular concept. Now say that you want to add vectors to the library, and vectors have x,y components as well: what do you do *now*? You could create yet another accessor with essentially the same method, vector_concept::get(vector1, HORIZONTAL), but then users would have to specialize both point_concept and vector_concept even if their concrete class is the same for both.
Or perhaps you define a new concept, say cartesian, and deprecate point_concept::get in favor of cartesian::get (or even break compatibility, if you are like me and never bargain on refactoring, especially these days when method renaming is a snap in most development environments). Free functions, OTOH, give you much more flexibility to evolve the design, because users that were calling get_component(q, orient) won't have to change that call even if there are now new concepts for which point is just a refinement. Of course the choice between free functions and member functions is not particular to generic programming but much more general. Yet it is particularly important in generic programming, because users have to specialize the functions that instrument the concepts' interface, and free functions are totally independent, hence the most flexible. Granted, in a perfectly tied-up and "closed" design this wouldn't matter at all, but in the case of your library I think this flexibility is very important, because in its current form the library is not too general, so chances are it will stretch and be refactored over time as other domains are added to it. Best -- Fernando Cacciola SciSoft http://scisoft-consulting.com http://fcacciola.50webs.com http://groups.google.com/group/cppba
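The refactoring argument can be sketched in a few lines. Below, a hypothetical library point type uses the default free-function accessor, while a user's vector type, which has no get(orient) member, is hooked in by a plain overload; if point later becomes a refinement of some broader concept, callers of get_component are unaffected. The type names and the int coordinate type are illustrative simplifications (Fernando's version returns point_traits<T>::coordinate_type).

```cpp
#include <cassert>

enum orientation_2d { HORIZONTAL = 0, VERTICAL = 1 };

// A library-style point type following the default protocol.
struct point {
    int c[2];
    int get(orientation_2d o) const { return c[o]; }
};

// Default free-function accessor: forwards to a get(orient) member.
template <class T>
int get_component(const T& p, orientation_2d orient) {
    return p.get(orient);
}

// A user's concrete type without that member...
struct user_vector { int dx; int dy; };

// ...adapted by overloading the free function. No concept-named class
// has to be specialized, so later concept refactoring (point_concept,
// vector_concept, cartesian, ...) leaves this call site untouched.
int get_component(const user_vector& v, orientation_2d orient) {
    return orient == HORIZONTAL ? v.dx : v.dy;
}
```

Generic algorithms written against get_component(x, orient) then accept both types, which is the decoupling Fernando is advocating.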

Fernando wrote:
struct point_concept { template<class T> typename point_traits<T>::coordinate_type static inline get( T const& point, orientation_2d orient ) { return point.get(orient); } } ;
which users can specialize for concrete types just as in your case. The difference becomes apparent in the user side syntax: point_concept::get(point1, HORIZONTAL) instead of: point_traits<T>::get(point1, HORIZONTAL)
I like that! Believe me, you don't have to explain why that is better, I was a little worried about the verbose syntax I was generating in my example code. This is much cleaner. Thank you.
Now of course I would have free functions instead: template<class T> typename point_traits<T>::coordinate_type static inline get_component( T const& point, orientation_2d orient ) { return point.get(orient); }
In the previous example the struct was merely a stand-in for a namespace, with the advantage that it can be passed as a template parameter (which has all the advantages that I can appreciate), but haven't you given up both the advantage of disambiguating the function name and the ability to pass the concept it belongs to as a parameter, by making the accessor a free function?
Now say that you want to add vectors to the library, and vectors have x,y components as well: what do you do *now*? You could create yet another accessor with essentially the same method vector_concept::get(vector1, HORIZONTAL)
but then users would have to specialize both point_concept and vector_concept even if their concrete class is the same for both.
In that particular case I would allow a geometric vector to model the point concept, reusing that concept, and have the vector concept include only the new behaviors that are specific to a vector: get magnitude, get direction. My layered rectangle reuses the rectangle concept and adds layer only in its own concept. If you mean you want to get the axis-parallel components of a vector, you would probably want new functions for that, not reuse the syntax of a point's accessors, which would more aptly give the vector's position. However, I do see your point about refactorability. I'm not sure I buy it; perhaps a more apt example would help. I don't really mind typing partial duplication of interfaces for similar concepts, since the goal of doing so is to make the API intuitive for the user. Thanks again, Luke

In the second preview of the library, there was also runtime indexing accessors, but they should disappear to only have compile-time accessors as shown above (I'm currently working on this with Barend).
There is no need to have both run time and compile time accessors, and you can do things with run time accessors that you cannot do with compile time accessors. The reverse is not true. I think you made the wrong decision and should consider making the accessors runtime only instead.
Let me clarify that it's not a final decision, just an idea. But my gut feeling is that compile-time access is a weaker requirement than runtime access, so given that a concept should always require only the *minimal* set of requirements, compile-time should be preferred.
Consider the difference between:
int get(array<int, 2> point, int index) { return point[index]; }
and
int get(const tuple<int, int>& point, int index) { if(index==0) return get<0>(point); else return get<1>(point); }
The latter looks very ugly indeed, and it's precisely why I prefer compile-time access. Here is the same example with compile-time access:

template <int I> int get(array<int, 2> point) { return point[I]; }

and

template <int I> int get(const tuple<int, int>& point) { return point.get<I>(); }

This time, we access both structures equally easily. That is because arrays and tuples are both accessible with a compile-time index, while tuples are not accessible with a runtime index. Adapting a struct { x, y, z } would be trickier in both cases, but looks more natural with compile-time indexes in my opinion (you map compile-time stuff onto other compile-time stuff). As pointed out by John, another advantage of compile-time access is the ability to have different types for each coordinate. It's something that has been asked for several times during recent discussions on this list. I was even wondering the other day whether it wouldn't be better not to require any coordinate_type typedef at all, and to have the algorithms deduce the type of each coordinate by a BOOST_TYPEOF on the accessor.

For me, the only advantage of runtime access is being easier to use inside the library algorithms. As the writer of a library is not there to ease his own life but the user's, this kind of advantage shouldn't be taken into account if it's the only one. This being said, if you have a precise example of an algorithm that technically needs runtime indexing on points, could you please show it to me? I haven't been able to think of such a situation until now. If every algorithm of the library is able to work with compile-time indexes, then I definitely don't see why it would require runtime access. It would mean requiring more than actually needed, which is nonsense for a concept. Regards, Bruno
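The "adapting a struct { x, y, z } with compile-time indexes" case Bruno mentions can be sketched by specializing an accessor on the index. The xyz struct, coordinate_access template, and get<I> function are hypothetical illustrations of the idea, not part of either library.

```cpp
#include <cassert>

// A plain struct with named members and no indexing of its own.
struct xyz { double x; double y; double z; };

// Map each compile-time index onto the corresponding named member:
// compile-time stuff onto compile-time stuff, as Bruno puts it.
template <int I>
struct coordinate_access;  // primary template: one specialization per axis

template <> struct coordinate_access<0> {
    static double get(const xyz& p) { return p.x; }
};
template <> struct coordinate_access<1> {
    static double get(const xyz& p) { return p.y; }
};
template <> struct coordinate_access<2> {
    static double get(const xyz& p) { return p.z; }
};

// Generic accessor in the style of the tuple example above.
template <int I>
double get(const xyz& p) { return coordinate_access<I>::get(p); }
```

Each specialization could also expose its own coordinate type, which is how compile-time indexing enables heterogeneous coordinate types.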

There is no need to have both run time and compile time accessors, and you can do things with run time accessors that you cannot do with compile time accessors. The reverse is not true. I think you made the wrong decision and should consider making the accessors runtime only instead.
Let me clarify that it's not a final decision, just an idea. But my gut feeling is that compile-time access is a weaker requirement than runtime access, so given that a concept must always model the *minimal* requirements, compile-time access should be preferred.
In what way weaker? This implies that there should be a type that can provide compile time access, but can't meet the stronger requirement and provide runtime access. Can you produce such a type?
This time, we access both structures equally easily. That is because arrays and tuples are both accessible with a compile-time index, while tuples are not accessible with a runtime index. Adapting a struct { x, y, z } would be trickier in both cases, but looks more natural with compile-time indexes in my opinion (you map one compile-time thing onto another compile-time thing).
You do have a point there, though I would say that tuples are accessible with a runtime index, just not as conveniently.
As pointed out by John, another advantage of compile-time access is the ability to have different types for each coordinate. It's something that has been asked for several times during recent discussions on this list. I was even wondering the other day if it wouldn't be better to not require any coordinate_type typedef and have the algorithms deducing by themselves the type of each coordinate by a BOOST_TYPEOF on the accessor.
Hmmm, that does inspire some thought, doesn't it? At some point, though, the different coordinate types need to be used together and the result will be auto-casting. It seems reasonable to me to declare the coordinate type of an object with heterogeneous coordinates as the one to which the others will auto-cast when used together. Clearly, this could be bad if you mix signed and unsigned, but that would end badly eventually anyway.
For me, the only advantage of runtime access is to be easier to use inside the library algorithms. As the writer of a library is not here to facilitate his own life but the user's life, this kind of advantage shouldn't be taken into account if it's the only one.
The reason for parameterizing such concepts as orientation and direction in the library is not for the library author's convenience, but for the user's. We typically see code that looks like this coming from the average programmer:

if(layer % 2) {
    ...150 lines of application code that look like do_something(point.x() + value); ...
} else {
    ...150 lines of near-identical application code that look like do_something(point.y() + value); ...
}

which is copy-paste coding run amok. More specifically, the programmer is using flow control as a substitute for data. We want them to refactor and write the following instead:

orientation_2d orient = layer % 2 ? VERTICAL : HORIZONTAL;
...150 lines of application code, where there were 300 before, that look like do_something(point.get(orient) + value);

The refactored form is preferable for a number of reasons, not least of which being that it is 50% less code.
This being said, if you have a precise example of algorithm that technically needs runtime indexing on points, could you please show it to me? I haven't been able to think of such a situation until now. If every algorithm of the library is able to work on compile-time indexes, so I definitely don't see why it would require runtime access. It would mean requiring more than actually needed, which is a no-sense for a concept.
In the precise example above the application code does not know layer at compile time; it is a runtime variable, and orientation depends on it. To rewrite the above code (for the application programmer; I can't help them, they would have to do it themselves) they would write:

template <int I>
void function_i_have_to_write_because_of_compile_time_accessors(point_type point) {
    ...150 lines of code that depend on I...
}

if(layer % 2)
    function_i_have_to_write_because_of_compile_time_accessors<1>(point);
else
    function_i_have_to_write_because_of_compile_time_accessors<0>(point);

which is impractical because in the real case the block being factored into the function probably depends on a couple dozen local variables in the application code and cannot easily be factored into a function. Most users are not as sophisticated as library authors, and requiring them to template their code to use our templated code does raise the bar for learning and using the library. Also, in this case, it makes it less likely that the user adopts the isotropic programming style we prefer, because refactoring flow control into compile time parameters is less convenient than refactoring flow control into runtime parameters, since flow control is decided at run time. Thanks, Luke
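The runtime refactoring Luke prefers can be shown as a compilable sketch. The point class and function names here are illustrative stand-ins, not the library's actual interface:

```cpp
#include <cassert>

// Illustrative stand-in for the library's isotropic orientation type.
enum orientation_2d { HORIZONTAL = 0, VERTICAL = 1 };

struct point_2d {
    int coord_[2];
    int get(orientation_2d orient) const { return coord_[orient]; }
};

// The 300-line if/else collapses to one body: orientation is data, not flow control.
int do_something(const point_2d& pt, int layer, int value) {
    orientation_2d orient = (layer % 2) ? VERTICAL : HORIZONTAL;
    return pt.get(orient) + value;
}
```

No template parameter leaks into the application code, so the user never has to enumerate the index values in flow control.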

AMDG
For me, the only advantage of runtime access is to be easier to use inside the library algorithms. As the writer of a library is not here to facilitate his own life but the user's life, this kind of advantage shouldn't be taken into account if it's the only one.
The reason for parameterizing such concepts as orientation and direction in the library is not for the library author's convenience, but for the user. We typically see code that looks like this coming from the average programmer:
if(layer % 2) { ...150 lines of application code that look like do_something(point.x() + value); ... } else { ...150 lines of near identical application code that look like do_something(point.y() + value); ... }
To the extent possible, the library should rely only on compile time accessors, IMO. This does not preclude having the points also provide runtime accessors for users. It might work to allow either compile time access or runtime access and if either one is not given, fall back to implementing it in terms of the other. In Christ, Steven Watanabe

Steven wrote:
To the extent possible, the library should rely only on compile time accessors, IMO. This does not preclude having the points also provide runtime accessors
for users. It might work to allow either compile time access or runtime access and
if either one is not given, fall back to implementing it in terms of the other.
The problem is that doing so makes the isotropic programming style I alluded to in my reply to Bruno less workable. The isotropic style is pervasive in my library, and defines it, to a great extent. This style is a BKM for geometric programming that we have developed, and the library is the means of propagating the BKM throughout the developer community. I can't be expected to provide runtime parameters on every API for the user's convenience and then implement flow control in every case to convert the runtime parameter into compile time constants and different instantiations of template functions guarded by flow control. Particularly since I frequently use the isotropic data type as the index into an array, it seems silly to instantiate a different function for every index value. Moreover, I have defined a rich language of isotropic types, conversions and behaviors which isn't compatible with compile time parameters, because the constant has to be a built in type and not an object. This breaks down the type safety of the parameter and waters down the usefulness of the isotropic objects, since they can't be used as template parameters. The following does not compile (I wish it did):

const orientation_2d horizontal_constant;
template <orientation_2d orient> coordinate get(const point& pt) { return coord_[orient.to_int()]; }
coordinate value = get<horizontal_constant>(my_point);

because orient has to be a built in type. If you throw away the isotropic types: direction_1d, orientation_2d, direction_2d, orientation_3d, direction_3d, winding_direction, then you lose the type safety and the rich language of behaviors provided by the library. What good are these types if they can't be used in the accessor? We both have the same intent of improving programming practices through library development.
The way people code geometry typically looks like:

if(condition) point.x() = value; else point.y() = value;

You are advocating the style:

if(condition) point.get<0>() = value; else point.get<1>() = value;

and I am advocating:

point.get(condition) = value;

I agree that compile time access is better than having two completely different functions. However, it leads the developer back to using flow control instead of data to control the behavior of their code, increased code bloat (in the user's code) and reduced code quality. Isotropy is somewhat unique to geometry, because the exploitation of geometric symmetry to refactor code doesn't really translate to all other domains. In general, you are probably right that compile time accessors should be preferred, but for geometry, the isotropic style should be preferred. We should think about what best accomplishes our goals rather than base decisions on a meme. I realize that my library is somewhat unique among geometry libraries in that others don't use isotropy, or do so to a lesser and less formalized extent. That uniqueness is what makes the library good. Isotropy is a huge improvement in geometric programming over the common practice. That is why the interface requires the user to access coordinates with the isotropic (runtime) type-safe parameter; it encourages the user to adopt the best practice. That improvement is what we are trying to get into the hands of geometry library users by submitting the library to boost. Thanks, Luke
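The three access styles being contrasted can be put side by side on a toy point class (all names here are invented for illustration):

```cpp
#include <cassert>

enum orientation_2d { HORIZONTAL = 0, VERTICAL = 1 };

// A toy point offering all three access styles from the discussion.
struct point {
    int coord_[2];
    int& x() { return coord_[0]; }                       // named accessors
    int& y() { return coord_[1]; }
    template <int I> int& get() { return coord_[I]; }    // compile-time index
    int& get(orientation_2d o) { return coord_[o]; }     // isotropic runtime index
};
```

With the isotropic accessor, `point.get(condition ? VERTICAL : HORIZONTAL) = value;` replaces the if/else that both other styles force on the caller.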

On Fri, May 2, 2008 at 2:22 PM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
We both have the same intent of improving programming practices through library development. The way people code geometry typically looks like:
if(condition) point.x() = value; else point.y() = value;
You are advocating a style:
if(condition) point.get<0>() = value; else point.get<1>() = value;
and I am advocating:
point.get(condition) = value;
I don't think Joel, Steven, myself, or anyone else suggesting compile-time indexing is advocating that. If the user's style is to use .x() and .y(), then he can do so. If he wants to use run-time indexing, then he can do so. But when the user decides to use a library function like contains(), the contains() implementation will rely on compile-time indexing only. We are not advocating a particular point type, only advocating that the algorithm implementations be generic and require the minimal concept needed for correctness. --Michael Fawcett

On Fri, May 2, 2008 at 2:35 PM, Michael Fawcett <michael.fawcett@gmail.com> wrote:
On Fri, May 2, 2008 at 2:22 PM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
We both have the same intent of improving programming practices through library development. The way people code geometry typically looks like:
if(condition) point.x() = value; else point.y() = value;
You are advocating a style:
if(condition) point.get<0>() = value; else point.get<1>() = value;
and I am advocating:
point.get(condition) = value;
I don't think Joel, Steven, myself, or anyone else suggesting compile-time indexing is advocating that.
I apologize, I shouldn't assume that my view of what Joel and Steven say is any more correct than your view thus far. I would like to retract that and simply state: "I am not advocating that." --Michael Fawcett

Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
We both have the same intent of improving programming practices through library development. The way people code geometry typically looks like:
if(condition) point.x() = value; else point.y() = value;
You are advocating a style:
if(condition) point.get<0>() = value; else point.get<1>() = value;
and I am advocating:
point.get(condition) = value;
Michael wrote:
I don't think Joel, Steven, myself, or anyone else suggesting compile-time indexing is advocating that. I apologize, I shouldn't assume that my view of what Joel and Steven say is any more correct than your view thus far. I would like to retract that and simply state: "I am not advocating that."
I suppose it behooves me to apologize as well. I didn't really intend to put words in other people's mouths. I was trying to point out a contradiction in your shared position. When the argument is that I should provide runtime parameters to the user, but use compile time parameters internally, you are exactly advocating code that looks like:

if(condition) point.get<0>() = value; else point.get<1>() = value;

by implication, because there is no other way to get information from runtime to compile time than through flow control and enumeration of conditions. Of course, you don't intend to advocate that, but it is the inescapable consequence of moving the parameter to compile time in the design of my library. Moreover, putting the contradiction aside, there is still no argument in favor of the compile time parameter. Not even an argument that it improves performance (it doesn't, by the way) or that it is easier to use (it isn't). Just that it "should" be compile time for the sake of "genericity". My question is: to what end? Now, I like making the coordinate type a template argument. I was skeptical of that at first. I added it into the design (based upon the community's feedback) and I was very happy to see how well it worked when mixing 32 bit and 64 bit geometry in my example code. It will actually solve a problem my users have had with incompatibility between regular 32 bit geometry and geometry that uses their own integer numerical type, which overrides arithmetic operators to check for min and max int values that are "sticky" and model INF, -INF and NAN. My users will be happy with that change as well and it will be easy for me to convince them to adopt it. How can I advocate compile time parameters on the accessors to my users until you've first convinced me? Thanks, Luke

On Fri, May 2, 2008 at 5:40 PM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
I suppose it behooves me to apologize as well. I didn't really intend to put words in other people's mouths. I was trying to point out a contradiction in your shared position.
When the argument is that I should provide runtime parameters to the user, but use compile time parameters internally you are exactly advocating code that looks like:
if(condition)
point.get<0>() = value; else point.get<1>() = value;
Why do you think I am advocating that? The user is free to use whatever his point class provides, be that .x()/.y() members, free functions, or array-like access.

if (condition) point.x() = value;
else point[1] = value; // this point class supports member functions and array-like access!

float magnitude = sqrt(dot_product(point, point));

The dot_product internals are where the compile-time access takes place. dot_product would therefore work on arrays, tuples, and fusion tuples out of the box, and any custom type simply needs to add get<> support (which would hopefully be trivial, perhaps using Fusion). --Michael Fawcett

On Fri, May 2, 2008 at 3:02 PM, Michael Fawcett <michael.fawcett@gmail.com> wrote:
float magnitude = sqrt(dot_product(point, point));
The dot_product internals are where the compile-time access takes place. dot_product would therefore work on arrays, tuples, and fusion tuples out of the box and any custom type simply needs to add get<> support (which would hopefully be trivial, perhaps using Fusion).
I haven't followed this discussion so far, but I am also in favor of get<> overloads for types that are tuple-like, including 2D, 3D and 4D vectors. Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

AMDG Simonson, Lucanus J wrote:
I suppose it behooves me to apologize as well. I didn't really intend to put words in other people's mouths. I was trying to point out a contradiction in your shared position.
When the argument is that I should provide runtime parameters to the user, but use compile time parameters internally you are exactly advocating code that looks like:
if(condition) point.get<0>() = value; else point.get<1>() = value;
by implication because there is no other way to get information from runtime to compile time than through flow control and enumeration of conditions. Of course, you don't intend to advocate that, but it is the inescapable consequence of moving the parameter to compile time in the design of my library.
I haven't really looked at your library yet. If there is a significant amount of code that can't easily be refactored to use compile time accessors, by all means use runtime access. Note that in my earlier message I only intended to claim that algorithms that don't need to deal with dynamic variation of the index should use compile time access. Actually, I've recently been thinking about whether it's possible to support both forms of access transparently:

p[0] // get the first element using runtime access.
p[mpl::int_<0>()] // get the first element using compile time access.

Note that if only runtime indexes are provided, the mpl::int_<0>() will implicitly convert to an int.
Moreover, putting the contradiction aside, there is still no argument in favor of compile time parameter. Not even an argument that it improves performance (it doesn't, by the way)
Have you measured the performance when adapting a struct like this: struct Point { int x; int y; }; ? In Christ, Steven Watanabe
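Steven's dual-access idea (runtime p[0] alongside compile-time p[mpl::int_<0>()], with implicit conversion as the fallback) can be sketched without MPL using a small index tag type; every name below is invented for the sketch:

```cpp
#include <cassert>

// Plays the role of mpl::int_<N>: it carries the index in its type, and
// converts implicitly to int so it still works with runtime-only subscripts.
template <int N>
struct index_c {
    operator int() const { return N; }
};

struct point {
    int coord_[2];
    int operator[](int i) const { return coord_[i]; }        // runtime access
    template <int N>
    int operator[](index_c<N>) const { return coord_[N]; }   // compile-time access
};
```

When a point type provides only the runtime operator[], passing index_c<N>() still works because overload resolution falls back on the implicit conversion to int, just as Steven describes.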

Steven wrote:
I haven't really looked at your library yet. If there is a significant
amount of code that can't easily be refactored to use compile time accessors, by all means use runtime access.

This is indeed the case. Use of the isotropic types is pervasive throughout. They are the foundation that the rest of the library was implemented in terms of.
Note in my earlier message I only intended to claim that algorithms that don't need to deal with dynamic variations of the index, should use compile time access.
It looks like enums are legal arguments to this type of compile time parameter:

#include <iostream>
enum orientation_2d_enum { HORIZONTAL = 0, VERTICAL = 1 };
template <orientation_2d_enum orient> void foo() {
    if(orient == HORIZONTAL) std::cout << "HORIZONTAL\n";
    else std::cout << "VERTICAL\n";
}
int main() { foo<HORIZONTAL>(); }

The above code works fine. In the case where the value is an enum value (which is actually quite frequent) I absolutely agree that it is better for it to be compile time rather than runtime, because the enum can enforce type safety. I think providing both compile time and runtime accessors is fine. The runtime accessors can be implemented in terms of the compile time ones in the cases where the type doesn't provide indexing (such as a tuple based point.) People wanting better performance with the runtime accessor can specialize it where appropriate.
Have you measured the performance when adapting a struct like this: struct Point { int x; int y; };
Yes, when compiled with optimization, accessing the struct members directly is identical in performance to going through the accessors (and through the extra wrapper class from the original design as well.) Please note: if you forget to use the result of your computation, dead code removal leads to a misleading result, because the compiler is more successful at removing dead code in the case where there is less to be removed, so take care when confirming my result. Thank you for your thoughtful response; it was truly in the spirit of your sig line and inspired me to solve the problem I was having with the type safety of compile time values, which was the only (substantive) argument I had against them. My isotropic constants are enum values to allow the compiler to more successfully perform constant propagation when optimizing the code. Therefore, it is easy for me to use these enums as the parameters of compile time accessors, and that integrates nicely with the rest of the library and complements what is already there. I'll post a revised design proposal (code) presently. Thanks again, Luke
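The resolution Luke describes here, a runtime accessor implemented in terms of the compile-time ones for a type (like a tuple-based point) that lacks native indexing, might look like this sketch; the type and function names are invented:

```cpp
#include <cassert>

enum orientation_2d_enum { HORIZONTAL = 0, VERTICAL = 1 };

// A tuple-like point that natively offers only compile-time access.
struct tuple_point {
    int x_, y_;
    template <int I> int get() const { return I == 0 ? x_ : y_; }
};

// Runtime accessor layered on the compile-time one: the small, fixed set
// of index values is enumerated once, here, instead of in user code.
int get(const tuple_point& pt, orientation_2d_enum orient) {
    return orient == HORIZONTAL ? pt.get<0>() : pt.get<1>();
}
```

Users who want to avoid the branch for a particular point type can specialize or overload the runtime accessor, as Luke suggests.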

AMDG Simonson, Lucanus J wrote:
I think providing both compile time and runtime accessors is fine. The runtime accessors can be implemented in terms of the compile time ones in the cases where the type doesn't provide indexing (such as a tuple based point.) People wanting better performance with the runtime accessor can specialize it where appropriate.
Cool.
Have you measured the performance when adapting a struct like this: struct Point { int x; int y; };
Yes, when compiled with optimization it is identical to access the struct members directly or through the accessors (and the extra wrapper class from the original design as well.) Please note: if you forget to use the result of your computation, dead code removal leads to a misleading result because the compiler is more successful in removing dead code in the case where there is less to be removed, so take care confirming my result.
Ok. There might be extra optimizations possible when the index is known at compile time as opposed to run time. (I'm not talking about the difference between get<X>(p) and p[X] here, but the difference between X being known at compile time, using templates to avoid code duplication, and X being known only at runtime and passed as a function argument.) This is really a property of the algorithm rather than the point class, though. In Christ, Steven Watanabe

Steven wrote:
Ok. There might be extra optimizations possible when the index is known at compile time as opposed to run time. (I'm not talking about the difference between get<X>(p) and p[X] here, but the difference between X being known at compile time, using templates to avoid code duplication, and X being known only at runtime and passed as a function argument.) This is really a property of the algorithm rather than the point class, though.
I agree with you. It really taxes the compiler to optimize my highly nested inline function calls, and it has too much opportunity to give up early instead of getting the job done. Switching from gcc 3.4.2 to gcc 4.2.0 resulted in about a 30% speedup in application code that relies heavily on my types and algorithms. Compile times went up slightly too. That tells me that the compiler is less than fully successful in optimizing things. If the compiler is having trouble with constant propagation we can't necessarily expect it to optimize away the overhead of the compile time accessor either, but at least it doesn't have the option of giving up before instantiating the template function. On a related note, we recently confirmed that the 4.3.0 compiler (on newer hardware) converts:

int myMax(int a, int b) { return a > b ? a : b; }

into:

.globl _Z5myMaxii
.type _Z5myMaxii, @function
_Z5myMaxii:
.LFB2:
.file 1 "t255.cc"
.loc 1 7 0
.LVL0:
.loc 1 7 0
cmpl %edi, %esi
cmovge %esi, %edi
.LVL1:
.loc 1 10 0
movl %edi, %eax
ret

instead of:

.globl _Z5myMaxii
.type _Z5myMaxii, @function
_Z5myMaxii:
.LFB2:
.file 1 "t255.cc"
.loc 1 7 0
pushq %rbp
.LCFI0:
movq %rsp, %rbp
.LCFI1:
movl %edi, -4(%rbp)
movl %esi, -8(%rbp)
.loc 1 9 0
movl -4(%rbp), %eax
cmpl -8(%rbp), %eax
jle .L2
movl -4(%rbp), %eax
movl %eax, -12(%rbp)
jmp .L3
.L2:
movl -8(%rbp), %eax
movl %eax, -12(%rbp)
.L3:
movl -12(%rbp), %eax
.loc 1 10 0
leave
ret

when compiling for an old processor or with an old compiler. That is about 4X fewer instructions and NO BRANCH instructions. Note: cmovge is a new instruction in the Core2 (Merom) processors. I have been using the following:

template <class T>
inline const T& predicated_value(const bool& pred, const T& a, const T& b) {
    const T* input[2] = {&b, &a};
    return *(input[pred]);
}

instead of ? syntax, because it was 35% faster than the branch-based machine code the compiler generated when executed on the Prescott-based hardware of the time.
I'll be able to go back to letting the compiler know best as soon as we cycle out the old hardware and cycle in the new compiler. Thanks, Luke
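Luke's predicated_value is short enough to verify directly. Note that on modern compilers a plain ?: generally produces the cmov he describes, so the trick is mostly of historical interest:

```cpp
#include <cassert>

// Branchless select: index a two-element pointer array with the predicate
// instead of branching. pred must be exactly false (0) or true (1).
template <class T>
inline const T& predicated_value(const bool& pred, const T& a, const T& b) {
    const T* input[2] = { &b, &a };
    return *(input[pred]);
}
```

The array is ordered {&b, &a} so that pred == true (index 1) selects a, matching the semantics of `pred ? a : b`.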

Hi Luke,
The following does not compile (I wish it did):
const orientation_2d horizontal_constant;
template <orientation_2d orient> coordinate get(const point& pt) { return coord_[orient.to_int()]; }
coordinate value = get<horizontal_constant>(my_point);
because orient has to be a built in type.
Actually, that to_int() is getting back into runtime, so this wouldn't count as compile-time access even if it compiled. OTOH, this could be made to compile if desired:

struct horizontal { static const int value = 0; };

template <class orient> coordinate get(const point& pt) { return pt[orient::value]; }

coordinate value = get<horizontal>(my_point);

Having said that, if the library aims to educate users not to do something as terrible as

if(condition) point.x() = value; else point.y() = value;

then it should advocate higher level abstractions:

vector delta = condition ? vector(value,0) : vector(0,value);
translate modify(delta);
point = modify(point);

which, being essentially coordinate-free, hides the coordinate-access issue from the user, pushing it into the library implementation. Best -- Fernando Cacciola SciSoft http://scisoft-consulting.com http://fcacciola.50webs.com http://groups.google.com/group/cppba
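Fernando's coordinate-free fragment can be fleshed out into compilable form. The translate and vec2 types below are guesses at his intent, not code from any library (vec2 is renamed from his `vector` to avoid clashing with std::vector):

```cpp
#include <cassert>

struct point { int x, y; };
struct vec2  { int x, y; };

// A translate functor built from a displacement vector: the user never
// touches individual coordinates, only whole geometric objects.
struct translate {
    vec2 delta;
    explicit translate(const vec2& d) : delta(d) {}
    point operator()(const point& p) const {
        point r = { p.x + delta.x, p.y + delta.y };
        return r;
    }
};
```

The condition now selects between two displacement vectors rather than between two coordinate accessors, which is exactly the shift from coordinate access to higher-level abstraction he is advocating.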

Fernando wrote:
Actually, that to_int() is getting back into runtime, so this wouldn't count as compile-time access even if it compiled.
Well, yes, I did say it wouldn't compile, so I figured I wouldn't follow the rules. Allowing the type to auto-cast to int breaks down the type safety (since it allows unintended conversions.) The point was to show that type safety (not allowing conversion of orientation to and from int) is incompatible with the compiler's requirement that the parameter be a built in type.
OTOH, this could be made to compile if desired: struct horizontal{ static const int value = 0 ; } template <class orient> coordinate get(const point& pt) { return pt[orient::value]; } coordinate value = get<horizontal>(my_point);
Since template metaprogramming is Turing complete, we can envision implementing the entire set of runtime behaviors of the isotropic objects in my library as compile time behaviors, which would lead to a truly dauntingly complex (potentially fun) exercise in template metaprogramming. It might result in a functional library, but it would raise the bar for learning and using the library pretty much all the way up to where only intellectual giants can reach it.
then it should advocate higher level abstractions: vector delta = condition ? vector(value,0) : vector(0,value) {} translate modify (delta); point = modify(point); which, being essentially coordinate-free, hides the coordinate-access issue from the user pushing into the library implementation.
I am guessing translate is a class, and modify is an object of that class that is constructed from a vector and provides an operator() which takes a point, translates it and returns the translated point. In fact, this is quite close to the kinds of things we do. If you grep for predicated_value in my vault submission, it is a stand-in for ? syntax and is used quite heavily. This is what we are, in fact, advocating people do, and the isotropic types are often the condition (as you can well imagine.) The idea is to make the user code coordinate-free as much as possible, allowing users to work at a higher level of abstraction and pushing the low level details into the library (apart from the heavy algorithms, that is the service I'm providing with the library.) I don't agree, though, that what the accessors look like inside the library won't matter to the user. I don't think I can completely abstract away the existence of coordinates, nor do I want users to have to rely on their own legacy interfaces. Preferably, the user should feel empowered to extend the library by writing in its style to suit their need. From that standpoint, the style shouldn't be allowed to become unnecessarily complex. Thanks, Luke

Hello, I haven't been able to participate in this already well-advanced discussion since I was away at the time. To summarize, I basically agree with the conclusions given in the latest posts. I will only quickly answer the questions asked of me at the beginning.
Let me clarify that it's not a final decision, just an idea. But my gut feeling is that compile-time access is a weaker requirement than runtime access, so given that a concept must always model the *minimal* requirements, compile-time access should be preferred.
In what way weaker? This implies that there should be a type that can provide compile time access, but can't meet the stronger requirement and provide runtime access. Can you produce such a type?
boost::tuple is such a type. The whole idea is that whenever runtime access is available, compile-time access is easily providable, while the inverse is not true (except with a series of if statements). So compile-time requirements are easier to satisfy than runtime requirements. This is what I have in mind when I say that compile-time access is more "minimalistic" than runtime access. Keep in mind that I'm really talking about the moment when the user has to provide a compatible point from the data type he's working with. As said earlier in this thread, he can still use his runtime accessors in his own code.
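Bruno's example type can be demonstrated concretely, with std::tuple standing in for boost::tuple: when the element types are heterogeneous, there is no single return type a runtime get(index) could have, so only the compile-time form exists naturally:

```cpp
#include <cassert>
#include <tuple>

// Compile-time access works per index, each call with its own return type.
// A general runtime `?? get(tuple, int index)` cannot be written, because
// the return type would have to vary with a runtime value.
std::tuple<int, double> heterogeneous_point() {
    return std::make_tuple(1, 2.5);
}
```

This is the sense in which the compile-time requirement is strictly weaker: every runtime-indexable point can trivially satisfy it, but a heterogeneous tuple cannot satisfy the runtime one.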
As pointed out by John, another advantage of compile-time access is the ability to have different types for each coordinate. It's something that has been asked for several times during recent discussions on this list. I was even wondering the other day if it wouldn't be better to not require any coordinate_type typedef and have the algorithms deducing by themselves the type of each coordinate by a BOOST_TYPEOF on the accessor.
Hmmm, that does inspire some thought, doesn't it? At some point, though, the different coordinate types need to be used together and the result will be auto-casting. It seems reasonable to me to declare the coordinate type of an object with heterogeneous coordinates as the one to which the others will auto-cast when used together. Clearly, this could be bad if you mix signed and unsigned, but that would end badly eventually anyway.
I wasn't thinking about letting the compiler cast everything by itself, but rather about using precise type promotion rules when needed. But it's just an idea for now; I have only noticed that some people do require the ability to use heterogeneous data types. Regards Bruno

Bruno wrote:
As pointed out by John, another advantage of compile-time access is the ability to have different types for each coordinate. It's something that has been asked for several times during recent discussions on this list. I was even wondering the other day if it wouldn't be better to not require any coordinate_type typedef and have the algorithms deducing by themselves the type of each coordinate by a BOOST_TYPEOF on the accessor.
Can you provide me with a code example of how to do this? I'm getting really tired of writing typename point_traits<T>::coordinate_type over and over and over again. I would greatly appreciate a better way. Thanks, Luke

As pointed out by John, another advantage of compile-time access is the ability to have different types for each coordinate. It's something that has been asked for several times during recent discussions on this list. I was even wondering the other day if it wouldn't be better to not require any coordinate_type typedef and have the algorithms deducing by themselves the type of each coordinate by a BOOST_TYPEOF on the accessor.
Can you provide me with a code example of how to do this? I'm getting really tired of writing typename point_traits<T>::coordinate_type over and over and over again. I would greatly appreciate a better way.
You can get the type of a coordinate by applying BOOST_TYPEOF to the expression by which you access that coordinate. For instance, let's say we have a tuple<float, double> t:

BOOST_TYPEOF(t.get<0>()) x = t.get<0>();
BOOST_TYPEOF(t.get<1>()) y = t.get<1>();

x will be a float and y will be a double. Here you can see that we ended up writing "t.get<0>()" and "t.get<1>()" twice, once for the declaration and once for the initialization. BOOST_AUTO can help by both declaring and assigning at the same time:

BOOST_AUTO(x, t.get<0>());
BOOST_AUTO(y, t.get<1>());

x is a float and receives the value of t.get<0>(); y is a double and receives the value of t.get<1>(). Bruno
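Bruno's BOOST_TYPEOF/BOOST_AUTO pattern is what C++11 later standardized as decltype and auto; his tuple example can be restated with those (std::tuple standing in for boost::tuple, and sum_coordinates an invented helper):

```cpp
#include <cassert>
#include <tuple>

// decltype plays the role of BOOST_TYPEOF; auto plays the role of BOOST_AUTO.
double sum_coordinates(const std::tuple<float, double>& t) {
    decltype(std::get<0>(t)) x = std::get<0>(t);   // const float&
    auto y = std::get<1>(t);                       // double
    return x + y;
}
```

Either way, the algorithm never needs a coordinate_type typedef: the accessor expression itself carries the per-coordinate type.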

I'm getting really tired of writing: typename point_traits<T>::coordinate_type Bruno wrote: BOOST_TYPEOF(t.get<0>()) x = t.get<0>(); BOOST_TYPEOF(t.get<1>()) y = t.get<1>();
Unfortunately almost all of the cases I have look like:

template <orientation_3d_enum orient, typename T>
static inline typename point_3d_traits<T>::coordinate_type get(const T& point) { return point.get(orient); }

and:

static inline void set(T& point, orientation_3d orient, typename point_3d_traits<T>::coordinate_type value) {
    if(orient == PROXIMAL) set<PROXIMAL>(point, value);
    else point_concept::set(point, orient, value);
}

where typeof can't help me. I'm sure that I'll be declaring local coordinate variables when I rewrite some of the more involved algorithms, but the extra typing won't bother me. For the user, the coordinate type they chose to instantiate with is known and they can just use it directly. In fact, user code (of which there is very little in my example) looks very concise and clean. Thanks, Luke

Hi,
- Your library has a limited scope: 2D orthogonal and 45-degree lines.
(And its name ought to include some indication of that.) I would like to see some exploration of the ways in which your interface (as opposed to your algorithms) is tied to this domain, i.e. to what extent your interface could be re-used for a more general or differently-focused library. For example, could you have a Point concept that could be common with Barend's library, allowing Federico's spatial indexes to be used with both? Or do you require (e.g. for algorithmic efficiency reasons) a point concept that is inherently incompatible?
For me, requiring a point concept that has x() and y() member functions is unnecessary and restricts the usefulness of the library. I could obviously make such a requirement, but I would prefer to have adaptor functions such as:

coordinate_type point_interface<T>::getX(const T& point);

which allow compatibility with anyone's point concept, rather than requiring that everyone have syntactically compatible point concepts. Federico's spatial indexes should be compatible with both libraries already, provided they are conceptually 2D points, regardless of what API they provide or concept they model. Even if I took out the inheritance/casting, I still wouldn't require a specific API on the user type. Shouldn't generic code ideally work with any type that is conceptually a point, rather than only with types that model the point concept it sets forth?
The point concept of Barend's library doesn't require any x() or y() member functions. The fact that the library proposes 2 predefined point classes, point_xy and point_ll, can be a bit confusing, but don't be fooled: they do *not* represent the point concept, they are only 2 classes that satisfy it. The point concept uses accessors of this kind:

template <int I> value() const;

In the second preview of the library, there were also runtime indexing accessors, but they should disappear in favor of compile-time accessors only, as shown above (I'm currently working on this with Barend). Also, the point concept as currently defined might be replaced by a point_traits class usable the same way as what you propose with your point_interface<T>. It gives something like:

point_traits<T>::value<0>(p)

to get the first coordinate of a point p. The point concept would then follow the exact definition just given by Phil. I'm not sure yet this will be done, but it brings a lot of advantages (non-intrusiveness, the possibility of using native arrays as points, ...). Regards Bruno

On Thu, May 1, 2008 at 8:41 AM, Bruno Lalande <bruno.lalande@gmail.com> wrote:
The point concept of Barend's library doesn't require any x() or y() member functions. The fact that the library proposes 2 predefined point classes, point_xy and point_ll, can be a bit confusing, but don't be fooled: they do *not* represent the point concept, they are only 2 classes that satisfy it. The point concept uses accessors of this kind:
template <int I> value() const;
Could you just specialize boost::get?
In the second preview of the library, there were also runtime indexing accessors, but they should disappear in favor of compile-time accessors only, as shown above (I'm currently working on this with Barend).
Also, the point concept as currently defined might be replaced by a point_traits class usable the same way as what you propose with your point_interface<T>. It gives something like: point_traits<T>::value<0>(p) to get the first coordinate of a point p. The point concept would then follow the exact definition just given by Phil. I'm not sure yet this will be done, but it brings a lot of advantages (non-intrusiveness, the possibility of using native arrays as points, ...).
That's a very big advantage. What about using boost::tuples? boost::get gives you these things with a common syntax. I believe that ideally, the compile-time indexable point concept should support the boost::get syntax, and the run-time version (for looping over members) should support array access. --Michael Fawcett

On Fri, May 2, 2008 at 4:44 AM, Bruno Lalande <bruno.lalande@gmail.com> wrote:
template <int I> value() const;
Could you just specialize boost::get?
I'm not sure I understand. Do you mean that you'd like the point concept to require that the provided point be accessible by boost::get?
Yes, although I still maintain that it shouldn't be called a PointConcept. You show your "value" template function that does basically the same thing as boost::get. Instead of: brunos::value<0>(myvec); it should be: boost::get<0>(myvec); IMHO... --Michael Fawcett

Yes, although I still maintain that it shouldn't be called a PointConcept. You show your "value" template function that does basically the same thing as boost::get.
Instead of:
brunos::value<0>(myvec);
it should be:
boost::get<0>(myvec);
IMHO...
Yes, the value<>() function will surely be renamed to get<>() in order to have a common interface with tuples. And using boost::get<>() can be made possible quite easily, I think. The big advantage is that it becomes even easier to build a point from a tuple. Bruno
participants (9)
- Bruno Lalande
- Emil Dotchevski
- Fernando Cacciola
- Hartmut Kaiser
- John Femiani
- Michael Fawcett
- Phil Endecott
- Simonson, Lucanus J
- Steven Watanabe