
AMDG

Simonson, Lucanus J wrote:
> Joel also expressed the opinion that generic programming could allow the library to use floating point or integer arithmetic for coordinates. I remain skeptical of that. Also, because the coordinate type template parameter would go on practically everything, I fear it would become onerous to the user (and me.)
How about giving a coordinate type template parameter to everything, but also providing typedefs for all the specializations with the default coordinate type?
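For example (just a sketch, with made-up names), the template parameter stays available for those who need it, while the common case is a plain typedef:

    template<class Coordinate = int>
    class rectangle_data
    {
    public:
        rectangle_data(Coordinate xl, Coordinate yl, Coordinate xh, Coordinate yh)
            : xl_(xl), yl_(yl), xh_(xh), yh_(yh) {}
        // accessors etc. omitted
    private:
        Coordinate xl_, yl_, xh_, yh_;
    };

    // users of the default (integer) coordinate type never spell out the parameter
    typedef rectangle_data<>       rectangle;
    typedef rectangle_data<double> rectangle_d;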
> I generally don't like to place a requirement on the template parameter that it provide specific functions, and instead prefer to go through adaptors, but I do see your point. In my work, I rarely have the freedom to modify a class in legacy code to conform to the requirements set forth in a generic library I want to use (or write.)
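(For concreteness, the non-intrusive adaptor approach looks roughly like this; all names here are invented for illustration. The legacy type is never modified, only described to the library through a traits specialization:)

    struct LegacyRect { int x1, y1, x2, y2; };   // legacy type we cannot touch

    // library-side customization point, specialized by the user
    template<class T> struct rectangle_traits;

    template<>
    struct rectangle_traits<LegacyRect>
    {
        typedef int coordinate_type;
        static coordinate_type left  (const LegacyRect& r) { return r.x1; }
        static coordinate_type bottom(const LegacyRect& r) { return r.y1; }
        static coordinate_type right (const LegacyRect& r) { return r.x2; }
        static coordinate_type top   (const LegacyRect& r) { return r.y2; }
    };

    // generic code only ever talks to rectangle_traits<T>, never to T directly
    template<class T>
    typename rectangle_traits<T>::coordinate_type width(const T& r)
    {
        return rectangle_traits<T>::right(r) - rectangle_traits<T>::left(r);
    }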
> One of the things I like about my current design is that it allows the function prototype to disambiguate for the compiler (and the user) what concepts the arguments are supposed to model. For example:
>
> template <class T> template <class T2>
> bool RectangleImpl<T>::contains(const RectangleImpl<T2>& rectangle);
>
> requires that T and T2 both provide RectangleInterface adaptors. (Obviously Impl is a misnomer and would be changed, but if I throw away the unorthodox design pattern then it doesn't much matter.) I also like that it isn't ambiguous to the user which rectangle is doing the containing, vs:
>
> template <class T, class T2>
> bool contains(const T& containing_rectangle, const T2& contained_rectangle);
>
> where the user will remember that there is a function that takes two parameters, but will have to check the header file or documentation to remind themselves what the order means, since it is somewhat arbitrary. I would need to come up with a convention for ordering and be consistent.
If I need another function that you have not provided, I would need to implement it as a non-member... Another disadvantage is that if I only need one piece of the rectangle functionality, I still need to include everything because it's all in a single class.
> The other unfortunate thing that happens is that overloading of free functions becomes problematic when types are generically polymorphic:
>
> template <class T, class T2>
> bool contains(const T& containing_prism, const T2& contained_prism);
>
> because a prism should provide both rectangle and prism adaptors, and since two prisms would satisfy both functions, the result is a compiler error even when enable_if is used.
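To spell the collision out: if the overloads are gated only on concept checks (the traits below are hypothetical), a prism satisfies both checks, so neither overload drops out and the call is ambiguous:

    #include <boost/utility/enable_if.hpp>

    template<class T> struct is_rectangle { static const bool value = false; };
    template<class T> struct is_prism     { static const bool value = false; };

    template<class T, class T2>
    typename boost::enable_if_c<
        is_rectangle<T>::value && is_rectangle<T2>::value, bool>::type
    contains(const T& containing_rectangle, const T2& contained_rectangle);

    template<class T, class T2>
    typename boost::enable_if_c<
        is_prism<T>::value && is_prism<T2>::value, bool>::type
    contains(const T& containing_prism, const T2& contained_prism);

    // For a Prism type that specializes both is_rectangle and is_prism to true,
    // contains(prism1, prism2) matches both overloads and fails to compile.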
I'm not entirely convinced that it should be possible to treat a prism as a rectangle without an explicit conversion of some kind. contains for a prism should treat it as a prism. The general solution to this kind of problem is tag dispatching.
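Roughly like this (a minimal sketch; the tag and traits names are made up):

    struct rectangle_tag {};
    struct prism_tag {};

    // each registered type says what kind of geometry it is
    template<class T> struct geometry_traits;   // e.g. { typedef prism_tag tag; };

    template<class T, class T2>
    bool contains_impl(const T& outer, const T2& inner, rectangle_tag, rectangle_tag)
    {
        // rectangle containment test goes here
        return true;
    }

    template<class T, class T2>
    bool contains_impl(const T& outer, const T2& inner, prism_tag, prism_tag)
    {
        // prism containment test goes here
        return true;
    }

    // single public entry point; the tags select the implementation,
    // so there is never an ambiguous overload set
    template<class T, class T2>
    bool contains(const T& outer, const T2& inner)
    {
        return contains_impl(outer, inner,
                             typename geometry_traits<T>::tag(),
                             typename geometry_traits<T2>::tag());
    }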
> If we could induce it to compile, the user is left with no way to call the rectangle version on two prisms, whereas with my library they would currently just do:
>
> bool containment = prism.mimicRectangle().contains(prism2);
>
> to view the prism as a rectangle to get access to the rectangle version of contains.
How important is it to not make a copy of the prism? Alternately, how important is it to be able to copy the whole prism while treating it as a rectangle?
> The only solutions I can see are to embed the conceptual types into the function names, such as rectangle_contains(a, b) and prism_contains(a, b), or better still, make them into namespaces:
>
> namespace rectangle {
>   template <class T, class T2>
>   bool contains(const T& containing_rectangle, const T2& contained_rectangle);
> }
> namespace prism {
>   template <class T, class T2>
>   bool contains(const T& containing_prism, const T2& contained_prism);
> }
>
> which is workable.
That doesn't work well because it ought to be possible to write code that deals with either a rectangle or a prism, as long as it only relies on the common properties--such as contains.
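That is, a user ought to be able to write something like this once and have it work for rectangles and prisms alike (a sketch; it assumes an unqualified contains that can be found for Shape, e.g. by argument-dependent lookup):

    #include <cstddef>
    #include <vector>

    // generic algorithm that relies only on the shared contains() operation
    template<class Shape>
    bool all_contained(const Shape& outer, const std::vector<Shape>& pieces)
    {
        for (std::size_t i = 0; i < pieces.size(); ++i)
            if (!contains(outer, pieces[i]))
                return false;
        return true;
    }

With per-concept namespaces the algorithm would have to be told which namespace's contains to call, which defeats the purpose.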
> I'm looking for the boost community's feedback on what approach to rewriting the library they find preferable, and a sort of final decision on whether the generic inheritance/static_cast/mimicry design-pattern I came up with is unacceptable for acceptance into boost, implying that a complete rewrite is truly needed. Is the requirement that code never do anything the standard doesn't guarantee is safe, or is the requirement that code be portable and functional?
The absolute minimum requirement is that it should work on several platforms. I think that to the extent that is possible, code should only rely on what the standard guarantees. I believe that if you rely only on non-member functions and reinterpret_cast back and forth, it is possible to make the mimicry legal.
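Very roughly, the shape of what I mean is something like this (only a sketch with invented names; whether a given variation is strictly conforming depends on details, such as layout compatibility, that would need to be pinned down):

    struct rect_data { int xl, yl, xh, yh; };   // the user's plain data type

    // an interface type that adds no data members and no virtual functions
    struct rect_view : rect_data {};

    // the library operates on rect_view through non-member functions only
    inline bool contains(const rect_view& outer, const rect_view& inner)
    {
        return inner.xl >= outer.xl && inner.xh <= outer.xh
            && inner.yl >= outer.yl && inner.yh <= outer.yh;
    }

    // the user's type is viewed as the interface type and back again;
    // making this cast defensible is the crux of the legality question
    inline const rect_view& as_rect(const rect_data& r)
    {
        return reinterpret_cast<const rect_view&>(r);
    }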
> Are there other considerations that make what I'm doing objectionable?
In Christ,
Steven Watanabe