Provisional Boost.Generic and Boost.Auto_Function (concepts without concepts)

For those of you who were following the Boost.Auto_Function call for interest, this thread sort of spawned off of that. Boost.Generic (not a part of boost) is a C++0x library intended to replace BCCL as a way of specifying concepts and concept maps, and, when used in conjunction with Boost.Auto_Function (also not a part of boost, though it's in the sandbox and has online documentation at http://www.rivorus.com/auto_function ), as a way to get concept-based function template overloading.

For anyone who followed the original thread, I'm happy to say that I'm just a few days away from being able to fully and automatically recognize standard library and user-defined iterator types, though I've had a [not so] surprising amount of compiler crashes and workarounds along the way. For an example of how concept mapping will look (revised from earlier versions as I've made much more progress with implementation), see http://codepaste.net/n47ocu and for an example of how this will be used by Boost.Auto_Function, see http://codepaste.net/1faj21 . At this point I'm not trying to do the equivalent of what would have been C++0x "scoped" concept maps, though at some point I may try to support them, but it would imply calling algorithms through macro invocations (yuck).

During development, I've come to some realizations that I'd like discussion about here, mostly concerning concept map lookup and ODR. Essentially, the way the library works is by assembling a compile-time list of concept maps piecewise throughout a translation unit for a given type or combination of types via a clever use of overload resolution that I talked about briefly in the BOOST_AUTO_FUNCTION call for interest thread. Underneath the hood, there is something that resembles tag-dispatching, however it is all done entirely within decltype.
In short, the way the concept-based overloading works is that the macro used for specifying a function that dispatches based on concept maps generates something along the lines of this. The "magic" shown in comments is something I'm able to already do, as talked about in the other thread:

/////
template< class It >
void foo( It&& it )
{
  typedef decltype(
    function_returning_type_with_a_nested_static_function(
      /* magical way to get a type that inherits from all concept tags */()
    )
  ) fun_holder;

  fun_holder::impl( std::forward< It >( it ) );
}
/////

Now, this is great, but I'm wondering what this means with respect to ODR. If the user is working with the algorithm "correctly", the typedef will resolve to the same type regardless of the translation unit; however, the "path" taken inside the decltype via overload resolution may vary from one translation unit to another when different, orthogonal concept maps are specified for the same type in one translation unit but not in the other (or if concept maps are specified in a different order, i.e. via different #include orders). My question is, does this violate ODR in any meaningful sense? Since technically the typedefs should resolve to the same type in each translation unit, not including user error, is there a problem?

The next question is much more devious and I have a feeling implies a blatant violation of ODR. Consider the following code, assuming "foo" does concept-based dispatching in a way similar to the above:

/////
foo( 5 );

/* a concept map that specifies "int" models a concept that may affect dispatching */

foo( 5 );
/////

With the definition of foo given above, what would effectively happen is that the second call to "foo" will not be able to dispatch based on the new concept map! The reason is that both calls use foo< int >, and since it was already instantiated once for int, that first definition will be used.
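A minimal, compilable sketch of the dispatch pattern described above (every name here is invented purely for illustration; this is not the library's actual machinery, and the tag computation is hardwired rather than "magical"):

```cpp
#include <utility>
#include <cstring>
#include <cassert>

// Illustrative concept tags; in the real library these would be generated
struct forward_tag {};
struct random_access_tag : forward_tag {};

// Holder types with the nested static "impl" that does the real work
struct forward_impl {
    template< class It > static const char* impl( It&& ) { return "forward"; }
};
struct random_access_impl {
    template< class It > static const char* impl( It&& ) { return "random access"; }
};

// This overload set plays the role of
// function_returning_type_with_a_nested_static_function; declarations
// suffice because the calls occur only inside decltype (unevaluated)
forward_impl select( forward_tag );
random_access_impl select( random_access_tag );

template< class It >
const char* foo( It&& it )
{
    // A real implementation would compute the tag from registered concept
    // maps; here we hardwire the most refined tag for the sketch
    typedef decltype( select( random_access_tag() ) ) fun_holder;
    return fun_holder::impl( std::forward< It >( it ) );
}
```

Because random_access_tag derives from forward_tag, overload resolution inside the decltype picks the most refined "map" (overload) available, which is the tag-dispatching flavor the post describes, done entirely in an unevaluated context.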
Note that this problem technically exists even with traditional tag dispatch, though I wonder if there may be some sort of solution that is standard. A hackish workaround I've come up with for this is to change the definition of "foo" to be generated as the following:

/////
// Uses C++0x function template default arguments
template< class It,
          class TagType = /* magical way to get a type that inherits from all concept tags */ >
void foo( It&& it )
{
  typedef decltype(
    function_returning_type_with_a_nested_static_function( TagType() )
  ) fun_holder;

  fun_holder::impl( std::forward< It >( it ) );
}
/////

Going back to the example calling code, "foo" will now correctly dispatch differently if the intermediate concept map should affect concept-based dispatch. This might seem like a perfect solution, but now we are pretty much definitely violating ODR, since a function that calls "foo" in different translation units may very well see different TagTypes even though the overload resolution internal to "foo" would resolve the same. Am I clear on this problem and why I believe my solution only works if you don't consider ODR violations?

The final solution that I believe sidesteps all ODR violations would be to force calls to such algorithms to be done via a macro. The macro would internally do the trick that is currently shown inside the definition of "foo", only it would now do it at the caller's scope. If I decide to eventually try to support scoped concept maps I would be forced down such a route anyway, so my question ends up being: at what point does the library cease being a convenience?
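A compilable reduction of the default-template-argument workaround (names again invented; forward_tag/random_access_tag stand in for the generated concept tags):

```cpp
#include <utility>
#include <cstring>
#include <cassert>

struct forward_tag {};
struct random_access_tag : forward_tag {};

struct forward_impl {
    template< class It > static const char* impl( It&& ) { return "forward"; }
};
struct random_access_impl {
    template< class It > static const char* impl( It&& ) { return "random access"; }
};

// Declarations only; used solely inside decltype
forward_impl select( forward_tag );
random_access_impl select( random_access_tag );

// C++0x default template argument on a function template: the tag is now
// part of foo's template-id, so a call site that sees a different default
// names a *different* specialization instead of reusing the first one
template< class It, class TagType = random_access_tag >
const char* foo( It&& it )
{
    typedef decltype( select( TagType() ) ) fun_holder;
    return fun_holder::impl( std::forward< It >( it ) );
}
```

Note that foo(5) and foo<int, forward_tag>(5) are distinct specializations that dispatch differently. That is precisely the ODR concern raised above: an inline function in a header that spells its call foo(5) would, in translation units computing different default TagTypes, end up with differing definitions.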
Is it worth supporting concepts as accurately as possible, including the above desired behavior, if the calling code has to become:

/////
BOOST_GENERIC_CALL( (foo)( 5 ) );

/* a concept map that specifies "int" models a concept that may affect dispatching */

BOOST_GENERIC_CALL( (foo)( 5 ) );
/////

or, if you want a more practical example:

/////
BOOST_GENERIC_CALL( (advance)( vector_.begin(), 5 ) );
/////

This is a problem that I would prefer to resolve sooner rather than later since I want to reduce code rewrites. Any feedback is greatly appreciated, especially if you have insight into these problems. I'll try to get Boost.Generic in its current, very limited form up on the sandbox and the docs online as soon as possible. -- -Matt Calabrese

On 11/13/2010 5:15 PM, Matt Calabrese wrote:
For those of you who were following the Boost.Auto_Function call for interest, this thread sort of spawned off of that.
Boost.Generic (not a part of boost) is a C++0x library intended to replace BCCL as a way of specifying concepts and concept maps, and, when used in conjunction with Boost.Auto_Function (also not a part of boost, though it's in the sandbox and has online documentation at http://www.rivorus.com/auto_function ) as a way to get concept-based function template overloading. For anyone who followed the original thread, I'm happy to say that I'm just a few days away from being able to fully and automatically recognize standard library and user-defined iterator types, though I've had a [not so] surprising amount of compiler crashes and workarounds along the way. For an example of how concept mapping will look (revised from earlier versions as I've made much more progress with implementation), see http://codepaste.net/n47ocu And for an example of how this will be used by Boost.Auto_Function, see http://codepaste.net/1faj21 . At this point I'm not trying to do the equivalent of what would have been C++0x "scoped" concept maps, though at some point I may try to support them, but it would imply calling algorithms through macro invocations (yuck).
What is the necessary C++0x support needed for Boost.Generic and Boost.Auto_Function? What compilers currently provide that support? I am guessing that to replace BCCL a compiler that largely supports C++0x is needed and that there may not be many right now that do. That is not meant to be discouraging at all, but it needs to be noted in the docs what current compiler(s) can be used.

On Sat, Nov 13, 2010 at 6:23 PM, Edward Diener <eldiener@tropicsoft.com>wrote:
What is the necessary C++0x support needed for Boost.Generic and Boost.Auto_Function? What compilers currently provide that support? I am guessing that to replace BCCL a compiler that largely supports C++0x is needed and that there may not be many right now that do. That is not meant to be discouraging at all, but it needs to be noted in the docs what current compiler(s) can be used.
I thought I had a note in the docs, but I may have forgotten. I'll be sure to add more detailed information about that. Right now I am only testing it with GCC 4.5.1 and it can only handle a subset of features, albeit a subset large enough to not sacrifice much functionality. So GCC 4.5.1 is the only compiler I personally know of that can support it in any way at all. I know I have a list of some of the 0x features required in the support section of the docs, though it is not complete. If whatever compiler you are testing with supports such features, you're more than welcome to give it a try, though I'm a bit pessimistic.

With respect to Boost.Auto_Function, "do" arguments aren't supported yet in GCC since it can't yet fully handle lambdas in the way I'm using them (inside a decltype in a trailing return type). For the same reason, Boost.Generic "for" arguments can't be supported, though given that I don't even have an implementation of Boost.Generic up on the sandbox yet that's sort of a moot point.

If you are looking to use the libraries in practice, I'd hold off on that, though I encourage you to play around with Boost.Auto_Function in GCC if you want. I'm more putting work into these libraries now in the hope that soon more compilers will be able to handle them. The majority of this work is sort of theoretical.

That said, if you play around with Boost.Auto_Function keep in mind that it is in a state that will be changed shortly -- the "if" family of arguments will soon have their predicates be the variadic macro equivalent of Boost.Preprocessor sequences. All of the arguments passed this way are "anded" together. This is particularly useful when some of your predicates are values and others are types, since you don't have to manually combine them with && or mpl::and_. It also makes the to-be-implemented "break if" construct, as mentioned in the future direction section, able to report more fine-grained errors.
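The "anded together" semantics can be pictured with plain enable_if, mixing a value predicate with a type-trait predicate; this is only a model of the described behavior, not the macro's actual implementation:

```cpp
#include <type_traits>
#include <cassert>

// Two predicates -- a trait and a compile-time value -- combined with &&,
// the way the "if" family of arguments is described as combining them
template< class T >
typename std::enable_if<
    std::is_integral< T >::value && ( sizeof( T ) <= sizeof( long long ) ),
    T
>::type
twice( T t ) { return t + t; }
```

Calling twice with a non-integral type simply removes the overload from consideration via SFINAE rather than producing a hard error, which is also what makes fine-grained diagnostics like the planned "break if" construct possible.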
This change also has the side-effect that I can make the parameter IDs "if" and "if not" instead of "if" and "not", for reasons I'm sure no one cares about other than people in the business of making crazy macro hacks, which I'd imagine is a small subset of programmers even in the Boost community.

With this thread I'm mostly hoping that others who are keen on the standard will be able to provide some suggestions concerning the problems mentioned in my first post. Anyone with ideas for features is also more than welcome. I'm just worried that the problems already mentioned may prove to be road blocks, though I'm not tossing my hands up yet. -- -Matt Calabrese

Hi Matt, I apologize if this response took a while, I got buried into a few things that needed my immediate attention and almost missed replying to this email of yours. That said, please see some of my thoughts in-lined below. On Sun, Nov 14, 2010 at 6:15 AM, Matt Calabrese <rivorus@gmail.com> wrote:
For those of you who were following the Boost.Auto_Function call for interest, this thread sort of spawned off of that.
Boost.Generic (not a part of boost) is a C++0x library intended to replace BCCL as a way of specifying concepts and concept maps, and, when used in conjunction with Boost.Auto_Function (also not a part of boost, though it's in the sandbox and has online documentation at http://www.rivorus.com/auto_function ) as a way to get concept-based function template overloading. For anyone who followed the original thread, I'm happy to say that I'm just a few days away from being able to fully and automatically recognize standard library and user-defined iterator types, though I've had a [not so] surprising amount of compiler crashes and workarounds along the way. For an example of how concept mapping will look (revised from earlier versions as I've made much more progress with implementation), see http://codepaste.net/n47ocu And for an example of how this will be used by Boost.Auto_Function, see http://codepaste.net/1faj21 . At this point I'm not trying to do the equivalent of what would have been C++0x "scoped" concept maps, though at some point I may try to support them, but it would imply calling algorithms through macro invocations (yuck).
Interesting! So which compilers are you using to test your implementation? Is this with MSVC or GCC?
During development, I've come to some realizations that I'd like discussion about here, mostly concerning concept map lookup and ODR. Essentially, the way the library works is by assembling a compile-time list of concept maps piecewise throughout a translation unit for a given type or combination of types via a clever use of overload resolution that I talked about briefly in the BOOST_AUTO_FUNCTION call for interest thread. Underneath the hood, there is something that resembles tag-dispatching, however it is all done entirely within decltype. In short, the way the concept-based overloading works is that the macro used for specifying a function that dispatches based on concept maps generates something along the lines of this. The "magic" shown in comments is something I'm able to already do, as talked about in the other thread:
/////
template< class It >
void foo( It&& it )
{
  typedef decltype(
    function_returning_type_with_a_nested_static_function(
      /* magical way to get a type that inherits from all concept tags */()
    )
  ) fun_holder;

  fun_holder::impl( std::forward< It >( it ) );
}
/////
Now, this is great, but I'm wondering what this means with respect to ODR. If the user is working with the algorithm "correctly", the typedef will resolve to the same type regardless of the translation unit, however, the "path" taken when inside of the decltype via overload resolution may vary depending on the translation unit when different, orthogonal concept maps are specified for the same type in one translation unit but not in the other (or if concept maps are specified in a different order I.E. via different #include orders). My question is, does this violate ODR in any meaningful sense? Since technically the typedefs should resolve to the same type in each translation unit, not including user error, is there a problem?
I don't see an obvious problem here in terms of ODR because you are using a template -- which by definition still gets instantiated anyway across multiple translation units, and is not required to have a single definition anyway. The only worrying thing is if the nested function invocation referred to has static, non-extern linkage and thus will be defined in multiple translation units -- some compilers issue a diagnostic on this occurrence, although I forget whether the standard requires that a diagnostic be emitted in cases where you have nested static functions in templates. Maybe those who actually know the relevant sections of the standard can chime in.
The next question is much more devious and I have a feeling implies a blatant violation of ODR.
Consider the following code, assuming "foo" does concept-based dispatching in a way similar to the above:
///// foo( 5 );
/* a concept map that specifies "int" models a concept that may affect dispatching */
foo( 5 ); /////
With the definition of foo given above, what would effectively happen is that the second call to "foo" will not be able to dispatch based on the new concept map! The reason why is because both calls will use foo< int >, and since it was already instantiated once for int, that first definition will be used. Note that this problem technically even exists with traditional tag dispatch, though I wonder if there may be some sort of solution that is standard.
Note that your concept map is computed at "compile-time", right, and should be in a globally accessible scope -- i.e. a template specialization or a template class in a namespace -- right? Unless you're able to create a concept map at runtime or call the foo function outside of a function body, I don't see how adding a new concept_map might be an issue for ODR. Of course, unless you're talking about invocations of foo<...> in multiple translation units where you have one translation unit already defining the concept map and the others having a different set of concept maps. In that case you would get around it by marking foo<...> as an inline function, thus allowing multiple definitions across translation units to be acceptable. I'm not sure if that's what you're looking for or asking, and if I'm not making sense please enlighten me more as to what your concern actually is. ;)
A hackish workaround I've come up with for this is to change the definition of "foo" to be generated as the following:
/////
// Uses C++0x function template default arguments
template< class It,
          class TagType = /* magical way to get a type that inherits from all concept tags */ >
void foo( It&& it )
{
  typedef decltype(
    function_returning_type_with_a_nested_static_function( TagType() )
  ) fun_holder;

  fun_holder::impl( std::forward< It >( it ) );
}
/////
Going back to the example calling code, "foo" will now correctly dispatch differently if the intermediate concept map should affect concept-based dispatch. This might seem like a perfect solution, but now we are pretty much definitely violating ODR, since a function that calls "foo" in different translation units will very possibly see different TagTypes even though the overload resolution internal to "foo" would resolve the same.
Am I clear on this problem and why I believe my solution only works if you don't consider ODR violations?
Like I already mentioned above, if `function_returning_type_with_a_nested_static_function` is a template and the static function is defined inline, I don't think you'll run into ODR violations here. If you're worried about the foo function being defined differently across multiple translation units, then that's fine, because that is the nature of templates AFAIK. ;)
The final solution that I believe sidesteps all ODR violations would be if I force calls to such algorithms to be done via a macro. The macro would internally do the trick that is currently shown inside of the definition of "foo", only it would now do it at the caller's scope. If I decide to eventually try to support scoped concept maps I would be forced down such a route anyway, so my question ends up being at what point does the library cease being a convenience? Is it worth supporting concepts as accurately as possible, including the above desired behavior, if the calling code has to become:
///// BOOST_GENERIC_CALL( (foo)( 5 ) );
/* a concept map that specifies "int" models a concept that may affect dispatching */
BOOST_GENERIC_CALL( (foo)( 5 ) ); /////
or, if you want a more practical example:
///// BOOST_GENERIC_CALL( (advance)( vector_.begin(), 5 ) ); /////
This is a problem that I would prefer to be resolved sooner rather than later since I want to reduce code rewrites. Any feedback is greatly appreciated, especially if you have insight into these problems.
I'll try to get Boost.Generic in its current, very limited form up on the sandbox and the docs online as soon as possible.
I don't like the idea of making function calls macro invocations. Unless I'm missing what you're saying above about ODR violations, I don't see them at all, granted that templates in general are instantiated per translation unit, and that you'll find ODR violations in cases where you have statics defined in multiple TUs that either differ or are exposed through a non-template class. Maybe looking at how Phoenix gets around the ODR issue might help -- I've never found any problems with the way Phoenix actors have static nested functions in templates, and I've never had compilers complain of ODR violations in those cases either. HTH -- Dean Michael Berris deanberris.com

At Tue, 14 Dec 2010 15:35:04 +0800, Dean Michael Berris wrote:
I don't see an obvious problem here in terms of ODR because you are using a template -- which by definition still gets instantiated anyway across multiple translation units,
No, not by definition. EDG still has a link-time instantiation option.
and is not required to have a single definition anyway.
That would be news to me! Do you have a reference? That said, if all the type difference occurs *within* the decltype *and* within the decltype there are no ODR violations, I *think* there is technically no ODR violation. But I suggest asking a real hard-core core-language expert on this one if you care about technical correctness.
The only worrying thing is that if the nested function invocation referred to has a static but non-extern linkage, and thus will be defined in multiple translation units -- some compilers issue a diagnostic on this occurrence although I forget if the standard requires that a diagnostic be emitted in cases where you have nested static functions in templates. Maybe those who actually know enough about the relevant sections of the standard can chime in.
If someone prepares a _minimal_ representative example of the question, I'll be happy to run it by someone who knows for you.
On Sun, Nov 14, 2010 at 6:15 AM, Matt Calabrese <rivorus@gmail.com> wrote:
The next question is much more devious and I have a feeling implies a blatant violation of ODR.
Ditto. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, Dec 14, 2010 at 6:14 PM, Dave Abrahams <dave@boostpro.com> wrote:
At Tue, 14 Dec 2010 15:35:04 +0800, Dean Michael Berris wrote:
I don't see an obvious problem here in terms of ODR because you are using a template -- which by definition still gets instantiated anyway across multiple translation units,
No, not by definition. EDG still has a link-time instantiation option.
Ah, I haven't had much experience with EDG and link-time template instantiation. Interesting, I haven't looked for that in other front-ends.
and is not required to have a single definition anyway.
That would be news to me! Do you have a reference?
Actually, I was referring to having the template have a single definition per translation unit. However, as you've pointed out, EDG has a link-time instantiation option for templates, which is also standards compliant behavior.
That said, if all the type difference occurs *within* the decltype *and* within the decltype there are no ODR violations, I *think* there is technically no ODR violation. But I suggest asking a real hard-core core-language expert on this one if you care about technical correctness.
That was what I was thinking too -- because Matt's relying on the decltype being the "discriminant" or the thing that differentiates it from other instantiations of the template, then it wouldn't technically be an ODR violation. I have to get my hands on the latest draft standard to see whether the ODR rules have changed for C++0x. Maybe those writing the compilers (and those involved in the standard committee and this section that are also in this list) can chime in and correct all of my mis-interpretations. ;)
The only worrying thing is that if the nested function invocation referred to has a static but non-extern linkage, and thus will be defined in multiple translation units -- some compilers issue a diagnostic on this occurrence although I forget if the standard requires that a diagnostic be emitted in cases where you have nested static functions in templates. Maybe those who actually know enough about the relevant sections of the standard can chime in.
If someone prepares a _minimal_ representative example of the question, I'll be happy to run it by someone who knows for you.
Let me see if I can come up with a simple C++03 example to illustrate what I meant above. :) -- Dean Michael Berris deanberris.com

On Tue, Dec 14, 2010 at 5:14 AM, Dave Abrahams <dave@boostpro.com> wrote:
That said, if all the type difference occurs *within* the decltype *and* within the decltype there are no ODR violations, I *think* there is technically no ODR violation. But I suggest asking a real hard-core core-language expert on this one if you care about technical correctness.
This is what I'm currently banking on. I'll post to clc++ and see if anyone can shed some light here.

This thread is a bit outdated with regards to the library now. I'll push the latest Boost.Generic and Boost.Auto_Function to the sandbox in the next couple of days. The progress I've made since this thread was posted has been pretty extensive -- I now have concepts in place for all of the iterator concepts and have completed some slick compile-time asserts which tell you exactly why a particular type does not model a given concept, pointing directly to the concept in question and including, in a static_assert, the exact parameter(s) from the BOOST_GENERIC_CONCEPT invocation that the type doesn't satisfy.

Here's a quick couple of screen shots of things in action -- first, a declaration for a user-defined "random access iterator" and an assert that tells why it is not actually a random access iterator (see the commented out lines and the error in the build log -- the first error points directly to the line where the assert appears): http://img.waffleimages.com/146abf6162ad7becd898ce3e06aae7502963de2d/boost_g... And here is the actual concept definition. Note that while the first error that appears in the build log points to the assert itself, the lines that tell you what went wrong point you to the concept definition: http://img.waffleimages.com/75dc253518146071c4a10b63c155d6c893b7ff68/boost_g...

To see just how closely the above concept resembles the specification of the concept in the standard, check out page 820 in the current working draft. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3126.pdf Notice that Boost.Generic is even able to specify the "givens" (r, a, b, n) via "for", which is a bit trickier than you might expect. The reason they were tricky is that the given names must be dependent, since I need to use them with SFINAE.
This means that they can't simply be data members as they often are with BCCL -- instead, they are implicitly converted to non-type template reference parameters where the type is dependent on another template parameter. Then all expressions and conditions internally appear in specializations of these templates, in a manner similar to Boost.Enable_If usage with type templates. This allows me to check conditions and the validity of expressions without causing a hard compile-time error. Using that information, I then produce a really simple message via static_assert.

Anyway, I've said more than I wanted to right now. I didn't expect this thread to be bumped -- I was hoping to post a new thread about all of this once everything was polished and up in the sandbox.
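The general technique of checking an expression's validity without triggering a hard error can be sketched with a standard decltype-SFINAE detector. The library's actual encoding via dependent non-type reference template parameters is more involved; this is just the underlying idea:

```cpp
#include <type_traits>
#include <utility>
#include <cassert>

// Detect whether "a + a" is a valid expression for T, without a hard error.
// If the decltype substitution fails, the ellipsis overload is chosen instead.
template< class T >
class has_plus
{
    template< class U >
    static std::true_type test(
        decltype( std::declval< U >() + std::declval< U >() )* );

    template< class >
    static std::false_type test( ... );

public:
    static const bool value = decltype( test< T >( nullptr ) )::value;
};

struct no_plus {};  // a type that does not support operator+
```

The detected boolean can then feed a static_assert with a readable message, much like the fine-grained concept-modeling diagnostics described above.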

On Tue, Dec 14, 2010 at 8:11 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Tue, Dec 14, 2010 at 5:14 AM, Dave Abrahams <dave@boostpro.com> wrote:
Here's a quick couple of screen shots of things in action -- first, a declaration for a user-defined "random access iterator" and an assert that tells why it is not actually a random access iterator (see the commented out lines and the error in the build log -- the first error points directly to the line where the assert appears):
http://img.waffleimages.com/146abf6162ad7becd898ce3e06aae7502963de2d/boost_g...
And here is the actual concept definition. Note that while the first error that appears in the build log points to the assert itself, the lines that tell you what went wrong point you to the concept definition:
http://img.waffleimages.com/75dc253518146071c4a10b63c155d6c893b7ff68/boost_g...
To see just how closely the above concept resembles the specification of the concept in the standard, check out page 820 in the current working draft.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3126.pdf
I love it. Thanks for working on this Matt! [snip]
Anyway, I've said more than I wanted to right now. I didn't expect this thread to be bumped -- I was hoping to post a new thread about all of this once everything was polished and up in the sandbox.
Looking forward to that. Have a good one! -- Dean Michael Berris deanberris.com

On Tue, Dec 14, 2010 at 7:16 AM, Dean Michael Berris <mikhailberis@gmail.com> wrote:
I love it. Thanks for working on this Matt!
Thanks. I'm embarrassed to admit just how much time I've been putting into this, though I think it's looking to be worth it. -- -Matt Calabrese

On Tue, Dec 14, 2010 at 7:11 AM, Matt Calabrese <rivorus@gmail.com> wrote:
Anyway, I've said more than I wanted to right now. I didn't expect this thread to be bumped -- I was hoping to post a new thread about all of this once everything was polished and up in the sandbox.
Actually, since this thread has started going again, I've hit some complicated issues regarding concept maps that I hope someone could help out with. In particular, there is one very troubling situation I've encountered.

Imagine three concepts: a "base" concept, a "left" concept which is a refinement of "base", and a "right" concept which is also a refinement of "base". A programmer creates a type called "foo" and wishes to make a concept map for "left" and also a concept map for "right". Because "left" is a refinement of "base", making a concept map for "left" implicitly makes a concept map for "base". Similarly, since "right" is a refinement of "base", the same thing happens.

So the issue is: if someone now writes a function that requires a "base", which concept map is to be used? Should this be a compile-time error? If so, how could this possibly be safely resolved (does anyone know how this was handled in the concept proposals)? Assuming it should be a compile-time error, would it be enough to just require a third concept map to be written, explicitly for "base", or perhaps a way to specify explicitly that "right" should be used? If so, what happens when a function takes a "left" and that function references the parts of the concept map that are used to satisfy the requirements of "base"? I'd imagine that it would use the "left" concept map, but that means that a different concept map for "base" would be used depending on whether the function took a "base", a "left", or a "right", which seems very wrong.

Am I expressing this problem correctly? Does anyone see an obvious resolution that I've missed? -- -Matt Calabrese

AMDG On 12/14/2010 4:41 AM, Matt Calabrese wrote:
Actually, since this thread has started going again, I've hit some complicated issues regarding concept maps that I'd hope someone could help out with. In particular, there is one very troubling situation I've encountered. Imagine three concept types: a "base" concept, a "left" concept which is a refinement of "base", and a "right" concept, which is also a refinement of "base". A programmer creates a type called "foo" and wishes to make a concept map for "left" and also a concept map for "right". Because "left" is a refinement of "base", making a concept map for "left" implicitly makes a concept map for "base". Similarly, since "right" is a refinement of "base", the same thing happens. So the issue is, if someone now writes a function that requires a "base", which concept map is to be used? Should this be a compile-time error? If so, how could this possibly be safely resolved (does anyone know how this was handled in concept proposals)?
I believe that it wasn't an issue for the C++0x concept proposals, because the concept maps for both left and right would depend on being able to find a concept map for base. In Christ, Steven Watanabe

On Tue, Dec 14, 2010 at 10:33 AM, Steven Watanabe <watanabesj@gmail.com>wrote:
I believe that it wasn't an issue for the C++0x concept proposals, because the concept maps for both left and right would depend on being able to find a concept map for base.
Ah thanks, Steven, I misremembered apparently. Just to get this straight, what you're saying is that if someone were to make a concept map for a random access iterator they'd have to first explicitly make concept maps for iterator, input iterator, forward iterator, and bidirectional iterator? That should solve the problem, albeit somewhat tedious for people creating models of concepts. That's also much easier to implement than my current approach of having the concept map for the refinement act as the concept map for the parent concept, so I'll probably follow that route. I should just look at the concept proposals from now on for reference. No sense in putting in a lot of work to just come to the same conclusions with likely similar rationale. -- -Matt Calabrese

At Tue, 14 Dec 2010 12:02:59 -0500, Matt Calabrese wrote:
On Tue, Dec 14, 2010 at 10:33 AM, Steven Watanabe <watanabesj@gmail.com>wrote:
I believe that it wasn't an issue for the C++0x concept proposals, because the concept maps for both left and right would depend on being able to find a concept map for base.
Ah thanks, Steven, I misremembered apparently. Just to get this straight, what you're saying is that if someone were to make a concept map for a random access iterator they'd have to first explicitly make concept maps for iterator, input iterator, forward iterator, and bidirectional iterator?
I don't think so. Take a look at https://svn.osl.iu.edu/svn/hlo/trunk/gcc/libstdc++-v3/include/bits/iterator_... for reference; search for "X*". You'll see there's only one concept map for X*, to RandomAccessIterator.
That should solve the problem, albeit somewhat tedious for people creating models of concepts. That's also much easier to implement than my current approach of having the concept map for the refinement act as the concept map for the parent concept, so I'll probably follow that route.
I should just look at the concept proposals from now on for reference. No sense in putting in a lot of work to just come to the same conclusions with likely similar rationale.
Good thought. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Alright, I'm updating the way concept maps are defined and I'd like some feedback here before I go through the task of creating yet another named-parameter macro. In designing Generic, I've gone in a slightly different direction from what would have been C++0x concepts and concept maps. In particular, instead of having "auto concepts", all concepts are by default explicit; however, a programmer can create one or more "auto concept maps" for a given concept. These "auto concept maps" act as concept maps with optional conditions (SFINAE and metafunction conditions).

For instance, there has been concern in the C++ community about making iterator concepts, not to mention many other concepts, auto. I side with the explicit crowd and agree that most concepts should be explicit, including iterator concepts; however, given certain conditions, I believe that it is acceptable for types that pass some kind of basic compile-time check to be automatically mapped if the superficial requirements of the concept are met. If it's not clear what I mean, here is an example:

//////////
BOOST_GENERIC_CONCEPT_MAP
( ( template ( (class) T ) )
, (ForwardIterator)( T )
, ( if ( check_if_T_uses_the_boost_forward_iterator_helper_base< T > ) )
)
( // Empty implementation
)
//////////

In the above code we have an auto concept map for ForwardIterator that only applies to types which inherit from forward_iterator_helper of Boost.Operators. In addition to the explicitly specified "if" check, it will also implicitly check via SFINAE all of the syntactic concept requirements. The idea is that if a class uses the iterator helper base then it is likely trying to be a forward iterator.
While this is not entirely true 100% of the time (a quick example is another intermediate base that uses this helper base), it should be safe and useful in practice and is IMO a better alternative to fully auto concepts, and it should alleviate the need for programmers creating iterators to have to manually make empty concept maps in many situations.

First, I'd like to get some feedback on this approach before I take the time to fully implement it. Does the idea of "auto concept maps" as opposed to "auto concepts" make sense? What problems may arise from this?

One thing that may initially appear to be a problem is that you can have two auto concept maps with overlapping conditions, in which case the auto concept map to use would be ambiguous. This problem can be alleviated by explicitly specifying a non-auto concept map, which would be preferred over the auto maps. The rules for which concept maps get picked are the same as they are for matching template specializations that have (optional) Enable_If-style conditions, as that's how everything is implemented underneath the hood, so the method of disambiguation should be fairly familiar to C++ programmers.

If anyone sees any problems with this approach, please let me know, as it will save me a lot of effort if there are any as-of-yet unforeseen show-stopping problems. -- -Matt Calabrese
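The "auto concept map with a condition" idea can be sketched with an enable_if-gated partial specialization, which matches how the library is described as working underneath the hood. This is a minimal illustration, not Boost.Generic's actual generated code; the helper base and trait names are invented:

```cpp
#include <type_traits>

struct forward_iterator_helper {};  // stand-in for the Boost.Operators base

// Primary template: no auto concept map applies to T.
template<class T, class Enable = void>
struct forward_iterator_map { enum { is_modeled = 0 }; };

// The conditional "auto" map: applies only when T derives from the helper.
// A real implementation would additionally SFINAE-check all of the
// syntactic requirements of ForwardIterator.
template<class T>
struct forward_iterator_map<
    T,
    typename std::enable_if<
        std::is_base_of<forward_iterator_helper, T>::value>::type>
{ enum { is_modeled = 1 }; };

struct my_iterator : forward_iterator_helper {};  // mapped automatically
```

A user who derives from the helper base gets the map "for free", while an unrelated type falls through to the primary template, exactly the middle ground between fully auto and fully explicit concepts described above.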

On Wed, Dec 15, 2010 at 2:23 PM, Matt Calabrese <rivorus@gmail.com> wrote:
Alright, I'm updating the way concept maps are defined and I'd like some feedback here before I go through the task of creating yet another named-parameter macro. In designing Generic, I've gone in a slightly different direction from what would have been C++0x concepts and concept maps. In particular, instead of having "auto concepts", all concepts are by default explicit, however, a programmer can create one or more "auto concept maps" for a given concept. These "auto concept maps" act as concept maps with optional conditions (SFINAE and metafunction conditions).
Bump hoping that someone has feedback, positive or negative, about this approach.

Also, for a while now I've been thinking about a future addition to the library that's way far off, but I'd like to hear people's opinion on it now, or at least have people thinking about it for the long while before I ever get to possibly implementing it. The idea is const-qualification for concepts, which I'll try to briefly explain.

First, the motivating use-case is the "circle-ellipse" problem ( http://en.wikipedia.org/wiki/Circle-ellipse_problem ). I'm sure people are familiar with it already, but in short, a circle is a kind of ellipse, however, if an ellipse concept requires mutating operations for individual axes, then a circle can't really be considered a refinement of ellipse. Instead, the ideal solution is likely to have an Ellipse concept that doesn't have any mutating operations, and a MutableEllipse that is a refinement of Ellipse but that has mutating operations. A Circle can then be considered a refinement of Ellipse without a problem. My idea is to eventually directly support this idea via a form of const-qualification for concepts.
Take the following example, in pseudo-code:

//////////
concept Ellipse< T > : Regular< T >
{
  const: // used to specify that the following are required even for const Ellipse
    typename value_type;
    value_type axis_0( T const& );
    value_type axis_1( T const& );
  mutable: // The following are only required for non-const Ellipse
    void axis_0( T&, value_type ); // set the axis length
    void axis_1( T&, value_type ); // set the axis length
}

// The above code effectively creates 2 concepts:
// a "const Ellipse" concept that can view properties of the ellipse
// an "Ellipse" concept that refines "const Ellipse" and that adds mutators

// Now, the implementation of the Circle concept:
concept Circle< T >
  : const Ellipse< T > // A circle meets the requirements of const Ellipse
  , Regular< T >       // And it's also a non-const Regular type
{
  const:
    value_type radius( T const& );
  mutable:
    void radius( T&, value_type );
}
//////////

There are some subtleties, but I think the above code should hopefully make some sense simply from reading it. The overall idea is that a Circle meets the requirements of const Ellipse and is also a Regular type. This should be a concise way to represent that relationship. To be a little more clear, here is what I'd imagine the above code would effectively generate:

//////////
concept ConstEllipse< T > : ConstRegular< T >
{
  typename value_type;
  value_type axis_0( T const& );
  value_type axis_1( T const& );
}

concept Ellipse< T > : ConstEllipse< T >, Regular< T >
{
  void axis_0( T&, value_type ); // set the axis length
  void axis_1( T&, value_type ); // set the axis length
}

concept ConstCircle< T > : ConstEllipse< T >, ConstRegular< T >
{
  value_type radius( T const& );
}

concept Circle< T > : ConstCircle< T >, Regular< T >
{
  void radius( T&, value_type );
}
//////////

Notice that ConstEllipse refines ConstRegular, whereas Ellipse refines both ConstEllipse and Regular. In other words, the "constness" of a concept propagates to the concepts that it refines.
I haven't shown the implementation of Regular, but it would be what you'd expect from such a concept, with, e.g., assignment specified as mutable, similar to the mutable specifications for Ellipse. Specifically, this means that a const Regular< T > wouldn't require assignment, which is why Circle has to explicitly refine Regular< T >. In the end, I think this should be a concise way to handle situations equivalent to the circle-ellipse problem without programmers having to manually create separate mutable and immutable concepts. Have I made this point clear? If anyone has anything to say about this idea please let me know, especially anything negative. Again, if I ever pursue this idea at all, it won't be for a very long time anyway, but I'd like people to think about it now. -- -Matt Calabrese

On Wed, Dec 15, 2010 at 9:29 PM, Matt Calabrese <rivorus@gmail.com> wrote:
I think this should be a concise way to handle situations equivalent to the circle-ellipse problem without programmers having to manually create separate mutable and immutable concepts. Have I made this point clear?
Perfectly
If anyone has anything to say about this idea please let me know, especially anything negative. Again, if I ever pursue this idea at all, it won't be for a very long time anyway, but I'd like people to think about it now.
I just wonder whether this sort of thing comes up often enough to warrant a feature, or is it enough to simply define separate Ellipse and MutableEllipse concepts? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Thu, Dec 16, 2010 at 9:20 AM, Dave Abrahams <dave@boostpro.com> wrote:
I just wonder whether this sort of thing comes up often enough to warrant a feature, or is it enough to simply define separate Ellipse and MutableEllipse concepts?
I wonder the same thing but it's hard to be able to answer that question. I can at the very least, though, say that I personally have run into multiple situations analogous to this back when I was still in OOP land and I did, in fact, just create mutable and immutable interfaces related by inheritance. In those cases, a solution like this would have been a bit more concise. The fact that there is a fairly well known name for this leads me to believe that it's not incredibly uncommon, but I really don't have any way to back that up. -- -Matt Calabrese
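The OOP workaround mentioned above, separate immutable and mutable interfaces related by inheritance, can be sketched concretely. This is a hypothetical illustration with invented names, shown only to make the const-concept analogy tangible:

```cpp
// Immutable interface: anything any ellipse, circles included, can answer.
struct const_ellipse {
    virtual double axis_0() const = 0;
    virtual double axis_1() const = 0;
    virtual ~const_ellipse() {}
};

// Mutable refinement: per-axis mutators live only here, so a circle
// never has to pretend it supports them.
struct mutable_ellipse : const_ellipse {
    virtual void set_axis_0(double) = 0;
    virtual void set_axis_1(double) = 0;
};

// A circle implements only the immutable interface...
struct circle : const_ellipse {
    explicit circle(double r) : r_(r) {}
    double axis_0() const { return r_; }   // both axes equal the radius
    double axis_1() const { return r_; }
    void set_radius(double r) { r_ = r; }  // ...its own mutation is fine
private:
    double r_;
};
```

The const-qualified concept proposal would generate the analogous refinement lattice (ConstEllipse / Ellipse / ConstCircle / Circle) automatically instead of requiring this pair of hand-written interfaces.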

On Wed, Dec 15, 2010 at 2:23 PM, Matt Calabrese <rivorus@gmail.com> wrote:
Alright, I'm updating the way concept maps are defined and I'd like some feedback here before I go through the task of creating yet another named-parameter macro. In designing Generic, I've gone in a slightly different direction from what would have been C++0x concepts and concept maps. In particular, instead of having "auto concepts", all concepts are by default explicit, however, a programmer can create one or more "auto concept maps" for a given concept. These "auto concept maps" act as concept maps with optional conditions (SFINAE and metafunction conditions).
That sort of sounds like concept map templates with SFINAE "tacked on" (just the same way we tacked on SFINAE to partial template specialization—though most people don't know that usage of enable_if). Would you support the analogue of concept map templates that can be partially-ordered (sounds from what you write below like you would), or would we be writing SFINAE conditions like "if (InputIterator<X> && !BidirectionalIterator<X>)" to avoid ambiguities?
For instance, there has been concern in the C++ community about making iterator concepts, not to mention many other concepts, auto. I side with the explicit crowd and agree that most concepts should be be explicit, including iterator concepts, however, given certain conditions, I believe that it is acceptable for types that pass some kind of basic compile-time check to be automatically mapped if the superficial requirements of the concept are met.
I think it's particularly important for foundational concepts like EqualityComparable.
If it's not clear what I mean, here is an example:
//////////
BOOST_GENERIC_CONCEPT_MAP
( ( template ( (class) T ) )
, (ForwardIterator)( T )
, ( if ( check_if_T_uses_the_boost_forward_iterator_helper_base< T > ) )
)
( // Empty implementation
)
//////////
In the above code we have an auto concept map for ForwardIterator that only applies to types which inherit from forward_iterator_helper of Boost.Operators. In addition to the explicitly specified "if" check, it will also implicitly check via SFINAE all of the syntactic concept requirements. The idea is that if a class uses the iterator helper base then it is likely trying to be a forward iterator. While this is not entirely true 100% of the time (a quick example is another intermediate base that uses this helper base), it should be safe and useful in practice and is IMO a better alternative to fully auto concepts, and it should alleviate the need for programmers creating iterators to have to manually make empty concept maps in many situations.
First, I'd like to get some feedback on this approach before I take the time to fully implement it. Does the idea of "auto concept maps" as opposed to "auto concepts" make sense?
This sounds like the exact opposite of what Bjarne tried to do with his last minute proposal of "explicit concept maps instead of non-auto concepts." IMO, you're both going too far in one direction, so I'll give you the analogous answer to the one I gave him: If you can do full auto concepts, you should, because people will be able to, and will, use your mechanism to create the equivalent thing anyhow—it'll just be uglier.
What problems may arise from this? One thing that may initially appear to be a problem is that you can have two auto concept maps with overlapping conditions, in which case the auto concept map to use would be ambiguous. This problem can be alleviated by explicitly specifying a non-auto concept map which would be preferred over the auto maps. The rules for which concept maps get picked are the same as they are for matching template specializations that have (optional) Enable_If style conditions, as that's how everything is implemented underneath the hood, so the method of disambiguation should be fairly familiar to C++ programmers.
Sounds perfect.
If anyone sees any problems with this approach, please let me know, as it will save me a lot of effort if there are any as-of-yet unforeseen show-stopping problems.
The kinds of things you're proposing won't kill the feature; it's all a question of expressiveness and usability at this level. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Thu, Dec 16, 2010 at 9:18 AM, Dave Abrahams <dave@boostpro.com> wrote:
That sort of sounds like concept map templates with SFINAE "tacked on" (just the same way we tacked on SFINAE to partial template specialization—though most people don't know that usage of enable_if).
That's exactly how it works. Everything is done with specializations and there is also an implicit "Enabler" for convenience, which is why the macro can optionally use those "if" and "try" conditions (try just checks expression validity).
Would you support the analogue of concept map templates that can be partially-ordered (sounds from what you write below like you would), or would we be writing SFINAE conditions like "if (InputIterator<X> && !BidirectionalIterator<X>)" to avoid ambiguities?
You can do everything that you'd expect from C++03 template specializations that also happen to implicitly include a "SFINAE enabler". So, for instance, you can have a concept map for std::vector< bool, Alloc > that will be picked as a better match over one for std::vector< T, Alloc >, but you cannot do the equivalent of concept overloading, which I think you may be trying to get at with your example. If it came down to it, that could be simulated in a manner similar to how "switch" works with Auto_Function, but that'd be really pushing things to their limits and I'm not sure it's worth trying to support, especially not at this stage. -- -Matt Calabrese
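The vector example above, a map for std::vector< bool, Alloc > being preferred over one for std::vector< T, Alloc >, is ordinary partial-specialization ordering. A minimal sketch (the `rank` member is invented purely to make the chosen specialization observable):

```cpp
#include <vector>

// Primary template acts as the fallback "concept map".
template<class T> struct container_map { enum { rank = 0 }; };

// Partial specialization for any std::vector...
template<class T, class A>
struct container_map<std::vector<T, A> > { enum { rank = 1 }; };

// ...and a more-specialized one for std::vector<bool> in particular,
// which partial ordering prefers when both match.
template<class A>
struct container_map<std::vector<bool, A> > { enum { rank = 2 }; };
```

Because the library layers an implicit SFINAE enabler on top of exactly this mechanism, disambiguation follows the partial-ordering rules C++ programmers already know, rather than requiring concept-overloading conditions like "InputIterator<X> && !BidirectionalIterator<X>".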

At Tue, 14 Dec 2010 07:41:17 -0500, Matt Calabrese wrote:
On Tue, Dec 14, 2010 at 7:11 AM, Matt Calabrese <rivorus@gmail.com> wrote:
Anyway, I've said more than I wanted to right now. I didn't expect this thread to be bumped -- I was hoping to post a new thread about all of this once everything was polished and up in the sandbox.
Actually, since this thread has started going again, I've hit some complicated issues regarding concept maps that I'd hope someone could help out with. In particular, there is one very troubling situation I've encountered. Imagine three concept types: a "base" concept,
concrete code helps:

concept Base<typename T> {
  T operator*(T, T);
};
a "left" concept which is a refinement of "base", and a "right" concept, which is also a refinement of "base".
concept Left<typename T> : Base<T> {
  T operator-(T, T);
};

concept Right<typename T> : Base<T> {
  T operator+(T, T);
};
A programmer creates a type called "foo" and wishes to make a concept map for "left" and also a concept map for "right".
struct foo {};

// This one would be an error, since foo doesn't have operator*
concept_map Left<foo> { foo operator-(foo,foo) { return foo(); } };

concept_map Right<foo> { foo operator+(foo,foo) { return foo(); } };

So how are you going to supply operator*? The answer to your question depends on that.
Because "left" is a refinement of "base", making a concept map for "left" implicitly makes a concept map for "base". Similarly, since "right" is a refinement of "base", the same thing happens. So the issue is, if someone now writes a function that requires a "base", which concept map is to be used?
I don't see a problem. If you answer the question by giving foo an operator* of its own, the two implicitly-generated concept maps are the same concept map.
Should this be a compile-time error? If so, how could this possibly be safely resolved (does anyone know how this was handled in concept proposals)?
I need more information. Please try to write out a complete example (using ConceptC++ syntax, please).
Assuming it should be a compile-time error, would it be enough to just require a third concept map to be written, explicitly for base, or perhaps a way to specify explicitly that "right" should be used? If so, what happens when a function takes a "left" and that function references the parts of the concept map that are used to satisfy the requirements of "base"? I'd imagine that it would use the "left" concept map, but that means that a different concept map for "base" would be used depending on whether the function took a "base" or a "left" or a "right", which seems to be very wrong. Am I expressing this problem correctly? Does anyone see an obvious resolution that I've missed?
too many questions; too little code ;-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wed, Dec 15, 2010 at 2:19 PM, Dave Abrahams <dave@boostpro.com> wrote:
I don't see a problem. If you answer the question by giving foo an operator* of its own, the two implicitly-generated concept maps are the same concept map.
The problem is if the base concept specifies, for instance, an associated type requirement, possibly not even with a default. The concept map for "left" will have to specify this associated type and so will the concept map for "right". So, what happens when trying to access the concept map for base? Do you use the associated type specified by "left" or the one specified by "right"? This is especially problematic if the associated type is specified as something different in the concept map for "left" than from the concept map for "right". Marcin Zalewski answered this in a more recent reply in this thread -- apparently the last draft with concepts specifies that the two concept maps are checked for compatibility. There is an error if there is conflict. I should probably be able to do something similar in Generic, though it may end up being complicated (and complicated is a relative term with respect to the library already). With the implementation I'm imagining I can already see an ODR issue that would be tough, but not impossible, to account for.
I need more information. Please try to write out a complete example (using ConceptC++ syntax, please).
Marcin's recent reply should demonstrate what I meant. -- -Matt Calabrese
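The n2914-style compatibility check discussed here, where two refining concept maps that both supply an associated type of the parent must agree, can be sketched with trait specializations and a static_assert. All names are invented; this only illustrates the rule, not Boost.Generic's implementation:

```cpp
#include <type_traits>

struct foo {};

// Both refining maps happen to agree on Base's associated type here;
// change either typedef and the static_assert below fires.
template<class T> struct left_map;
template<> struct left_map<foo>  { typedef int base_value_type; };

template<class T> struct right_map;
template<> struct right_map<foo> { typedef int base_value_type; };

// The implicitly generated map for Base checks its candidates for
// agreement before exposing the associated type.
template<class T> struct base_map {
    static_assert(
        std::is_same<typename left_map<T>::base_value_type,
                     typename right_map<T>::base_value_type>::value,
        "refining concept maps disagree on Base's associated type");
    typedef typename left_map<T>::base_value_type base_value_type;
};
```

When the two maps agree, there is a single well-defined answer for Base; when they conflict, the error surfaces at the point the Base map is first needed, mirroring the "compatibility is checked when possible" rule Marcin cites from n2914.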

On Wed, Dec 15, 2010 at 4:06 PM, Matt Calabrese <rivorus@gmail.com> wrote:
Marcin Zalewski answered this in a more recent reply in this thread -- apparently the last draft with concepts specifies that the two concept maps are checked for compatibility. There is an error if there is conflict. I should probably be able to do something similar in Generic, though it may end up being complicated (and complicated is a relative term with respect to the library already). With the implementation I'm imagining I can already see an ODR issue that would be tough, but not impossible, to account for.
Actually, I'm going to think about this for a while. Even though Steven's answer wasn't accurate, at least with respect to the last draft with concepts, I think his answer is much more feasible to implement and should be just as capable, even though it unfortunately requires those who are writing concept maps to split up the implementation among the refinements. Given that concept map definitions are often empty anyway, I don't think this should be too horrible. Which direction I go can mean the difference between having things implemented in a couple of weeks vs. months, so I'll probably end up using Steven's approach for now, but I'll try to leave open the possibility for more true-to-C++0x behavior in the future, since I think that would ultimately lead to the most concise code for programmers creating concept maps. If anyone notices a problem with Steven's solution please bring it to my attention as soon as possible, since I'm likely going to devote a bit of time to implementing it. -- -Matt Calabrese

At Wed, 15 Dec 2010 16:52:10 -0500, Matt Calabrese wrote:
On Wed, Dec 15, 2010 at 4:06 PM, Matt Calabrese <rivorus@gmail.com> wrote:
Marcin Zalewski answered this in a more recent reply in this thread -- apparently the last draft with concepts specifies that the two concept maps are checked for compatibility. There is an error if there is conflict. I should probably be able to do something similar in Generic, though it may end up being complicated (and complicated is a relative term with respect to the library already). With the implementation I'm imagining I can already see an ODR issue that would be tough, but not impossible, to account for.
Actually, I'm going to think about this for a while. Even though Steven's answer wasn't accurate, at least with respect to the last draft with concepts,
There was never a draft that required maps for all the parent concepts.
I think his answer is much more feasible to implement
Well, that's an important criterion.
and should be just as capable, even though it unfortunately requires those who are writing concept maps to split up the implementation among the refinements. Given that concept map definitions are often empty anyway, I don't think this should be too horrible. Which direction I go can mean the difference between having things implemented in a couple of weeks vs months, so I'll probably end up using Steven's approach for now, but I'll try to leave open the possibility for more true-to-c++0x behavior in the future, since I think that would ultimately lead to the most concise code for programmers creating concept maps.
If anyone notices a problem with Steven's solution please bring it to my attention as soon as possible since I'm likely going to devote a bit of time to implementing it.
I think from a usability perspective it's problematic, and could impair adoption, but I see nothing wrong with using it as a stepping-stone. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wed, Dec 15, 2010 at 6:48 PM, Dave Abrahams <dave@boostpro.com> wrote:
I think from a usability perspective it's problematic, and could impair adoption, but I see nothing wrong with using it as a stepping-stone.
That was what I thought at first as well, but the more I think about it the less this actually seems to be a problem. First, because of the potential restricted "auto concept maps" I mentioned in an earlier post that would be able to create many of these concept maps automatically (I.E. going back to my example, if someone makes a random access iterator type and uses the Boost.Operators helper base, all of the concept maps would be generated automatically anyway), and second, since many concept maps are empty, it would be trivial to create a macro that combines a bunch of empty concept maps together. For instance, I could very easily make a utility macro called "BOOST_GENERIC_CONCEPT_MAP_SEQ" that could be used as such:

//////////
// The creator of RandomAccessIterator (in this case me) would provide this
// convenience macro that internally uses BOOST_GENERIC_CONCEPT_MAP_SEQ
#define BOOST_GENERIC_RANDOM_ACCESS_ITERATOR_MAP( ... )\
BOOST_GENERIC_CONCEPT_MAP_SEQ\
( (Iterator)(InputIterator)(ForwardIterator)(BidirectionalIterator)(RandomAccessIterator)\
, __VA_ARGS__\
)
//////////

Then, someone who wants to create a RandomAccessIterator just does something along the lines of:

//////////
BOOST_GENERIC_RANDOM_ACCESS_ITERATOR_MAP
( ( template ( (class) Some, (class) Template, (class) Parameters ) )
, ( (user_defined_iterator_template< Some, Template, Parameters >) )
)
//////////

The above would automatically create all of the empty concept maps necessary for RandomAccessIterator and isn't really any more complicated for the creator of the iterator type. Still, yes, ideally this would just be a stepping stone, but it's certainly usable. -- -Matt Calabrese
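The "one macro stamps out a whole refinement chain of empty maps" idea can be shown with plain preprocessor macros, no Boost.PP required. This toy version uses invented names and empty trait specializations in place of real concept maps:

```cpp
// Forward declarations standing in for the individual concept maps.
template<class T> struct iterator_map;
template<class T> struct input_iterator_map;
template<class T> struct forward_iterator_map;

// One helper stamps out a single empty map...
#define MAKE_EMPTY_MAP(map_name, type) template<> struct map_name<type> {};

// ...and one convenience macro expands the whole chain at once,
// analogous to the proposed BOOST_GENERIC_CONCEPT_MAP_SEQ.
#define MAKE_FORWARD_ITERATOR_MAPS(type)     \
    MAKE_EMPTY_MAP(iterator_map, type)       \
    MAKE_EMPTY_MAP(input_iterator_map, type) \
    MAKE_EMPTY_MAP(forward_iterator_map, type)

struct my_iterator {};
MAKE_FORWARD_ITERATOR_MAPS(my_iterator)  // three empty maps in one line
```

The iterator author writes one invocation instead of one map per refinement, which is the usability argument being made above.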

On Tue, Dec 14, 2010 at 07:41, Matt Calabrese <rivorus@gmail.com> wrote:
On Tue, Dec 14, 2010 at 7:11 AM, Matt Calabrese <rivorus@gmail.com> wrote:
Anyway, I've said more than I wanted to right now. I didn't expect this thread to be bumped -- I was hoping to post a new thread about all of this once everything was polished and up in the sandbox.
Actually, since this thread has started going again, I've hit some complicated issues regarding concept maps that I'd hope someone could help out with. In particular, there is one very troubling situation I've encountered. Imagine three concept types: a "base" concept, a "left" concept which is a refinement of "base", and a "right" concept, which is also a refinement of "base". A programmer creates a type called "foo" and wishes to make a concept map for "left" and also a concept map for "right". Because "left" is a refinement of "base", making a concept map for "left" implicitly makes a concept map for "base". Similarly, since "right" is a refinement of "base", the same thing happens. So the issue is, if someone now writes a function that requires a "base", which concept map is to be used? Should this be a compile-time error? If so, how could this possibly be safely resolved (does anyone know how this was handled in concept proposals)?
Assuming it should be a compile-time error, would it be enough to just require a third concept map to be written, explicitly for base, or perhaps a way to specify explicitly that "right" should be used? If so, what happens when a function takes a "left" and that function references the parts of the concept map that are used to satisfy the requirements of "base"? I'd imagine that it would use the "left" concept map, but that means that a different concept map for "base" would be used depending on whether the function took a "base" or a "left" or a "right", which seems to be very wrong. Am I expressing this problem correctly? Does anyone see an obvious resolution that I've missed?
Matt, I think that n2914 answers your questions (modulo any possible issues that can or have been found in the text). Here it is: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2914.pdf This is the last draft that included the concepts proposal, I believe. Specifically, 14.10.3.2 p2 says that a concept map for a refined concept is only implicitly created if it has not been found by concept map lookup. Therefore, there will always be only one concept map for base. Also relevant is p5 in the same section, which gives compatibility rules for "satisfiers" in concept maps for refining concepts. For example, you can have:

concept A { typename T; }
concept B : A {}
concept C : A {}

concept_map B { typedef int T; } // concept map for A implicitly created
concept_map C { typedef int T; } // concept map for A found by lookup, compatibility checked

There is more than that, but the basic idea is that compatibility is checked when possible, and decided negatively if it is deemed to be undecidable or too difficult to decide.
-- -Matt Calabrese

On 12/14/2010 7:11 AM, Matt Calabrese wrote:
On Tue, Dec 14, 2010 at 5:14 AM, Dave Abrahams<dave@boostpro.com> wrote:
That said, if all the type difference occurs *within* the decltype *and* within the decltype there are no ODR violations, I *think* there is technically no ODR violation. But I suggest asking a real hard-core core-language expert on this one if you care about technical correctness.
This is what I'm currently banking on. I'll post to clc++ and see if anyone can shed some light here.
This thread is a bit outdated with regards to the library now. I'll push the latest Boost.Generic and Boost.Auto_Function to the sandbox in the next couple of days.
I see Auto_Function in the sandbox but I do not see Generic. Is it there under a different name?

On Tue, Dec 14, 2010 at 8:24 AM, Edward Diener <eldiener@tropicsoft.com>wrote:
I see Auto_Function in the sandbox but I do not see Generic. Is it there
under a different name?
It's not in the sandbox yet, and that's also a fairly old version of Auto_Function ("do" has been removed as I've learned that it requires nonstandard use of lambdas in an unevaluated context, and the basic auto function macro has been changed accordingly). When I started Generic, Auto_Function changed drastically to depend on it (both libraries use "named macro parameters" and "variadic sequences" so there is a decent amount of code shared between them because of this). Everything has been in a very volatile state since then and I'm only now starting to get all of my tests passing again, but I'll try to get everything up in the next few days. Keep in mind, though, that Generic is far from stable and likely will be that way for a while. -- -Matt Calabrese

At Tue, 14 Dec 2010 07:11:51 -0500, Matt Calabrese wrote:
This thread is a bit outdated with regards to the library now. I'll push the latest Boost.Generic and Boost.Auto_Function to the sandbox in the next couple of days. The progress I've made since this thread was posted has been pretty extensive -- I now have concepts in place for all of the iterator concepts and have completed some slick compile-time asserts which tell you exactly why a particular type does not model a given concept, pointing directly to the concept in question and including, in a static_assert, the exact parameter(s) from the BOOST_GENERIC_CONCEPT invocation that the type doesn't satisfy.
That is *sweet*!
Here's a quick couple of screen shots of things in action -- first, a declaration for a user-defined "random access iterator" and an assert that tells why it is not actually a random access iterator (see the commented out lines and the error in the build log -- the first error points directly to the line where the assert appears):
http://img.waffleimages.com/146abf6162ad7becd898ce3e06aae7502963de2d/boost_g...
And here is the actual concept definition. Note that while the first error that appears in the build log points to the assert itself, the lines that tell you what went wrong point you to the concept definition:
http://img.waffleimages.com/75dc253518146071c4a10b63c155d6c893b7ff68/boost_g...
I have some concerns over the use of valid expressions rather than pseudosignatures; they tend to make it very difficult to write correct algorithms. Can you create archetypes from these concept definitions and do compile-time checking of algorithm bodies? Have you tried to (re-)write any interesting algorithms using these concepts?
To see just how closely the above concept resembles the specification of the concept in the standard, check out page 820 in the current working draft.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3126.pdf
Notice that Boost.Generic is even able to specify the "givens" (r, a, b, n) via "for", which is a bit trickier than you might expect.
Yeah, I saw. I love that.
The reason why they were tricky is that the given names must be dependent, since I need to use them with SFINAE. This means that they can't simply be data members as they often are with BCCL -- instead, they are implicitly converted to non-type template reference parameters whose type is dependent on another template parameter. Then, all expressions and conditions internally appear in specializations of these templates in a manner similar to Boost.Enable_If usage with type templates. This allows me to check conditions and the validity of expressions without causing a hard compile-time error. Using that information I then produce a really simple message via static_assert.
Wow.
Anyway, I've said more than I wanted to right now. I didn't expect this thread to be bumped -- I was hoping to post a new thread about all of this once everything was polished and up in the sandbox.
Looking forward to it! -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wed, Dec 15, 2010 at 2:07 PM, Dave Abrahams <dave@boostpro.com> wrote:
I have some concerns over the use of valid expressions rather than pseudosignatures; they tend to make it very difficult to write correct algorithms.
I'm not sure I immediately understand your concerns, so please elaborate. For what it's worth, I do plan on eventually supporting pseudosignatures, though the macro will internally translate each pseudosignature to its corresponding "expression validity check" anyway (but with the added benefit of more likely being able to be translated directly to what may be C++1x concept pseudosignatures, though this is not really my concern for now). The reason I didn't take this approach from the start is that it is much more complicated to preprocess a pseudosignature parameter -- in particular, I'd need special handling of operators (i.e. users would have to spell operator++ as operator pre_inc, and the macro internally would have to handle each operator explicitly, which will be a fairly tedious undertaking). If you can explain in more detail what exactly you feel is problematic with the current approach, I have no problem with eventually scrapping the current interface and only supporting pseudosignatures instead.
Can you create archetypes from these concept definitions and do compile-time checking of algorithm bodies? Have you tried to (re-)write any interesting algorithms using these concepts?
As for archetypes, do you mean automatically create archetypes? If so, then no. Is this the problem you see with the "valid expression" approach over pseudosignatures? I'd imagine that with pseudosignatures it may be possible for me to automatically generate archetypes, though I'd have to give it further thought. That alone is certainly a convincing argument for pseudosignatures; however, I can see the macro getting fairly complicated if such functionality is to be fully featured. If I do eventually follow that idea, it will likely take quite some time to implement. As for writing algorithms with the concepts, no, I haven't done so yet, other than an Auto_Function "advance" that uses concept-based overloading. Unfortunately, things end up being verbose and loaded with parentheses. Manual tag dispatching ends up being cleaner in practice, so the benefits are questionable. The main advantage here is that the "category" types used for tag dispatching don't have to be created to begin with, since the concepts themselves can be used directly. Here's the "advance" implementation I use for testing. It is actually much more complicated than std::advance because I'm using the return type to determine which overload is picked. Try to look past that complexity, since it wouldn't be there in an actual implementation. http://codepaste.net/qcz4cq Anyway, at the moment such rewrites aren't very interesting except for being able to do concept-based overloads, since currently the only concept maps that are supported are empty concept maps. With regards to the iterator concepts, associated types are still accessed via iterator_traits as opposed to via a concept_map directly, because I have yet to fully implement associated types with concept maps. So, at this point, Generic is really only worthwhile for "pretty" asserts and concept-based overloading, though that should hopefully change in the near future. -- -Matt Calabrese

At Wed, 15 Dec 2010 15:48:31 -0500, Matt Calabrese wrote:
For what it's worth, I do plan on eventually supporting pseudosignatures, though the macro will internally translate the pseudosignature to their corresponding "expression validity check" anyway (but with the added benefit of more likely being able to be translated directly to what may be C++1x concept pseudosignatures, though this is not really my concern for now).
That's fine; the key thing is including the forced type conversions at the function boundaries.
The reason I didn't take this approach from the start is that it is much more complicated to preprocess a pseudosignature parameter -- in particular, I'd need special handling of operators (i.e. users would have to spell operator++ as operator pre_inc and the macro internally would have to handle each operator explicitly, which will be a fairly tedious undertaking).
Meh. I guess that wouldn't bother me much.
If you can explain what exactly you feel is problematic with the current approach in more detail I have no problem with eventually scrapping the current interface and only supporting pseudosignatures instead.
Eventually, that's where you should end up, IMO. But that shouldn't impair your progress now if you have momentum. See links in my previous reply for rationale.
Can you create archetypes from these concept definitions
and do compile-time checking of algorithm bodies? Have you tried to (re-)write any interesting algorithms using these concepts?
As for archetypes do you mean automatically create archetypes?
Yep.
If so, then no. Is this the problem you see with the "valid expression" approach over pseudosignatures?
That's not what I was thinking of, but now that you mention it, yes. I think archetypes are easier to create from pseudosignatures.
I'd imagine that with pseudosignatures it may be possible for me to automatically generate archetypes, though I'd have to give it further thought. That alone is certainly a convincing argument for pseudosignatures, however, I can see the macro getting fairly complicated if such functionality is to be fully-featured. If I do eventually follow that idea, it will likely take quite some time to implement.
No doubt!
As for writing algorithms with the concepts, no, I haven't done so yet other than an Auto_Function "advance" that uses concept-based overloading. Unfortunately, things end up being verbose and loaded with parentheses. Manual tag dispatching ends up being cleaner in practice, so the benefits are questionable.
Oh, that's a shame. You know, with C++0x you also have variadic macros, which could potentially clean up the syntax... or are you already using those?
The main advantage here is that such "category" types that are used for tag dispatching don't have to be created to begin with since the concepts themselves can be used directly. Here's the "advance" implementation I use for testing. It is actually much more complicated than std::advance because I'm using the return type to determine which overload is picked. Try to look past that complexity since it wouldn't be there in an actual implementation.
Anyway, at the moment such rewrites aren't very interesting except for being able to do concept-based overloads, since currently the only concept maps that are supported are empty concept maps. With regards to the iterator concepts, associated types are still accessed via iterator_traits as opposed to via a concept_map directly because I have yet to fully implement associated types with concept maps. So, at this point, Generic is really only worthwhile for "pretty" asserts and concept-based overloading, though that should hopefully change in the near future.
That's still a significant achievement. Great going! -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wed, Dec 15, 2010 at 6:58 PM, Dave Abrahams <dave@boostpro.com> wrote:
Eventually, that's where you should end up, IMO. But that shouldn't impair your progress now if you have momentum. See links in my previous reply for rationale.
I must have missed them; I'll take a look.
On Wed, Dec 15, 2010 at 6:58 PM, Dave Abrahams <dave@boostpro.com> wrote:
You know, with C++0x you also have variadic macros, which could potentially clean up the syntax... or are you already using those?
They're used extensively and they make things much simpler -- without variadic support, the use of the macros would be so loaded with extra parentheses and require so many more internal hacks that I probably wouldn't be working on this at all. As an example, all of the top-level macros are variadic, and all of the arguments that appear to take Boost.Preprocessor sequences, such as "if" in the RandomAccessIterator concept definition I posted, actually take "variadic" preprocessor sequences, which are like Boost.Preprocessor sequences except that they can have elements with top-level commas or elements that are empty. In fact, the conditions in the code I posted make use of this if you look carefully -- the is_same and is_convertible conditions have top-level commas in them since the metafunctions are binary, which cannot be accomplished with traditional Boost.Preprocessor sequences. Also, anything that looks like a parameter list but with the types wrapped in parentheses is implemented internally with variadic macros.
That's still a significant achievement. Great going!
Thanks. There are certain simplifications that can be made, as some of those parentheses are there solely for consistency. For instance, something like ( template ( (class) It, (class) DiffT ) ) with Auto_Function could actually be written as ( template ( class It, class DiffT ) ); however, in some other places where there are parameter-list-like macro parameters, I need to have the user put in those extra parentheses -- the parentheses are what allow me to separate and examine each parameter when necessary. As an example, imagine: ( function_name, (int a, array< int, 5 > b) ) The problem is, in a "switch" auto function that is used for concept-based overloads, I need to forward "a" and "b", but when the parameter list is written as seen above, I have no way of actually pulling out those parameter names. Even without that comma in the "array" template argument list, it would be impossible, though commas such as that cause problems in other places that I use parameters. In the end, rather than requiring users of the macros to remember when those extra parentheses are required and when they aren't, I opted to just always require them. Unfortunately that makes things look somewhat hairier. This actually isn't how I initially had the macro implemented. It used to be that in some places you had to use those extra parentheses and in other places you didn't. This minimized the number of parentheses you had to use, but it was hard to remember when the parentheses were required and when they weren't. With effort, I could static_assert if they were required but the user didn't provide them, but that still doesn't make the rules for when they are required any more obvious. Even with that approach, always parenthesizing also worked, since I could detect whether parentheses were there or not and branch accordingly.
That code is still sitting around, but I decided to stop using it since it made the implementation complicated, not to mention that the rules for when and when not to use these seemingly superfluous parentheses were complicated as well. If it turns out that the parentheses really are making things much too hairy, I could go back to making them only required in particular situations, but for the time being it's easier for me to leave them as they are. I'm sure this is much more information about the implementation than you cared to know, but I'm really trying my hardest to make this actually usable. There are unfortunately some hard limits to what is possible with the C++ preprocessor, so the end result will likely not be much simpler than what you see now. If it turns out to not be worth it, then at the very least we've learned how not to emulate concept-based overloads in C++ :p It's not a huge loss, since the concepts and concept maps should still be more than usable on their own. The asserts alone, IMO, have been worth all of the effort so far. -- -Matt Calabrese

On Tue, Dec 14, 2010 at 2:35 AM, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Interesting! So which compilers are you using to test your implementation? Is this with MSVC or GCC?
GCC only at the moment.
I don't see an obvious problem here in terms of ODR because you are using a template -- which by definition still gets instantiated anyway across multiple translation units, and is not required to have a single definition anyway.
It technically still is required to have one definition. In particular, see 3.2 p5 with respect to function template definitions in multiple translation units: "each definition of D shall consist of the same sequence of tokens; and — in each definition of D, corresponding names, looked up according to 3.4, shall refer to an entity defined within the definition of D, or shall refer to the same entity, after overload resolution (13.3) and after matching of partial template specialization (14.8.3), except that a name can refer to a const object with internal or no linkage if the object has the same literal type in all definitions of D, and the object is initialized with a constant expression (5.19), and the value (but not the address) of the object is used, and the object has the same value in all definitions of D; and" Since the tag type may be different in different translation units, as it is dependent on which concept maps have been included and even in which order, this should technically violate ODR. Anyway, it's been a long time since I posted this thread and I've figured out a way around the ODR violation -- instead of implicitly passing the tag type, it now fully figures out the concept for which there is a corresponding overload. This is fine since the concept definitions are all the same regardless of the translation unit, unlike the tag type. The conclusion I've come to is that any use of the tag type always needs to be in an unevaluated context and the result should always leave no direct "remnant" of the tag type (if that makes sense). This is the rule of thumb I've been following since then.
Note that your concept map is computed at "compile-time" right, and should be in a globally accessible scope -- i.e. a template specialization or a template class in a namespace -- right? Unless you're able to create a concept map at runtime or call the foo function outside of a function body, then I don't see how adding a new concept_map might be an issue for ODR.
It's not the concept_map itself that causes the ODR violation, it's the implicitly created "tag type". The way the tag type works is each time a concept map is written, it adds a function to an overload set and defines its own return type based on previous overloads. That return type assembles a compile-time type list of all concepts currently modeled by reaching into the return type from the previous overload set and creating a new type list that is the same but with the new concept added to the list. The idea is we always have an updated type list of every single concept that is modeled by the type.
From that type list, I internally create the tag type, which virtually inherits from each concept in that list. That tag type is then used under the hood for tag dispatching when the user writes Boost.AutoFunction overloads. The issue is, for instance, if one translation unit doesn't include all of the same concept maps, or even if they are included in a different order, that tag type is technically different, since the type list, and therefore its bases, may be different or appear in a different order. I believe this technically violates ODR because of how the tag type is (or rather was) used by the library. But, I believe that's all moot anyway, as, as I said, I no longer use that approach. I now fully calculate the exact information needed to figure out which overload should be picked and the result of the decltype should always be exactly the same, regardless of the translation unit. All uses of the tag type are contained entirely in decltype and no evidence of its use appears in the resultant type, which I believe skirts the issue. It's quarantined.
In which case you will get around that by marking foo<...> as an inline function, thus allowing multiple definitions across translation units to be acceptable.
Again, I believe that would technically violate ODR even though the problem would rarely be diagnosed. I'm trying to be very pedantic here, I don't want to do something nonstandard if at all possible, even if it "works". -- -Matt Calabrese

On Tue, Dec 14, 2010 at 7:30 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Tue, Dec 14, 2010 at 2:35 AM, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Interesting! So which compilers are you using to test your implementation? Is this with MSVC or GCC?
GCC only at the moment.
Cool. Interesting indeed. This is GCC 4.5 in --std=c++0x mode right?
I don't see an obvious problem here in terms of ODR here because you are using a template -- which by definition still gets instantiated anyway across multiple translation units, and is not required to have a single definition anyway.
It technically still is required to have one definition. In particular, see 3.2 p5 with respect to function template definitions in multiple translation units:
"each definition of D shall consist of the same sequence of tokens; and — in each definition of D, corresponding names, looked up according to 3.4, shall refer to an entity defined within the definition of D, or shall refer to the same entity, after overload resolution (13.3) and after matching of partial template specialization (14.8.3), except that a name can refer to a const object with internal or no linkage if the object has the same literal type in all definitions of D, and the object is initialized with a constant expression (5.19), and the value (but not the address) of the object is used, and the object has the same value in all definitions of D; and"
Since the tag type may be different in different translation units as it is dependent on which concept maps have been included and even in which order, this should technically violate ODR.
Yes, but then your instantiation of the template would be different, since the stuff in the decltype changes -- so technically they're not the same instantiations anymore, because technically they're different types. Am I understanding that wrong? Also, I'm assuming this is the latest draft of the standard post-Batavia, right?
Anyway, it's been a long time since I posted this thread and I've figured out a way around the ODR violation -- instead of implicitly passing the tag type, it now fully figures out the concept for which there is a corresponding overload. This is fine since the concept definitions are all the same regardless of the translation unit, unlike the tag type. The conclusion I've come to is that any use of the tag type always needs to be in an unevaluated context and the result should always leave no direct "remnant" of the tag type (if that makes sense). This is the rule of thumb I've been following since then.
OK
Note that your concept map is computed at "compile-time" right, and should be in a globally accessible scope -- i.e. a template specialization or a template class in a namespace -- right? Unless you're able to create a concept map at runtime or call the foo function outside of a function body, then I don't see how adding a new concept_map might be an issue for ODR.
It's not the concept_map itself that causes the ODR violation, it's the implicitly created "tag type". The way the tag type works is each time a concept map is written, it adds a function to an overload set and defines its own return type based on previous overloads. That return type assembles a compile-time type list of all concepts currently modeled by reaching into the return type from the previous overload set and creating a new type list that is the same but with the new concept added to the list. The idea is we always have an updated type list of every single concept that is modeled by the type.
From that type list, I internally create the tag type, which virtually inherits from each concept in that list. That tag type is then used under the hood for tag dispatching when the user writes Boost.AutoFunction overloads. The issue is, for instance, if one translation unit doesn't include all of the same concept maps, or even if they are included in a different order, that tag type is technically different, since the type list, and therefore its bases, may be different or appear in a different order. I believe this technically violates ODR because of how the tag type is (or rather was) used by the library.
But, I believe that's all moot anyway, as, as I said, I no longer use that approach. I now fully calculate the exact information needed to figure out which overload should be picked and the result of the decltype should always be exactly the same, regardless of the translation unit. All uses of the tag type are contained entirely in decltype and no evidence of its use appears in the resultant type, which I believe skirts the issue. It's quarantined.
Okay. :)
In which case you will get around that by marking foo<...> as an inline function, thus allowing multiple definitions across translation units to be acceptable.
Again, I believe that would technically violate ODR even though the problem would rarely be diagnosed. I'm trying to be very pedantic here, I don't want to do something nonstandard if at all possible, even if it "works".
Yeah, but... I thought 'inline' was meant to be the "way out" of the "ODR" problem with templates? :) -- Dean Michael Berris deanberris.com
participants (6)
- Dave Abrahams
- Dean Michael Berris
- Edward Diener
- Marcin Zalewski
- Matt Calabrese
- Steven Watanabe