[guidelines] why template errors suck

This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments? -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 09/24/2010 06:00 PM, Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
You meant to post this link, right? http://cpp-next.com/archive/2010/09/expressive-c-why-template-errors-suck-an... Reading on...

On 09/24/2010 06:00 PM, Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://cpp-next.com/archive/2010/09/expressive-c-why-template-errors-suck-an...
Reducing those tons of template error message gibberish and adding descriptive error messages is a very good goal, IMO. Especially when it comes to messages from the deepest bowels of boost::xyz::detail... To consider and report overly long error messages as errors could help, indeed. I am not sure, though, whether there will be simple rules as to when something really should be taken care of. Regards, Roland

Maybe the standard is just: if you can't figure out what the error (or warning) message means, ask the library author. If they get asked the same question 500 times it'll start to look like an error to them. On Fri, Sep 24, 2010 at 2:48 PM, Roland Bock <rbock@eudoxos.de> wrote:
To consider and report overly long error messages as errors could help, indeed. I am not sure, though, whether there will be simple rules as to when something really should be taken care of.

(please don't top-post, rearranging....) On 9/24/2010 3:52 PM, John B. Turpish wrote:
On Fri, Sep 24, 2010 at 2:48 PM, Roland Bock wrote:
To consider and report overly long error messages as errors could help, indeed. I am not sure, though, whether there will be simple rules as to when something really should be taken care of.
Maybe the standard is just: if you can't figure out what the error (or warning) message means, ask the library author. If they get asked the same question 500 times it'll start to look like an error to them.
I'd like to see library authors be a bit more proactive than that. Template libraries, and boost's in particular, have a real perception problem. Better techniques exist. They just need to be applied. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 9/24/2010 12:00 PM, Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
Your URL does not say anything. If you have some point, why not just say it here? No C++ programmer is seriously going to stop using templates in his code, but I am sure you know that, being one of the expert template metaprogrammers among those who create Boost libraries. I have a feeling you have something else on your mind, but your URL does not say what it is.

On 9/24/2010 6:05 PM, Edward Diener wrote:
On 9/24/2010 12:00 PM, Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Your URL does not say anything. If you have some point, why not just say it here?
Sorry, for those not familiar with reddit, it's a site for recommending and commenting on online articles. What I linked to was the reddit discussion of the article. At the top of the page was a link to the article itself. You have to click through that link. I linked to the reddit discussion so that folks had the option to vote on the article to promote (or demote) it as they saw fit. Highly voted articles get more eyeballs. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 9/24/2010 8:23 PM, Eric Niebler wrote:
On 9/24/2010 6:05 PM, Edward Diener wrote:
On 9/24/2010 12:00 PM, Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Your URL does not say anything. If you have some point, why not just say it here?
Sorry, for those not familiar with reddit, it's a site for recommending and commenting on online articles. What I linked to was the reddit discussion of the article. At the top of the page was a link to the article itself. You have to click through that link.
I linked to the reddit discussion so that folks had the option to vote on the article to promote (or demote) it as they saw fit. Highly voted articles get more eyeballs.
The article is much more interesting than the comments about it. Thanks for the article.

On 09/24/2010 09:00 AM, Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
Obviously actual timing data would be useful, but it seems that in some cases these techniques might result in increased compilation time, which is at least as serious a problem as the error messages. Also, the technique mentioned at the end of the article for ensuring that compilation does not continue (and produce additional errors) after certain static assertions fail comes at the cost of increased code complexity (and likely increased compile time). That purpose may be better served by a compiler option that causes compilation to abort after the first error (e.g. -Wfatal-errors in the case of gcc).

Jeremy Maitin-Shepard a écrit :
Obviously actual timing data would be useful, but it seems that in some cases these techniques might result in increased compilation time, which is at least as serious a problem as the error messages.
Indeed. The idea, in order to provide better error messages, is to test whether all the expressions you're going to evaluate are going to lead to errors. Then, if they don't, you evaluate said expressions. However, this basically ends up doing the same thing twice (or worse, checking that the expressions are valid may be more costly than actually instantiating them) as far as the compiler is concerned. Therefore I think the solution is to provide a compiler debug mode in which we do this testing, and one where we don't.

On 9/26/2010 8:51 AM, Mathias Gaunard wrote:
Jeremy Maitin-Shepard a écrit :
Obviously actual timing data would be useful, but it seems that in some cases these techniques might result in increased compilation time, which is at least as serious a problem as the error messages.
Indeed.
The idea, in order to provide better error messages, is to test whether all the expressions you're going to evaluate are going to lead to errors.
Then, if they don't, you evaluate said expressions. However, this basically ends up doing the same thing twice (or worse, checking that the expressions are valid may be more costly than actually instantiating them) as far as the compiler is concerned.
In the case of a Proto expression, evaluating an expression by applying transforms (that are embedded in a grammar) will instantiate the same templates that would be instantiated by checking the expression against that same grammar. Given that the compiler memoizes template instantiation, checking an expression against a grammar before evaluating it *should* be virtually free in terms of compile time. Benchmarks should be done to back up that claim.
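For concreteness, a minimal sketch of that check-then-evaluate pattern (the toy grammar and all names here are invented for illustration; only the proto and mpl facilities are real):

    #include <boost/proto/proto.hpp>
    #include <boost/mpl/assert.hpp>
    namespace proto = boost::proto;

    // Toy grammar: integer terminals combined with binary +.
    struct Plusses
      : proto::or_<
            proto::terminal<int>
          , proto::plus<Plusses, Plusses>
        >
    {};

    template <typename Expr>
    void evaluate(Expr const &expr)
    {
        // Check the expression against the grammar before evaluating it.
        // Evaluation instantiates largely the same templates, so with
        // memoization the check should be nearly free.
        BOOST_MPL_ASSERT_MSG(
            (proto::matches<Expr, Plusses>::value)
          , EXPRESSION_DOES_NOT_MATCH_GRAMMAR
          , (Expr));
        proto::default_context ctx;
        proto::eval(expr, ctx);  // evaluate with the default C++ meaning
    }

Given a proto int terminal i, evaluate(i + 42) would compile, while evaluate(i - 42) would trip the assertion with the message above instead of failing somewhere deep in the evaluator.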
Therefore I think the solution is to provide a compiler debug mode in which we do this testing, and one where we don't.
In general, I agree. I have long thought about adding such a debug mode to the implementation of Proto itself (which eschews parameter checking for improved compile times). -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 26/09/10 14:51, Mathias Gaunard wrote:
The idea, in order to provide better error messages, is to test whether all the expressions you're going to evaluate are going to lead to errors.
Then, if they don't, you evaluate said expressions. However, this basically ends up doing the same thing twice (or worse, checking that the expressions are valid may be more costly than actually instantiating them) as far as the compiler is concerned.
Therefore I think the solution is to provide a compiler debug mode in which we do this testing, and one where we don't.
In my own experience, I have a global ENABLE_COMPILE_TIME_CHECK macro that controls all instances of such tests.

On Sun, Sep 26, 2010 at 10:12 AM, joel falcou <joel.falcou@lri.fr> wrote:
On 26/09/10 14:51, Mathias Gaunard wrote:
[snip]
In my own experience, I have a global ENABLE_COMPILE_TIME_CHECK macro that controls all instances of such tests.
I have it the other way around: a PRODUCT_NAME_DISABLE_CONCEPTS macro, which disables the concept_check uses and PRODUCT_NAME_CONCEPT_ASSERT. -- Felipe Magno de Almeida
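A sketch of this pattern (the MYLIB_ macro names are invented; BOOST_CONCEPT_ASSERT and boost::InputIterator are real Boost.ConceptCheck facilities):

    #ifndef MYLIB_DISABLE_CONCEPTS
    # include <boost/concept_check.hpp>
    # define MYLIB_CONCEPT_ASSERT(x) BOOST_CONCEPT_ASSERT(x)
    #else
    # define MYLIB_CONCEPT_ASSERT(x)
    #endif

    template <typename Iter>
    void my_algorithm(Iter first, Iter last)
    {
        // Checked while developing; compiles to nothing when the user
        // defines MYLIB_DISABLE_CONCEPTS for faster builds.
        MYLIB_CONCEPT_ASSERT((boost::InputIterator<Iter>));
        // ... algorithm body ...
    }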

Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
Would all this boil down to the following guidelines?
a) All template libraries should use boost::concepts?
b) All template libraries should try to factor implementations so that compile errors are not propagated?
Robert Ramey

On 9/24/2010 8:22 PM, Robert Ramey wrote:
Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
Would all this boil down to the following guidelines?
a) All template libraries should use boost::concepts?
Certainly Concept_check or something like it has a big role to play in this. But I wouldn't know how to use Concept_check to verify that an expression template matches a grammar, for instance. It's not the whole story. Though perhaps I don't know enough about the concept_check library to see how it can apply to proto expressions and grammars. Suggestions? Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
b) All template libraries should try to factor implementations so that compile errors are not propagated?
Yes. And: detailed comments should be left near any static assertions or concept checks, since that's where the unlucky few will end up. -- Eric Niebler BoostPro Computing http://www.boostpro.com
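To illustrate the contrast Eric is drawing, a contrived sketch (the gcd function is invented; the concept and the trait are real Boost facilities):

    #include <boost/concept_check.hpp>
    #include <boost/type_traits/is_integral.hpp>

    template <typename T>
    T gcd(T a, T b)
    {
        // C++03 style: a failure produces a long instantiation backtrace
        // into the concept-checking machinery.
        BOOST_CONCEPT_ASSERT((boost::Integer<T>));

        // C++0x style: a failure prints exactly the string below, right at
        // the offending instantiation. A detailed comment here helps the
        // unlucky few who end up staring at this line.
        static_assert(boost::is_integral<T>::value,
                      "gcd requires a built-in integral type");

        while (b != T(0)) { T t = a % b; a = b; b = t; }
        return a;
    }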

On Sep 24, 2010, at 8:41 PM, Eric Niebler wrote:
On 9/24/2010 8:22 PM, Robert Ramey wrote:
Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we are a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
Would all this boil down to the following guidelines?
a) All template libraries should use boost::concepts?
Certainly Concept_check or something like it has a big role to play in this. But I wouldn't know how to use Concept_check to verify that an expression template matches a grammar, for instance. It's not the whole story. Though perhaps I don't know enough about the concept_check library to see how it can apply to proto expressions and grammars. Suggestions?
You'd need to describe adherence to the grammar as a concept. Ultimately, Concept_check relies on the same stuff as BOOST_MPL_ASSERT_MSG: causing a hard error when the concept isn't satisfied. And it does that by crudely attempting to exercise all the compile-time elements of a concept's requirements. So I think your concept would probably end up looking something like:

template <class Expr>
struct Grammatical
{
    static_assert(
        proto::matches<Expr, MyGrammar>::value,
        "The expression does not match MyGrammar");
};

Not very illuminating, is it?
Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
The real challenge would be making it easy to write new concepts. Right now the usage requirements are simple to state, but if you wanted static_assert to fire, we'd need to use checks that don't cause errors, e.g. instead of:

same_type(*i++,v); // postincrement-dereference returning value_type

you'd have to write something like:

static_assert( has_postincrement<InputIterator>::value,
    "not postincrementable");
static_assert( has_postincrement_deref<InputIterator>::value,
    "not dereferenceable");
static_assert( is_same<
        postincrement_deref<InputIterator>::type,
        InputIterator::value_type
    >::value,
    "postincrement dereference doesn't return value_type");

Well, in all, I guess I like the rigor but it's not very compatible with the loosey-goosey C++03 way of specifying concepts <pines for real concept support>. -- Dave Abrahams BoostPro Computing http://boostpro.com

On 9/24/2010 9:37 PM, David Abrahams wrote:
On Sep 24, 2010, at 8:41 PM, Eric Niebler wrote:
Certainly Concept_check or something like it has a big role to play in this. But I wouldn't know how to use Concept_check to verify that an expression template matches a grammar, for instance. It's not the whole story. Though perhaps I don't know enough about the concept_check library to see how it can apply to proto expressions and grammars. Suggestions?
You'd need to describe adherence to the grammar as a concept. Ultimately, Concept_check relies on the same stuff as BOOST_MPL_ASSERT_MSG: causing a hard error when the concept isn't satisfied. And it does that by crudely attempting to exercise all the compile-time elements of a concept's requirements. So I think your concept would probably end up looking something like:
template <class Expr>
struct Grammatical
{
    static_assert(
        proto::matches<Expr, MyGrammar>::value,
        "The expression does not match MyGrammar");
};
Not very illuminating, is it?
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept? Would it be any more illuminating than a simple wrapper around proto::matches<Expr, SpiritGrammar>?
Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
The real challenge would be making it easy to write new concepts. Right now the usage requirements are simple to state, but if you wanted static_assert to fire, we'd need to use checks that don't cause errors, e.g. instead of:
same_type(*i++,v); // postincrement-dereference returning value_type
you'd have to write something like:
static_assert( has_postincrement<InputIterator>::value,
    "not postincrementable");
static_assert( has_postincrement_deref<InputIterator>::value,
    "not dereferenceable");
static_assert( is_same<
        postincrement_deref<InputIterator>::type,
        InputIterator::value_type
    >::value,
    "postincrement dereference doesn't return value_type");
Well, in all, I guess I like the rigor but it's not very compatible with the loosey-goosey C++03 way of specifying concepts <pines for real concept support>.
Blech. Can C++0x SFINAE on expressions make this any nicer? The idea is to be able to test expressions for well-formedness without causing a compile error. Then we'd need a way to chain these expressions to make lists of requirements. Maybe PP sequences consisting of valid expressions? Is there a compiler that implements this yet so we can play with it? -- Eric Niebler BoostPro Computing http://www.boostpro.com
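As a proof of concept, C++0x expression SFINAE can implement one of the checks above without risking a hard error (a sketch; only std::declval is a library facility, the rest is hand-rolled and the names merely echo Dave's post):

    #include <utility>  // std::declval

    template <typename It>
    class has_postincrement_deref
    {
        // Viable only if *i++ is well-formed for an It lvalue:
        template <typename U>
        static auto test(int) -> decltype(void(*std::declval<U&>()++), char());
        template <typename U>
        static char (&test(...))[2];
    public:
        static bool const value = sizeof(test<It>(0)) == 1;
    };

    static_assert(has_postincrement_deref<int*>::value,
                  "int* supports *i++");
    static_assert(!has_postincrement_deref<int>::value,
                  "plain int does not");

Chaining several such traits into a concept's list of requirements would then be ordinary metaprogramming, with or without PP help.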

On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
Would it be any more illuminating than a simple wrapper around proto::matches<Expr, SpiritGrammar>?
If you want to apply concepts to this stuff (and I'm not sure whether it's really possible), you have to go back to the classic Generic Programming Process: look at concrete algorithms, lift out common requirements, cluster into concepts, etc.
Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
The real challenge would be making it easy to write new concepts. Right now the usage requirements are simple to state, but if you wanted static_assert to fire, we'd need to use checks that don't cause errors, e.g. instead of:
same_type(*i++,v); // postincrement-dereference returning value_type
you'd have to write something like:
static_assert( has_postincrement<InputIterator>::value,
    "not postincrementable");
static_assert( has_postincrement_deref<InputIterator>::value,
    "not dereferenceable");
static_assert( is_same<
        postincrement_deref<InputIterator>::type,
        InputIterator::value_type
    >::value,
    "postincrement dereference doesn't return value_type");
Well, in all, I guess I like the rigor but it's not very compatible with the loosey-goosey C++03 way of specifying concepts <pines for real concept support>.
Blech. Can C++0x SFINAE on expressions make this any nicer?
Maybe.
The idea is to be able to test expressions for well-formedness without causing a compile error. Then we'd need a way to chain these expressions to make lists of requirements. Maybe PP sequences consisting of valid expressions?
Yeah, it would involve a bunch of PP stuff.
Is there a compiler that implements this yet so we can play with it?
According to http://gcc.gnu.org/projects/cxx0x.html that stuff came in with GCC 4.4. But IMO this is all beside the point. The truth is that valid expressions are the wrong way to describe concepts, for reasons you and I have discussed in the past. The now-comatose C++ concepts proposal had it right. -- Dave Abrahams BoostPro Computing http://boostpro.com

On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
No. rule::operator= expects SpiritParser to be a strongly-typed tree. operator= walks that tree and converts it to a recursive descent parser. The tree conforms to a grammar that is easily expressible in Proto, but I wouldn't know how to express such a constraint with concepts.
Would it be any more illuminating than a simple wrapper around proto::matches<Expr, SpiritGrammar>?
If you want to apply concepts to this stuff (and I'm not sure whether it's really possible), you have to go back to the classic Generic Programming Process: look at concrete algorithms, lift out common requirements, cluster into concepts, etc.
Why do you doubt it's possible? If there are template constraints not practically expressible in concepts, then aren't concepts broken?
Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
The real challenge would be making it easy to write new concepts. Right now the usage requirements are simple to state, but if you wanted static_assert to fire, we'd need to use checks that don't cause errors, e.g. instead of:
same_type(*i++,v); // postincrement-dereference returning value_type
you'd have to write something like:
static_assert( has_postincrement<InputIterator>::value,
    "not postincrementable");
static_assert( has_postincrement_deref<InputIterator>::value,
    "not dereferenceable");
static_assert( is_same<
        postincrement_deref<InputIterator>::type,
        InputIterator::value_type
    >::value,
    "postincrement dereference doesn't return value_type");
Well, in all, I guess I like the rigor but it's not very compatible with the loosey-goosey C++03 way of specifying concepts <pines for real concept support>.
Blech. Can C++0x SFINAE on expressions make this any nicer?
Maybe.
The idea is to be able to test expressions for well-formedness without causing a compile error. Then we'd need a way to chain these expressions to make lists of requirements. Maybe PP sequences consisting of valid expressions?
Yeah, it would involve a bunch of PP stuff.
Is there a compiler that implements this yet so we can play with it?
According to http://gcc.gnu.org/projects/cxx0x.html that stuff came in with GCC 4.4.
But IMO this is all beside the point. The truth is that valid expressions are the wrong way to describe concepts, for reasons you and I have discussed in the past. The now-comatose C++ concepts proposal had it right.
I was never convinced that valid expressions are the wrong approach. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Sep 26, 2010, at 8:55 AM, Eric Niebler <eric@boostpro.com> wrote:
The truth is that valid expressions are the wrong way to describe concepts, for reasons you and I have discussed in the past. The now-comatose C++ concepts proposal had it right.
I was never convinced that valid expressions are the wrong approach.
To convince yourself:
* Write down the find_if algorithm in a natural way
* write down the valid expressions implied by that algorithm
* now re-check the algorithm against the valid expressions
-- Dave Abrahams BoostPro Computing http://boostpro.com [Sent from coveted but awkward mobile device]

At Sun, 26 Sep 2010 11:26:50 -0400, Dave Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler <eric@boostpro.com> wrote:
The truth is that valid expressions are the wrong way to describe concepts, for reasons you and I have discussed in the past. The now-comatose C++ concepts proposal had it right.
I was never convinced that valid expressions are the wrong approach.
To convince yourself:
* Write down the find_if algorithm in a natural way
* write down the valid expressions implied by that algorithm
* now re-check the algorithm against the valid expressions
So, no takers? :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

I was never convinced that valid expressions are the wrong approach.
To convince yourself:
* Write down the find_if algorithm in a natural way
* write down the valid expressions implied by that algorithm
* now re-check the algorithm against the valid expressions
So, no takers? :-)
I'm not sure I fully understand the problem. Surely, you can use the Concept_check library to describe constraints for the find_if algorithm, and the Concept_check library internally validates those constraints against expressions expected to be valid. To me, that would imply that valid expressions can be used to describe constraints. Am I missing something? Andrew Sutton

On Sep 27, 2010, at 9:51 AM, Andrew Sutton wrote:
I was never convinced that valid expressions are the wrong approach.
To convince yourself:
* Write down the find_if algorithm in a natural way
* write down the valid expressions implied by that algorithm
* now re-check the algorithm against the valid expressions
So, no takers? :-)
I'm not sure I fully understand the problem.
Of course not; you didn't try it! ;-)
Surely, you can use the Concept_check library to describe constraints for the find_if algorithm, and the Concept_check library internally validates those constraints against expressions expected to be valid. To me, that would imply that valid expressions can be used to describe constraints.
Am I missing something?
Among other things, you're missing the other half of concept checking: checking algorithm bodies against constraints (which is done in the Concept_check library by using archetypes). P.S. I'm not claiming concepts can't be described via valid expressions; I'm claiming it's the wrong approach, for reasons that should become evident to anyone who *actually does the exercise*. -- Dave Abrahams BoostPro Computing http://boostpro.com
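The idea behind archetype checking, hand-rolled for illustration (Boost.ConceptCheck ships real archetypes in <boost/concept_archetype.hpp>; everything below is invented): instantiate the algorithm with a type that supplies only the operations the concept promises, so any unstated requirement in the body becomes a compile error.

    struct value {};

    struct iter_archetype
    {
        value operator*() const { return value(); }
        iter_archetype& operator++() { return *this; }
        friend bool operator!=(iter_archetype, iter_archetype) { return false; }
    };

    struct pred_archetype
    {
        bool operator()(value) const { return false; }
    };

    template <typename Iter, typename Pred>
    Iter find_if(Iter f, Iter l, Pred p)
    {
        for ( ; f != l; ++f)
            if (p(*f))
                return f;
        return l;
    }

    // Compiles only while find_if's body stays within the stated
    // operations; write e.g. "f == l" in the body and this line breaks.
    void check() { find_if(iter_archetype(), iter_archetype(), pred_archetype()); }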

On 27/09/2010 17:56, David Abrahams wrote:
Among other things, you're missing the other half of concept checking: checking algorithm bodies against constraints (which is done in the Concept_check library by using archetypes).
P.S. I'm not claiming concepts can't be described via valid expressions; I'm claiming it's the wrong approach, for reasons that should become evident to anyone who *actually does the exercise*.
I don't see how a set of statements is much different from a set of expressions, except that statements don't have nice properties from a meta-programming point of view.

At Mon, 27 Sep 2010 22:01:09 +0100, Mathias Gaunard wrote:
On 27/09/2010 17:56, David Abrahams wrote:
Among other things, you're missing the other half of concept checking: checking algorithm bodies against constraints (which is done in the Concept_check library by using archetypes).
P.S. I'm not claiming concepts can't be described via valid expressions; I'm claiming it's the wrong approach, for reasons that should become evident to anyone who *actually does the exercise*.
I don't see how a set of statements is much different from a set of expressions, except that statements don't have nice properties from a meta-programming point of view.
No comment. do-the-exercise-ly y'rs, -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 27/09/2010 23:01, David Abrahams wrote:
At Mon, 27 Sep 2010 22:01:09 +0100, Mathias Gaunard wrote:
On 27/09/2010 17:56, David Abrahams wrote:
Among other things, you're missing the other half of concept checking: checking algorithm bodies against constraints (which is done in the Concept_check library by using archetypes).
P.S. I'm not claiming concepts can't be described via valid expressions; I'm claiming it's the wrong approach, for reasons that should become evident to anyone who *actually does the exercise*.
I don't see how a set of statements is much different from a set of expressions, except that statements don't have nice properties from a meta-programming point of view.
No comment.
do-the-exercise-ly y'rs,
I've done it and there was no particular problem. I just need to check that the arguments are input iterators and a unary function object taking the same value as the iterator value type and returning a boolean. (Of course, by taking a particular value, I really mean taking something that can be converted from that particular value, and by returning a particular value I mean returning a type convertible to that value, but all of those implicit conversions are taken care of automatically by expressions.)

The concept definitions of iterators and function objects are already defined in terms of valid expressions anyway, each having particular semantics. Are you maybe hinting at a trick to properly ensure input iterators and not just forward ones?

Also, I don't think replacing statements by expressions means we should get rid of archetypes at all. They're still required if you want to actually instantiate the template and test the code, rather than just test that the arguments match what you expect. Checking that the arguments match what you expect is the important bit for good error messages and validation, while archetypes are really useful during development, to check that your code and your requirements are consistent by ensuring the code works with the minimal amount of requirements.

All in all, ConceptCheck isn't so bad: we just need to replace the main body of statements by a compile-time iterable and manipulable list of expressions.

On Sep 28, 2010, at 4:33 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
I've done it and there was no particular problem. I just need to check that the arguments are input iterators and a unary function object taking the same value as the iterator value type and returning a boolean. (Of course, by taking a particular value, I really mean taking something that can be converted from that particular value, and by returning a particular value I mean returning a type convertible to that value, but all of those implicit conversions are taken care of automatically by expressions.)
All of that handwaving is exactly where the problem is. When the computer is checking your work, you don't get that luxury ;-) resting-his-case-ly y'rs, -- BoostPro Computing * http://boostpro.com [Sent from coveted but awkward mobile device]

I've done it and there was no particular problem. I just need to check that the arguments are input iterators and a unary function object taking the same value as the iterator value type and returning a boolean. (Of course, by taking a particular value, I really mean taking something that can be converted from that particular value, and by returning a particular value I mean returning a type convertible to that value, but all of those implicit conversions are taken care of automatically by expressions.)
All of that handwaving is exactly where the problem is. When the computer is checking your work, you don't get that luxury ;-)
resting-his-case-ly y'rs,
Having done the exercise, I'm satisfied with my results (which agree with Mathias'). I don't think your "left to the reader as an exercise" approach is getting the point across. Perhaps you could provide a more concrete argument? Andrew Sutton

At Tue, 28 Sep 2010 08:39:23 -0500, Andrew Sutton wrote:
Having done the exercise, I'm satisfied with my results (which agree with Mathias').
Great; let's see your work!
I don't think your "left to the reader as an exercise" approach is getting the point across. Perhaps you could provide a more concrete argument?
It's hard to provide an argument more concrete than an example that doesn't work. But in fairness I guess it's also easy to overlook non-workingness when you don't have a computer doing the checking for you, or the STL wouldn't have been designed with the "valid expression" approach... OK, I found a good explanation. See §3.2.3 of http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1758.pdf HTH, -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, Sep 28, 2010 at 9:20 AM, David Abrahams <dave@boostpro.com> wrote:
At Tue, 28 Sep 2010 08:39:23 -0500, Andrew Sutton wrote:
Having done the exercise, I'm satisfied with my results (which agree with Mathias').
Great; let's see your work!
This is my interpretation of the type constraints on the template parameters of the find_if algorithm.

template<typename Iter, typename Pred>
Iter find_if(Iter f, Iter l, Pred p)
{
    for ( ; f != l; ++f)
        if (p(*f))
            return f;
    return l;
}

Expressions on Iter: f != l; *f; ++f;
Expressions on Pred: p(*f)
and...
is_convertible<decltype(f != l), bool>::value
is_convertible<decltype(p(*f)), bool>::value

I don't know if you'd count the last two as "valid expressions", but they clearly seem to be requirements on the algorithm. I'm probably missing copy constructibility requirements.

It's hard to provide an argument more concrete than an example that doesn't work. But in fairness I guess it's also easy to overlook non-workingness when you don't have a computer doing the checking for you, or the STL wouldn't have been designed with the "valid expression" approach...
OK, I found a good explanation. See §3.2.3 of http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1758.pdf
That is a good explanation, but I don't see why type traits can't be used to enforce convertibility or same-type requirements. Why not describe LessThanComparable as:

static_assert(is_same<decltype(x < y), bool>::value, "")

Besides the fact that it's a little gross. We can do this now (with C++0x anyway), and we do use is_same<> to constrain key/value types in SimpleAssociativeContainers. It seems to me that writing constraints in terms of valid expressions gives you flexibility when you want it, but doesn't preclude the strengthening of requirements when needed. Andrew Sutton
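A contrived sketch of the trade-off Andrew is pointing at (both types invented for illustration): the is_same form rejects a model whose operator< returns a proxy, while is_convertible accepts it.

    #include <type_traits>
    #include <utility>

    struct proxy  { operator bool() const { return true; } };
    struct widget { friend proxy operator<(widget, widget) { return proxy(); } };

    // The strict requirement fires for widget, even though "if (x < y)" works:
    static_assert(
        !std::is_same<decltype(std::declval<widget>() < std::declval<widget>()),
                      bool>::value,
        "operator< returns a proxy, not bool");
    // The looser requirement admits it:
    static_assert(
        std::is_convertible<decltype(std::declval<widget>() < std::declval<widget>()),
                            bool>::value,
        "the proxy converts to bool");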

On Tue, 28 Sep 2010, Andrew Sutton wrote:
On Tue, Sep 28, 2010 at 9:20 AM, David Abrahams <dave@boostpro.com> wrote:
At Tue, 28 Sep 2010 08:39:23 -0500, Andrew Sutton wrote:
Having done the exercise, I'm satisfied with my results (which agree with Mathias').
Great; let's see your work!
This is my interpretation of the type constraints on the template parameters of the find_if algorithm.
template<typename Iter, typename Pred>
Iter find_if(Iter f, Iter l, Pred p)
{
    for ( ; f != l; ++f)
        if (p(*f))
            return f;
    return l;
}
Expressions on Iter: f != l; *f; ++f;
Expressions on Pred: p(*f)
and...
is_convertible<decltype(f != l), bool>::value
is_convertible<decltype(p(*f)), bool>::value
These can be written as "normal" valid expressions:

bool(f != l)
bool(p(*f))
I don't know if you'd count the last two as "valid expressions", but they clearly seem to be requirements on the algorithm. I'm probably missing copy constructibility requirements.
Just to show the subtlety of using valid expressions, the p(*f) constraint (with or without a conversion on the outside) would not allow the following:

struct p
{
    template <typename T>
    bool operator()(T) const { /* Something invalid */ }
    bool operator()(bool) const { return true; }
};

vector<bool> v;
find_if(v.begin(), v.end(), p());

(note that the wrong operator()() will be chosen for the *vector<bool>::iterator proxy).
It's hard to provide an argument more concrete than an example that doesn't work. But in fairness I guess it's also easy to overlook non-workingness when you don't have a computer doing the checking for you, or the STL wouldn't have been designed with the "valid expression" approach...
OK, I found a good explanation. See §3.2.3 of http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1758.pdf
That is a good explanation, but I don't see why type traits can't be used to enforce convertibility or same-type requirements. Why not describe LessThanComparable as:
static_assert(is_same<decltype(x < y), bool>::value, "")
Besides the fact that it's a little gross. We can do this now (with C++0x anyway), and we do use is_same<> to constrain key/value types in SimpleAssociativeContainers.
It seems to me that writing constraints in terms of valid expressions gives you flexibility when you want it, but doesn't preclude the strengthening of requirements when needed.
The issue is that convertibility constraints are too easy to get wrong (within algorithms); see p. 8 of N1758 that was linked to before. You can write them without is_convertible (as they were done before). Using is_same<decltype(...), ...> is fine, but that might cause trouble for users, since now they can't return proxies (or even references) as their function results. A pseudosignature-based approach allows conversions both on the user side and the algorithm side without any of the gyrations and subtlety that shows up with valid expressions. -- Jeremiah Willcock

On 28/09/10 16:21, Jeremiah Willcock wrote:
The issue is that convertibility constraints are too easy to get wrong (within algorithms); see p. 8 of N1758 that was linked to before. You can write them without is_convertible (as they were done before). Using is_same<decltype(...), ...> is fine, but that might cause trouble for users, since now they can't return proxies (or even references) as their function results. A pseudosignature-based approach allows conversions both on the user side and the algorithm side without any of the gyrations and subtlety that shows up with valid expressions.
Just write it with a temporary variable (named or not) and avoid all the problems.

At Tue, 28 Sep 2010 17:03:40 +0100, Mathias Gaunard wrote:
On 28/09/10 16:21, Jeremiah Willcock wrote:
The issue is that convertibility constraints are too easy to get wrong (within algorithms); see p. 8 of N1758 that was linked to before. You can write them without is_convertible (as they were done before). Using is_same<decltype(...), ...> is fine, but that might cause trouble for users, since now they can't return proxies (or even references) as their function results. A pseudosignature-based approach allows conversions both on the user side and the algorithm side without any of the gyrations and subtlety that shows up with valid expressions.
Just write it with a temporary variable (named or not) and avoid all the problems.
Exactly the kind of gyrations that algorithm authors shouldn't have to put up with. The expression of concepts should optimize convenience for algorithm authors, not for concept writers, since many algorithms will use any given concept. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Just to show the subtlety of using valid expressions, the p(*f) constraint (with or without a conversion on the outside) would not allow the following:

struct p
{
    template <typename T>
    bool operator()(T) const { /* Something invalid */ }
    bool operator()(bool) const { return true; }
};

vector<bool> v;
find_if(v.begin(), v.end(), p());
Ah... this is a good example for demonstrating subtlety. It's not entirely obvious whether this demonstrates a flaw in or motivates a solution to any particular approach to concepts. This seems to be the programmer's error -- they have failed to understand something very important about vector<bool>. I suppose this discussion ultimately goes down the road of "do you just tell the programmer why it fails, or do you try to match the programmer's intent". Given that the compiler already does the wrong thing (in a usually sane way), failure is good enough for me :) Out of curiosity, what would the pseudo-signature solution look like? Andrew Sutton

At Tue, 28 Sep 2010 11:21:55 -0400 (EDT), Jeremiah Willcock wrote:
is_convertible<decltype(f != l), bool>::value
is_convertible<decltype(p(*f)), bool>::value
These can be written as "normal" valid expressions:
bool(f != l)
bool(p(*f))
Isn't bool(x) equivalent to (bool)x? I think that's valid code for any x. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, 28 Sep 2010, David Abrahams wrote:
At Tue, 28 Sep 2010 11:21:55 -0400 (EDT), Jeremiah Willcock wrote:
is_convertible<decltype(f != l), bool>::value
is_convertible<decltype(p(*f)), bool>::value
These can be written as "normal" valid expressions:
bool(f != l)
bool(p(*f))
Isn't bool(x) equivalent to (bool)x? I think that's valid code for any x.
Yes. My comment there was just that you did not need decltype and is_convertible to write those two constraints -- a standard cast would work. -- Jeremiah Willcock

At Tue, 28 Sep 2010 13:29:15 -0400 (EDT), Jeremiah Willcock wrote:
On Tue, 28 Sep 2010, David Abrahams wrote:
At Tue, 28 Sep 2010 11:21:55 -0400 (EDT), Jeremiah Willcock wrote:
is_convertible<decltype(f != l), bool>::value
is_convertible<decltype(p(*f)), bool>::value
These can be written as "normal" valid expressions:
bool(f != l)
bool(p(*f))
Isn't bool(x) equivalent to (bool)x? I think that's valid code for any x.
Yes. My comment there was just that you did not need decltype and is_convertible to write those two constraints -- a standard cast would work.
I don't think there's any standard cast that can check implicit convertibility without also allowing other undesired things. For example, static_cast<Derived*>(base_ptr) works, but Base* can't be converted to Derived*. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/10 18:27, David Abrahams wrote:
At Tue, 28 Sep 2010 11:21:55 -0400 (EDT), Jeremiah Willcock wrote:
is_convertible<decltype(f != l), bool>::value
is_convertible<decltype(p(*f)), bool>::value
These can be written as "normal" valid expressions:
bool(f != l)
bool(p(*f))
Isn't bool(x) equivalent to (bool)x? I think that's valid code for any x.
That's valid code for any object x of a built-in type or that is implicitly convertible to any built-in type.

At Tue, 28 Sep 2010 10:00:54 -0500, Andrew Sutton wrote:
It seems to me that writing constraints in terms of valid expressions gives you flexibility when you want it, but doesn't preclude the strengthening of requirements when needed.
It's not that there's anything you *can't* express with valid expressions, it's that they're difficult to use correctly, and the most natural way of using them creates a big mess for algorithm writers. Furthermore, they don't offer any compelling value over the alternative. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/10 18:33, David Abrahams wrote:
At Tue, 28 Sep 2010 10:00:54 -0500, Andrew Sutton wrote:
It seems to me that writing constraints in terms of valid expressions gives you flexibility when you want it, but doesn't preclude the strengthening of requirements when needed.
It's not that there's anything you *can't* express with valid expressions, it's that they're difficult to use correctly, and the most natural way of using them creates a big mess for algorithm writers. Furthermore, they don't offer any compelling value over the alternative.
Please demonstrate how you check that a type models a concept without using a set of valid expressions to define the concept, but only with signatures in an archetype-like fashion. My insight is that you can't. Therefore what compelling value expressions give over archetypes becomes fairly obvious: they do what we want, while archetypes don't and do something different. You said it yourself: expressions are only half the solution; archetypes are the other half. They're not the whole and they're no substitute for using expressions to define and test concepts.

At Tue, 28 Sep 2010 18:45:23 +0100, Mathias Gaunard wrote:
On 28/09/10 18:33, David Abrahams wrote:
At Tue, 28 Sep 2010 10:00:54 -0500, Andrew Sutton wrote:
It seems to me that writing constraints in terms of valid expressions gives you flexibility when you want it, but doesn't preclude the strengthening of requirements when needed.
It's not that there's anything you *can't* express with valid expressions, it's that they're difficult to use correctly, and the most natural way of using them creates a big mess for algorithm writers. Furthermore, they don't offer any compelling value over the alternative.
Please demonstrate how you check that a type models a concept without using a set of valid expressions to define the concept, but only with signatures
See the implementation of ConceptGCC. There's a demonstration in working code. But...
in an archetype-like fashion.
...the whole sentence became meaningless to me when you added "in an archetype-like fashion." I have no idea what it means, and it seems incongruous to bring up archetypes in the context of a discussion of the mechanics of checking conformance of a type to a concept. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/2010 19:22, David Abrahams wrote:
At Tue, 28 Sep 2010 18:45:23 +0100, Mathias Gaunard wrote:
On 28/09/10 18:33, David Abrahams wrote:
Please demonstrate how you check that a type models a concept without using a set of valid expressions to define the concept, but only with signatures
See the implementation of ConceptGCC. There's a demonstration in working code. But...
Sorry, I assumed we were talking about standard C++, or possibly about the soon-to-be-standard C++0x. I.e. solutions that we can really use in the real world.

At Sun, 03 Oct 2010 22:52:20 +0100, Mathias Gaunard wrote:
Sorry, I assumed we were talking about standard C++, or eventually about the soon-to-standard C++0x.
I.e. solutions that we can really use in the real world.
I actually think it's quite possible to make something like this work:

template <class Iter>
struct ForwardIterator
  : associated<value_type>
  , signature<Iter&, operator_increment(Iter&)>
  , signature<value_type&, operator_star(Iter)>
  , EqualityComparable<Iter>
{};

(maybe not exactly that, but something like it). No time to explore that right now, though, sorry ;-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, Sep 28, 2010 at 10:45 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 28/09/10 18:33, David Abrahams wrote:
At Tue, 28 Sep 2010 10:00:54 -0500, Andrew Sutton wrote:
It seems to me that writing constraints in terms of valid expressions gives you flexibility when you want it, but doesn't preclude the strengthening of requirements when needed.
It's not that there's anything you *can't* express with valid expressions, it's that they're difficult to use correctly, and the most natural way of using them creates a big mess for algorithm writers. Furthermore, they don't offer any compelling value over the alternative.
Please demonstrate how you check that a type models a concept without using a set of valid expressions to define the concept, but only with signatures in an archetype-like fashion.
See 14.9.2.1 [concept.map.fct]p4 in http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2773.pdf
My insight is that you can't.
There are several ways to accomplish this. The concepts proposal cited above mapped signatures to valid expressions when checking a type against a concept, but one could also perform a signature match with varying degrees of "fuzziness" (for references, convertibility, etc.).
Therefore what compelling value expressions give over archetypes becomes fairly obvious: they do what we want, while archetypes don't and do something different.
Having spent several years engrossed in concepts, I can tell you outright that valid expressions rarely "do what we want" from the type-checking perspective. There's a simple example from earlier in this thread:
bool(f != l)
bool(p(*f))
Isn't bool(x) equivalent to (bool)x? I think that's valid code for any x.
That's valid code for any object x of a built-in type or that is implicitly convertible to any built-in type.
Since the valid expression has the conversion written explicitly, as a function-style cast, a class type with an explicit conversion to bool (or any built-in type) would also meet the requirements of this valid expression. The obvious point here is that the simplest valid expression we've seen in this discussion has already been misinterpreted. But, that's not the interesting part. Say we have values "f" and "l" in a constrained template with the valid expression "bool(f != l)". What can we write in the constrained template, such that we're sure it will instantiate correctly?

bool(f != l)
Yep, that works. It's just like the valid expression shows.

(bool)(f != l)
Yep, that works, since function-style and C-style casts have the same behavior.

if (f != l)
Yes, that works, because this is contextual conversion to bool, so explicit conversion operators will be considered.

bool result = f != l;
Nope, doesn't work, because we only said that we require an explicit cast to "bool" to work. The constrained template must be considered ill-formed, or you've lost type soundness.

if (!(f != l))
Nope, doesn't work, because we only said that f != l must be explicitly convertible to bool; we didn't say that it needed to have a valid ! operator.

some_function_template(f != l); // where template<typename T> void some_function_template(T);
What does 'T' deduce to in this case? How would we describe the other requirements on the result type of f != l, so that we could ensure that we meet some_function_template's constraints on 'T'?
You said it yourself: expressions are only half the solution; archetypes are the other half. They're not the whole and they're no substitute for using expressions to define and test concepts.
The whole point of concepts is that they have to be both. One description allows you to type-check a template definition and check a type against the requirements of a concept, and when both pass, you want to guarantee that the template instantiates properly. That's type soundness, and it is a fundamental goal of concepts. Valid expressions are fine for checking whether a type conforms to a concept, but they're poor at describing requirements for type-checking templates. Signatures naturally support type-checking in templates, and also make it fairly easy to check whether a type conforms to a concept. - Doug

On 28/09/10 13:20, Dave Abrahams wrote:
On Sep 28, 2010, at 4:33 AM, Mathias Gaunard<mathias.gaunard@ens-lyon.org> wrote:
I've done it and there was no particular problem. I just need to check that the arguments are input iterators and a unary function object taking the same value as the iterator value type and returning a boolean. (Of course, by taking a particular value, I really mean taking something that can be converted from that particular value, and by returning a particular value I mean returning a type convertible to that value, but all of those implicit conversions are taken care of automatically by expressions.)
All of that handwaving is exactly where the problem is.
I think I know where the problem is: you're confusing two distinct features that exist within the realm of concepts, and then comparing apples and oranges. The two features are:
1) the ability to check that a set of types model a concept,
2) the ability to check that the body of a function or class that requires certain parameters to be models of a particular concept is valid with the least amount of requirements that models of those concepts can provide.

The former C++0x proposal provides both features. Boost.ConceptCheck provides both features as well, albeit using vastly different implementation techniques.

For better error messages in libraries written within Boost, feature 1) is vastly more important than feature 2), since the only thing 2) does is ensuring the library is self-consistent (i.e. there is no need for these tests at all once the code has been properly debugged), while 1) ensures the user is rightly providing the preconditions required by the library.

In Boost.ConceptCheck, 1) is achieved by defining a template function, whose body (a statement) contains all the expressions (and may contain statements too) that are necessary for the types to model the concept. If instantiating this function fails, which means the types do not model the concept, then you get a hard error you cannot catch at compile-time.

What I'm suggesting is simply to replace that fairly dumb system of a function body by a set of expressions considered in an SFINAE context, which would allow us to actually test whether the concept is being modeled or not without causing a hard error, which would be priceless for hugely improving the error messages.

ConceptCheck also provides the mechanism of archetypes to implement feature 2), but that is quite unrelated, despite your insistence on talking about it. Yes, I agree that 2) helps in finding bugs in library code, such as possibly your find_if example, but it doesn't help in providing better error messages for the user, assuming a bug-free library. It's a feature for the library developer, not its user.

I'm sorry, but I still don't see how your comments are at all relevant.
When the computer is checking your work, you don't get that luxury ;-)
Right, but here we're not talking about the computer checking my work, but about checking that what the user is giving me is what I'm expecting, so that I can tell him what he did wrong without throwing too much noise at him.
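A sketch of this suggestion (all names invented; the same expression-SFINAE trick as the has_postincrement_deref sketch earlier): bundle the concept's valid expressions into one testable predicate, so the library can report failure with a single static_assert instead of a hard error inside a checking function.

    #include <utility>  // std::declval

    template <typename It>
    class models_input_iterator
    {
        template <typename U>
        static auto test(int)
            -> decltype(void(*std::declval<U&>()),                       // *i
                        void(++std::declval<U&>()),                      // ++i
                        void(std::declval<U&>() != std::declval<U&>()),  // i != j
                        char());
        template <typename U>
        static char (&test(...))[2];
    public:
        static bool const value = sizeof(test<It>(0)) == 1;
    };

    template <typename It>
    void my_algorithm(It, It)
    {
        static_assert(models_input_iterator<It>::value,
                      "my_algorithm requires a model of InputIterator");
        // ...
    }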

At Tue, 28 Sep 2010 17:23:05 +0100, Mathias Gaunard wrote:
On 28/09/10 13:20, Dave Abrahams wrote:
On Sep 28, 2010, at 4:33 AM, Mathias Gaunard<mathias.gaunard@ens-lyon.org> wrote:
I've done it and there was no particular problem. I just need to check that the arguments are input iterators and a unary function object taking the same value as the iterator value type and returning a boolean. (Of course, by taking a particular value, I really mean taking something that can be converted from that particular value, and by returning a particular value I mean returning a type convertible to that value, but all of those implicit conversions are taken care of automatically by expressions.)
All of that handwaving is exactly where the problem is.
I think I know where the problem is: you're confusing two distinct features that exist within the realm of concepts, and then comparing apples and oranges.
The two features are:
1) the ability to check that a set of types model a concept,
2) the ability to check that the body of a function or class that requires certain parameters to be models of a particular concept is valid with the least amount of requirements that models of those concepts can provide.
The former C++0x proposal provides both features. Boost.ConceptCheck provides both features as well, albeit using vastly different implementation techniques.
Actually the implementation technique for checking bodies is exactly the same: the compiler generates archetypes internally.
For better error messages in libraries written within Boost, feature 1) is vastly more important than feature 2), since the only thing 2) does is ensuring the library is self-consistent (i.e. there is no need for these tests at all once the code has been properly debugged), while 1) ensures the user is rightly providing the preconditions required by the library.
Ha, everyone underestimates the importance of feature 2). Without it you are almost guaranteed to ship a buggy library, because the types you test your library with will not touch the boundaries of what's possible under the library's constraint specification. The library issues list is littered with corrections and updates to the STL specification because of this.
In Boost.ConceptCheck, 1) is achieved by defining a template function, whose body (a statement) contains all the expressions (and may contain statements too) that are necessary for the types to model the concept. If instantiating this function fails, which means the types do not model the concept, then you get a hard error you cannot catch at compile-time.
Just for the record, I wrote the current implementation of Boost.ConceptCheck, so I do understand how it works.
What I'm suggesting is simply to replace that fairly dumb system of a function body by a set of expressions considered in an SFINAE context,
The current system isn't dumb for C++03; it's the best you can do for many checks. There's no way to check for copy-constructibility, for example, without causing a compile-time error when the check fails.
which would allow us to actually test whether the concept is being modeled or not without causing a hard error, which would be priceless in order to hugely improve the error messages.
I'm not convinced it would be a _huge_ improvement. You might be able to get rid of one level of instantiation stack, but AFAICT that's not a huge difference.
ConceptCheck also provides the mechanism of archetypes to implement feature 2), but that is quite unrelated, despite your insistence on talking about it.
It's not unrelated to the reason this part of the thread started. People were re-considering the design of Boost.ConceptCheck, and I noted that valid expressions were not the best way to express concepts.
Yes, I agree that 2) helps in finding bugs in library code, such as possibly your find_if example, but it doesn't help in providing better error messages for the user assuming a bug-free library.
That's a big assumption. As I said, without a way to check template bodies, you're almost guaranteed to ship a buggy library, and that chance is *massively* increased by the use of a "valid expressions" approach.

Unfortunately, without real concepts we may not be able to do better, because the pseudo-signature approach inserts type conversions that we'd have to write by hand.

Now here's an interesting thought: write a library using TMP that generates "concept wrappers" that will force the necessary conversions. Would require heavy use of rvalue references to avoid needless copies. Hmm...
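(For concreteness, a speculative sketch of that "concept wrapper" idea -- all names here are mine, and it ignores the rvalue-reference forwarding just mentioned. The wrapper plays the role the pseudo-signature would: whatever the user's predicate returns is forced through the bool conversion before the algorithm ever sees it.)

-----
// hypothetical sketch, not part of any existing library
template <class F>
struct predicate_wrapper
{
    explicit predicate_wrapper(F f) : f_(f) {}

    template <class A>
    bool operator()(A const& a) const
    {
        return f_(a);  // the result, whatever its type, converts to bool here
    }

    F f_;
};
-----

An algorithm instantiated with predicate_wrapper<F> could then write !p(x) freely, because the result really is a bool.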
It's a feature for the library developer, not its user.
Is a bug-free library a feature for the library developer, or its user? :-)
I'm sorry, but I still don't see how your comments are at all relevant.
Depends what you think the question is. IMO if we're going to talk about a library redesign for BCCL it should consider whether we can express concepts more effectively.
When the computer is checking your work, you don't get that luxury ;-)
Right, but here we're not talking about the computer checking my work, but checking that what the user is giving me is what I'm expecting, so that I can tell him what he did wrong without throwing too much noise at him.
Maybe that's all you're talking about, but it's never been all I'm talking about. Personally I'm not interested in a discussion that doesn't at least attempt to address the other half of the problem. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/2010 19:14, David Abrahams wrote:
Just for the record, I wrote the current implementation of Boost.ConceptCheck, so I do understand how it works.
My bad, please excuse my crude inaccurate explanation then.
What I'm suggesting is simply to replace that fairly dumb system of a function body by a set of expressions considered in an SFINAE context,
The current system isn't dumb for C++03; it's the best you can do for many checks. There's no way to check for copy-constructibility, for example, without causing a compile-time error when the check fails.
SFINAE extended to expressions works in C++03 too, although it is not a mandatory feature and was implemented in most compilers only recently. (GCC didn't support it before, mostly due to mangling issues.)
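(For reference, a minimal sketch of that technique -- the names are mine, and this is the well-known sizeof trick rather than anything Boost.ConceptCheck actually does. The tested expression lives in a deduced parameter type, so a failure removes the overload instead of producing a hard error.)

-----
typedef char yes;            // sizeof == 1
typedef char (&no)[2];       // sizeof == 2

template <class T> T& make();   // never defined; used only inside sizeof

template <class T>
yes check_incrementable(T*, char (*)[sizeof(++make<T>(), 1)] = 0);
no  check_incrementable(...);

template <class T>
struct is_incrementable
{
    static const bool value =
        sizeof(check_incrementable((T*)0)) == sizeof(yes);
};

// is_incrementable<int>::value is true; for a type with no operator++
// the first overload silently drops out and value is false.
-----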
which would allow us to actually test whether the concept is being modeled or not without causing a hard error, which would be priceless in order to hugely improve the error messages.
I'm not convinced it would be a _huge_ improvement. You might be able to get rid of one level of instantiation stack, but AFAICT that's not a huge difference.
From my own experience using ConceptCheck, you still get a lot of unwanted errors. My experience did involve using ranges, iterators and their associated concepts, with archetypes used in the concept body. Thinking about it, since I needed to use archetypes to define my concept, a unified solution might be nice indeed.
Unfortunately, without real concepts we may not be able to do better, because the pseudo-signature approach inserts type conversions that we'd have to write by hand. Now here's an interesting thought: write a library using TMP that generates "concept wrappers" that will force the necessary conversions. Would require heavy use of rvalue references to avoid needless copies. Hmm...
A crazy idea if you want to be TMP-heavy: maybe you could represent the valid expressions with Proto and infer an archetype from these. This could provide better syntax than the "operator_increment" thing you suggested in some other part of the thread.
Depends what you think the question is. IMO if we're going to talk about a library redesign for BCCL it should consider whether we can express concepts more effectively.
But is someone thinking and actively working on a nice universal solution? Also, while time is spent thinking of that, the problem of bad error messages keeps on affecting more code.
Maybe that's all you're talking about, but it's never been all I'm talking about. Personally I'm not interested in a discussion that doesn't at least attempt to address the other half of the problem.
I didn't realize you shifted the original argument that much. Surely the couple of successive messages that gave no more explanations than "try it and you will see that approach is wrong" didn't help clear that up.

At Mon, 04 Oct 2010 00:39:41 +0100, Mathias Gaunard wrote:
On 28/09/2010 19:14, David Abrahams wrote:
Just for the record, I wrote the current implementation of Boost.ConceptCheck, so I do understand how it works.
My bad, please excuse my crude inaccurate explanation then.
No problem at all; just trying to save time.
The current system isn't dumb for C++03; it's the best you can do for many checks. There's no way to check for copy-constructibility, for example, without causing a compile-time error when the check fails.
SFINAE extended to expressions works in C++03 too, although it is not a mandatory feature and was implemented in most compilers only recently. (GCC didn't support it before, mostly due to mangling issues.)
That is not "works in C++03," that's "some C++03 compilers have an extension for it." I tried this with gcc 4.5.1 and it still couldn't detect copyability without causing an error:

-----
template <unsigned> struct fu {};

template <class T> T make();

template <class T>
char* copyable(T*, fu< sizeof(T(make<T>())) >* = 0);

int copyable(...);

struct X { private: X(X const&); };

char* test0 = copyable((int*)0); // works, proving int is copyable
int test1 = copyable((X*)0);     // I get an error here
-----
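(A hedged aside: this particular example changes under C++0x/C++11 rules, where core issue 1170 made access checking part of template argument substitution, so the private copy constructor becomes a substitution failure rather than a hard error, and a standard trait wraps the whole test:)

-----
#include <type_traits>  // C++11

struct X { private: X(X const&); };

static_assert( std::is_copy_constructible<int>::value, "int is copyable");
static_assert(!std::is_copy_constructible<X>::value,   "X is not copyable");
-----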
which would allow us to actually test whether the concept is being modeled or not without causing a hard error, which would be priceless in order to hugely improve the error messages.
I'm not convinced it would be a _huge_ improvement. You might be able to get rid of one level of instantiation stack, but AFAICT that's not a huge difference.
From my own experience using ConceptCheck, you still get a lot of unwanted errors.
The question is whether it's possible to do much better.
Unfortunately, without real concepts we may not be able to do better, because the pseudo-signature approach inserts type conversions that we'd have to write by hand. Now here's an interesting thought: write a library using TMP that generates "concept wrappers" that will force the necessary conversions. Would require heavy use of rvalue references to avoid needless copies. Hmm...
A crazy idea if you want to be TMP-heavy: maybe you could represent the valid expressions with Proto and infer an archetype from these.
This could provide better syntax than the "operator_increment" thing you suggested in some other part of the thread.
I'm not sure we can do much better with proto unless we have decltype.
Depends what you think the question is. IMO if we're going to talk about a library redesign for BCCL it should consider whether we can express concepts more effectively.
But is someone thinking and actively working on a nice universal solution?
Dunno.
Also, while time is spent thinking of that, the problem of bad error messages keeps on affecting more code.
Feel free to make incremental improvements, if you think you can.
Maybe that's all you're talking about, but it's never been all I'm talking about. Personally I'm not interested in a discussion that doesn't at least attempt to address the other half of the problem.
I didn't realize you shifted the original argument that much.
Since I started the original argument, don't I get to decide what counts as a shift?
Surely the couple of successive messages that gave no more explanations than "try it and you will see that approach is wrong" didn't help clear that up.
Sorry, been too busy to go into detail. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 04/10/10 00:39, Mathias Gaunard wrote:
On 28/09/2010 19:14, David Abrahams wrote:
Unfortunately, without real concepts we may not be able to do better, because the pseudo-signature approach inserts type conversions that we'd have to write by hand. Now here's an interesting thought: write a library using TMP that generates "concept wrappers" that will force the necessary conversions. Would require heavy use of rvalue references to avoid needless copies. Hmm...
A crazy idea if you want to be TMP-heavy: maybe you could represent the valid expressions with Proto and infer an archetype from these.
This could provide better syntax than the "operator_increment" thing you suggested in some other part of the thread.
Indeed, I already had that idea, and I implemented a proof-of-concept. I guess my posts went unnoticed in this huge sprawling thread. See:

http://article.gmane.org/gmane.comp.lib.boost.devel/208946
http://article.gmane.org/gmane.comp.lib.boost.devel/208996

As you suggest, I think it should be possible to use a single definition to check models against concepts, make archetypes, and do concept-based overloading. What I'm not sure is whether (a) it's worthwhile to do it, and (b) whether it would be worth attempting the more ambitious "concept wrappers" idea Dave suggests above.

John Bytheway

At Mon, 04 Oct 2010 09:48:12 +0100, John Bytheway wrote:
On 04/10/10 00:39, Mathias Gaunard wrote:
On 28/09/2010 19:14, David Abrahams wrote:
Unfortunately, without real concepts we may not be able to do better, because the pseudo-signature approach inserts type conversions that we'd have to write by hand. Now here's an interesting thought: write a library using TMP that generates "concept wrappers" that will force the necessary conversions. Would require heavy use of rvalue references to avoid needless copies. Hmm...
A crazy idea if you want to be TMP-heavy: maybe you could represent the valid expressions with Proto and infer an archetype from these.
This could provide better syntax than the "operator_increment" thing you suggested in some other part of the thread.
Indeed, I already had that idea, and I implemented a proof-of-concept. I guess my posts went unnoticed in this huge sprawling thread. See:
http://article.gmane.org/gmane.comp.lib.boost.devel/208946 http://article.gmane.org/gmane.comp.lib.boost.devel/208996
As you suggest, I think it should be possible to use a single definition to check models against concepts, make archetypes, and do concept-based overloading. What I'm not sure is whether (a) it's worthwhile to do it, and (b) whether it would be worth attempting the more ambitious "concept wrappers" idea Dave suggests above.
One step at a time, maybe. What you did looks good. One thing I should point out, though, is that SFINAE-controlled overloading and good error messages from concept checks are not exactly compatible. I think you need to leave off the SFINAE on the overloads with the least-refined concepts, somehow, if you want something better than a ridiculous list of non-matching candidates. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 04/10/10 11:22, David Abrahams wrote:
At Mon, 04 Oct 2010 09:48:12 +0100, John Bytheway wrote:
On 04/10/10 00:39, Mathias Gaunard wrote:
On 28/09/2010 19:14, David Abrahams wrote:
Unfortunately, without real concepts we may not be able to do better, because the pseudo-signature approach inserts type conversions that we'd have to write by hand. Now here's an interesting thought: write a library using TMP that generates "concept wrappers" that will force the necessary conversions. Would require heavy use of rvalue references to avoid needless copies. Hmm...
A crazy idea if you want to be TMP-heavy: maybe you could represent the valid expressions with Proto and infer an archetype from these.
This could provide better syntax than the "operator_increment" thing you suggested in some other part of the thread.
Indeed, I already had that idea, and I implemented a proof-of-concept. I guess my posts went unnoticed in this huge sprawling thread. See:
http://article.gmane.org/gmane.comp.lib.boost.devel/208946 http://article.gmane.org/gmane.comp.lib.boost.devel/208996
As you suggest, I think it should be possible to use a single definition to check models against concepts, make archetypes, and do concept-based overloading. What I'm not sure is whether (a) it's worthwhile to do it, and (b) whether it would be worth attempting the more ambitious "concept wrappers" idea Dave suggests above.
One step at a time, maybe. What you did looks good. One thing I should point out, though, is that SFINAE-controlled overloading and good error messages from concept checks are not exactly compatible.
Yes, I noticed that.
I think you need to leave off the SFINAE on the overloads with the least-refined concepts, somehow, if you want something better than a ridiculous list of non-matching candidates.
I would imagine that when one is doing concept-based dispatching one would use an outer function which asserted conformance to the least-refined concept, and then forwarded to various implementations for different refinements (e.g. essentially what happens now with iterator traits). That should avoid this particular issue.

More problematic (I think) is that the SFINAE-based error messages are probably never going to be as good as from e.g. Boost.ConceptCheck, because they will have to go through some translation layer to make them non-fatal, and thus won't be in the 'native language' of compiler errors and won't be closely tied to the source location where the requirements are defined.

John Bytheway
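(For concreteness, a sketch of the dispatching shape described above -- all names are mine, in the style of the existing iterator-traits idiom: one unconstrained entry point pins down the least-refined concept, then forwards to tag-selected implementations.)

-----
#include <iterator>
#include <boost/concept_check.hpp>

template <class It, class T>
It my_find_impl(It first, It last, T const& v, std::input_iterator_tag)
{
    for (; first != last; ++first)
        if (*first == v) break;
    return first;
}

template <class It, class T>
It my_find_impl(It first, It last, T const& v, std::random_access_iterator_tag)
{
    // a real implementation might exploit random access here;
    // plain forwarding shown for brevity
    for (; first != last; ++first)
        if (*first == v) break;
    return first;
}

template <class It, class T>
It my_find(It first, It last, T const& v)
{
    // single, non-SFINAE entry point: a failed concept check here yields
    // one clear error instead of a list of non-matching candidates
    BOOST_CONCEPT_ASSERT((boost::InputIterator<It>));
    return my_find_impl(first, last, v,
        typename std::iterator_traits<It>::iterator_category());
}
-----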

On 27/09/10 17:56, David Abrahams wrote:
On Sep 27, 2010, at 9:51 AM, Andrew Sutton wrote:
* Write down the find_if algorithm in a natural way * write down the valid expressions implied by that algorithm * now re-check the algorithm against the valid expressions
So, no takers? :-)
I'm not sure I fully understand the problem.
Of course not; you didn't try it! ;-)
Surely, you can use the Concept_check library to describe constraints for the find_if algorithm, and the Concept_check library internally validates those constraints against expressions expected to be valid. To me, that would imply that valid expressions can be used to describe constraints.
Am I missing something?
Among other things, you're missing the other half of concept checking: checking algorithm bodies against constraints (which is done in the Concept_check library by using archetypes).
P.S. I'm not claiming concepts can't be described via valid expressions; I'm claiming it's the wrong approach, for reasons that should become evident to anyone who *actually does the exercise*.
I did do the exercise, so SPOILERS ahead for those who have not.

I'm still not sure what your point is. I conjecture that the issue to which you were trying to draw attention was something to do with operator! and bool conversions. In particular I note that the libstdc++ implementation of find_if uses !bool(__pred(*__first)) rather than simply !__pred(*__first), which suggests this distinction is (at least potentially) important in practice.

Presumably the concept modelled by the predicate merely asserts that its return type is convertible to bool (certainly that's all I had in mind), and in principle !__pred(*__first) is undefined behaviour with respect to the concept.

Is your point that an archetype based on valid expressions would fail to detect the flaw in the naive implementation of find_if, but that an archetype based on member definitions would catch it?

John Bytheway

At Mon, 27 Sep 2010 23:27:36 +0100, John Bytheway wrote:
On 27/09/10 17:56, David Abrahams wrote:
P.S. I'm not claiming concepts can't be described via valid expressions; I'm claiming it's the wrong approach, for reasons that should become evident to anyone who *actually does the exercise*.
I did do the exercise, so SPOILERS ahead for those who have not.
I'm still not sure what your point is. I conjecture that the issue to which you were trying to draw attention was something to do with operator! and bool conversions.
In particular I note that the libstdc++ implementation of find_if uses !bool(__pred(*__first)) rather than simply !__pred(*__first), which suggests this distinction is (at least potentially) important in practice.
Good! You're in exactly the right neighborhood. And depending on the valid expressions you came up with for constraints, there may be more problems of exactly that nature in that algorithm. I can't see your work, so I don't know.
Presumably the concept modelled by the predicate merely asserts that its return type is convertible to bool (certainly that's all I had in mind), and in principle !__pred(*__first) is undefined behaviour with respect to the concept.
"Undefined behavior" isn't quite the right word; the concept constraint would prevent the template body from compiling.
Is your point that an archetype based on valid expressions would fail to detect the flaw in the naive implementation of find_if, but that an archetype based on member definitions would catch it?
[Those things you're calling member definitions are actually called pseudo-signatures. They don't (only) define members.]

Well, no. The flaw isn't in the implementation of find_if; it's in the actual concept requirements imposed by the "valid expression" form of the concept definition. Any concept mechanism that doesn't allow you to write a straightforward find_if is broken.

My point is that using valid expressions makes it hard, if not impossible, to write a clean implementation of even the most elementary algorithms. (IIRC they also make it really easy to fail to satisfy those requirements in baffling ways, but I can't come up with an example offhand). They also tend to gloss over associated types that you eventually want.

I wish I could call forth all the clear insights about this that I had when we were fighting the concepts battles. I'll let you know if anything comes back to me. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/10 01:17, David Abrahams wrote:
At Mon, 27 Sep 2010 23:27:36 +0100, John Bytheway wrote:
Presumably the concept modelled by the predicate merely asserts that its return type is convertible to bool (certainly that's all I had in mind), and in principle !__pred(*__first) is undefined behaviour with respect to the concept.
"Undefined behavior" isn't quite the right word; the concept constraint would prevent the template body from compiling.
For that paragraph I was talking in not-enforced-in-the-language concepts world.
Is your point that an archetype based on valid expressions would fail to detect the flaw in the naive implementation of find_if, but that an archetype based on member definitions would catch it?
[Those things you're calling member definitions are actually called pseudo-signatures. They don't (only) define members.]
Thanks, yes, I had encountered that name, and forgotten it...
Well, no. The flaw isn't in the implementation of find_if; it's in the actual concept requirements imposed by the "valid expression" form of the concept definition. Any concept mechanism that doesn't allow you to write a straightforward find_if is broken.
I'm still struggling; I hope I can entice you to explain a little further.

For concreteness, I imagine that a "straightforward" implementation of find_if in concept land looks something like this (like everyone else here I reserve the right to get the syntax wrong, in my case having never worked with concepts this way before):

template<InputIterator I, typename F>
I find_if(I start, I const finish, F const p)
    where Predicate<F, InputIterator<I>::value_type>
{
    while (start != finish && !p(*start))
        ++start;
    return start;
}

Thus, I imagine that the pseudo-signature technique allows this implementation. Furthermore, I would have written the Predicate concept like

auto concept Predicate<typename F, typename A>
{
    bool operator()(A);
}

(I think this is simple enough to warrant an auto concept; no one else in this thread has mentioned them, so I don't know your opinions).

So, my first question is: will the above find_if compile? If so, does it imply that predicates must return bool, or merely that they must return something convertible to bool? If the latter, then I don't see any difference from the valid expression approach.

So, I guess it is the former? In that case I think I see what you're driving at. It prevents me from writing a perverse predicate such as:

struct funny_bool
{
    operator bool() { return true; }
    bool operator!() { return true; }
};

struct predicate
{
    funny_bool operator()(int) { return funny_bool(); }
};

although that would have been fine under the valid-expressions framework (and have potentially surprising behaviour).

John

On Sep 28, 2010, at 12:18 PM, John Bytheway wrote:
For concreteness, I imagine that a "straightforward" implementation of find_if in concept land looks something like this (like everyone else here I reserve the right to get the syntax wrong, in my case having never worked with concepts this way before):
template<InputIterator I, typename F>
I find_if(I start, I const finish, F const p)
    where Predicate<F, InputIterator<I>::value_type>
{
    while (start != finish && !p(*start))
        ++start;
    return start;
}
Thus, I imagine that the pseudo-signature technique allows this implementation.
Furthermore, I would have written the Predicate concept like
auto concept Predicate<typename F, typename A> { bool operator()(A); }
Small syntax correction: You need F as the first parameter of that operator ().
So, my first question is: will the above find_if compile?
Yes.
If so, does it imply that predicates must return bool, or merely that they must return something convertible to bool?
The latter. That's why they're called pseudo-signatures.
If the latter, then I don't see any difference from the valid expression approach.
The difference is that, no matter what the actual predicate returns, it is treated as a bool.
So, I guess it is the former? In that case I think I see what you're driving at. It prevents me from writing a perverse predicate such as:
struct funny_bool { operator bool() { return true; } bool operator!() { return true; } };
struct predicate { funny_bool operator()(int) { return funny_bool(); } };
although that would have been fine under the valid-expressions framework (and have potentially surprising behaviour).
No, if I remember correctly how concepts work, that's actually a perfectly valid thing to do. In find_if<InputIterator, predicate>, p(*start) is bound to the pseudo-signature operator(), which returns a bool. Therefore, the result of p(*start) is converted to a bool first, and only then is operator ! applied to it, so you get the normal not-operator. But I might remember incorrectly. Sebastian

On Tue, Sep 28, 2010 at 1:04 PM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
On Sep 28, 2010, at 12:18 PM, John Bytheway wrote:
For concreteness, I imagine that a "straightforward" implementation of find_if in concept land looks something like this (like everyone else here I reserve the right to get the syntax wrong, in my case having never worked with concepts this way before):
template<InputIterator I, typename F>
I find_if(I start, I const finish, F const p)
    where Predicate<F, InputIterator<I>::value_type>
{
    while (start != finish && !p(*start))
        ++start;
    return start;
}
Thus, I imagine that the pseudo-signature technique allows this implementation.
Furthermore, I would have written the Predicate concept like
auto concept Predicate<typename F, typename A> { bool operator()(A); }
Small syntax correction: You need F as the first parameter of that operator ().
So, my first question is: will the above find_if compile?
Yes.
If so, does it imply that predicates must return bool, or merely that they must return something convertible to bool?
The latter. That's why they're called pseudo-signatures.
If the latter, then I don't see any difference from the valid expression approach.
The difference is that, no matter what the actual predicate returns, it is treated as a bool.
So, I guess it is the former? In that case I think I see what you're driving at. It prevents me from writing a perverse predicate such as:
struct funny_bool { operator bool() { return true; } bool operator!() { return true; } };
struct predicate { funny_bool operator()(int) { return funny_bool(); } };
although that would have been fine under the valid-expressions framework (and have potentially surprising behaviour).
No, if I remember correctly how concepts work, that's actually a perfectly valid thing to do. In find_if<InputIterator, predicate>, p(*start) is bound to the pseudo-signature operator(), which returns a bool. Therefore, the result of p(*start) is converted to a bool first, and only then is operator ! applied to it, so you get the normal not-operator.
Yes, you've remembered correctly. As part of type-checking the "predicate" class against the "Predicate" concept, the compiler converts predicate's funny_bool return value to a bool. That way, a template that was type-checked against Predicate, where operator() returns a bool, will always be instantiated such that the operator() call returns a bool. This is crucial to the type-soundness contract that concepts provide: if a constrained template type-checks, and a set of template arguments to that template meet its constraints, the template will instantiate properly [*]. - Doug [*] There are corner cases involving broken specializations or overload sets where this won't be the case. They should be extremely rare in sane code.
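(To spell that out by hand -- a sketch restating the perverse predicate from upthread, with the two behaviours side by side:)

-----
struct funny_bool
{
    operator bool() const { return true; }
    bool operator!() const { return true; }  // perverse: !x is also "true"
};

struct predicate
{
    funny_bool operator()(int) const { return funny_bool(); }
};

int main()
{
    predicate p;

    // what the pseudo-signature binding does: convert to bool first
    bool r = p(0);
    bool via_concept = !r;        // false -- the unsurprising answer

    // what a raw "valid expressions" check would also allow
    bool direct = !p(0);          // funny_bool::operator! runs: true!

    (void)via_concept; (void)direct;
    return 0;
}
-----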

On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Would it be any more illuminating that a simple wrapper around proto::matches<Expr, SpiritGrammar>?
If you want to apply concepts to this stuff (and I'm not sure whether it's really possible), you have to go back to the classic Generic Programming Process: look at concrete algorithms, lift out common requirements, cluster into concepts, etc.
Why do you doubt it's possible?
I just don't know.
If there are template constraints not practically expressible in concepts, then aren't concepts broken?
Not necessarily. Concepts are not about expressing arbitrary template constraints. They're about describing abstractions. One might say that if you can't figure out how to write a concept for a template parameter, you don't understand the abstraction it's modelling. -- Dave Abrahams BoostPro Computing http://boostpro.com

On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:

+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum

How do I express that as a concept?
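(For reference, this is roughly how those rules are spelled today as a Proto grammar -- a sketch, with PhoenixLambda left as a stand-in since its real definition lives elsewhere:)

-----
#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

struct PhoenixLambda : proto::_ {};  // placeholder for the real lambda grammar

struct SpiritParser
  : proto::or_<
        proto::terminal<proto::_>                       // primitive parsers
      , proto::unary_plus<SpiritParser>                 // +p     : one child
      , proto::shift_right<SpiritParser, SpiritParser>  // a >> b : two children
      , proto::subscript<SpiritParser, PhoenixLambda>   // p[f]
        // ... etc, etc, etc, ad nauseum
    >
{};

// proto::matches<Expr, SpiritParser>::value answers the question at compile
// time, but nothing ties that answer to anything a concept system could
// report on nicely.
-----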
Would it be any more illuminating that a simple wrapper around proto::matches<Expr, SpiritGrammar>?
If you want to apply concepts to this stuff (and I'm not sure whether it's really possible), you have to go back to the classic Generic Programming Process: look at concrete algorithms, lift out common requirements, cluster into concepts, etc.
Why do you doubt it's possible?
I just don't know.
If there are template constraints not practically expressible in concepts, then aren't concepts broken?
Not necessarily. Concepts are not about expressing arbitrary template constraints. They're about describing abstractions. One might say that if you can't figure out how to write a concept for a template parameter, you don't understand the abstraction it's modelling.
I understand the abstraction perfectly well. It's a strongly-typed tree that conforms to a grammar. I haven't a clue how to express that as a concept. It's not a reflection of my poor understanding of my abstraction. It reflects my poor understanding of concepts. I'm sincerely asking for help trying to understand this. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Mon, Sep 27, 2010 at 10:31 AM, Eric Niebler <eric@boostpro.com> wrote:
On 9/26/2010 9:44 PM, David Abrahams wrote:
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum
How do I express that as a concept?
This is an interesting question. At first reading through this discussion, I was on the side of Dave thinking "yes, why don't you just define a SpiritParser concept?" until I thought about it a little more.

The way concepts are done now is to define the syntactic requirements (not so much the semantics) of what you require for a type. That means requiring that certain syntactic constructs be defined at the point of checking the concept (i.e. at compile time). In the case of a SpiritParser, I can see that there are really no syntactic requirements that you can impose on each parser -- and although you can make a hierarchy of these concepts by refining a general SpiritParser concept, that only shows what you syntactically require of the type.

Statements like "The type T models a SpiritParser concept if T models DefaultConstructible, Assignable, and Swappable and also supports the following constructs: ``T t; t.foo(); t.bar();``" can easily be accommodated by the current Boost.Concept_check library. I should know; I use it a lot to enforce structural/syntactic concept requirements.

What's unique in the situation of Proto/Spirit is that they deal with compile-time constructs (syntax trees, expression templates) that can only be introspected by another compile-time construct (proto transforms, proto verify, MPL, etc.). What's missing in this case is a generic means of expressing tree structures, which complements the syntactic enforcement of requirements using Boost.Concept_check. Somehow I think this has to be one of two things:

1. A compiler or extension to the language which allows for defining patterns and type enforcement for compile-time constructs. In Haskell there are Type Classes, which are very similar to (the now defunct) C++ Concepts, but Type Classes allowed for the same (stratospheric) level of introspection at compilation time. This means Type Classes can themselves be used as definitions for enforcing the type of (or the *pattern* of, which is basically deduced anyway most of the time) a given function or data type.

2. A meta-meta-programming language that allows for defining rules that enforce DSEL/EDSL grammars at compilation time. With C++ we are pretty much stuck with more template voodoo if we attempt to formalize something like this. Right now, Proto is a meta-EDSL -- I'm not sure whether using just Proto to enforce the rules on Proto-based languages is a sufficient solution if the aim is to make better compiler messages and/or to simplify the user experience. Although defining grammars using Proto already allows you to define transformations and even verifications to enforce the grammar, we may also need Proto Concepts to define tree patterns and valid combinations of Proto trees that emit pretty compiler messages.

#1 seems to require more man-years of work compared to #2, although #2 will require highly evolved and sufficiently capable template programming compilers.
Not necessarily. Concepts are not about expressing arbitrary template constraints. They're about describing abstractions. One might say that if you can't figure out how to write a concept for a template parameter, you don't understand the abstraction it's modelling.
I understand the abstraction perfectly well. It's a strongly-typed tree that conforms to a grammar. I haven't a clue how to express that as a concept. It's not a reflection of my poor understanding of my abstraction. It reflects my poor understanding of concepts. I'm sincerely asking for help trying to understand this.
If you can somehow transform the meaning of "strongly-typed tree" into a description that the compiler can enforce (similar to how you can require a member method "foo" with 0 arguments on a type T that supposedly models a concept Foo), then I think you have yourself an enforceable concept using Boost.Concept_check. Otherwise, you might have to make one for Proto to enforce not just the grammar for the DSELs but also the concepts these strongly-typed trees are supposed to model.

I can think of a Spirit concept such as "SequenceParser" which can have as children a general "Parser", which has many refinements that can be checked by Proto -- but only if Proto allowed for a means to define "SequenceParser" and "Parser" suitably as a tree pattern, or generally a concept that it (Proto) also enforces alongside the grammar. I'll have to think a little bit more about how this might be achievable with just Boost.Concept_check.

HTH and I really hope I made sense. :D -- Dean Michael Berris deanberris.com

At Mon, 27 Sep 2010 11:40:31 +0800, Dean Michael Berris wrote:
The way concepts are done now is to define the syntactic requirements (not so much the semantics)
I think I know what you mean, but I wouldn't say it that way. Notwithstanding the fact that the compiler can never verify semantic conformance to a concept, the semantics are a required part of any concept definition. If your concept omits semantics, it's broken. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Mon, Sep 27, 2010 at 12:54 PM, David Abrahams <dave@boostpro.com> wrote:
At Mon, 27 Sep 2010 11:40:31 +0800, Dean Michael Berris wrote:
The way concepts are done now is to define the syntactic requirements (not so much the semantics)
I think I know what you mean, but I wouldn't say it that way.
I was looking for a better way to say it, but the way it is done now with Boost.Concept_check is pretty much by trying out the interface of the type being checked to see whether it models a given concept. I'm not sure how to define a model using pure C++ code to say that a given expression should have O(n) computational complexity with regard to `n` being the size of the input -- of course, you can define this in documentation, which I'm not sure really helps with better compiler error messages. :D
Notwithstanding the fact that the compiler can never verify semantic conformance to a concept, the semantics are a required part of any concept definition. If your concept omits semantics, it's broken.
Definitely true in the strict sense that Concepts in the STL documentation are defined. I was referring to the way it's done with Boost.Concept_check and with regards to better compiler error messages. :) -- Dean Michael Berris deanberris.com

David Abrahams wrote:
At Mon, 27 Sep 2010 11:40:31 +0800, Dean Michael Berris wrote:
The way concepts are done now is to define the syntactic requirements (not so much the semantics)
I think I know what you mean, but I wouldn't say it that way. Notwithstanding the fact that the compiler can never verify semantic conformance to a concept, the semantics are a required part of any concept definition. If your concept omits semantics, it's broken.
Hmmm - my dabbling in concepts led me to the opposite conclusion. That is, an attempt to capture semantics in concepts is doomed to fail. It's the implementation which defines the semantics, while the concepts can only check the syntax. The idea that concepts could somehow help in clarifying the semantics made it much harder to grasp what the concept (of concepts) was about. Of course the nomenclature doesn't help any either.

I realise that Eric's concern is about Proto. But I'm more interested in considering a more limited question.

a) Should all libraries include compile time code to verify that template parameters meet their stated syntactic requirements?

b) Is there code (i.e. Boost concept check or other) which can actually do this in a practical way? Practical would mean it has a decent syntax of its own, doesn't take toooooo long to compile, etc.

c) Is there a straightforward idiom/pattern such as that suggested by Eric for truncating compile time error listings?

d) Should new libraries be expected to implement this? Should old libraries be expected to add it in?

It seems that the discussion of Proto goes way beyond the issue raised by checking template parameters. If there is a system which fulfills the requirements above, it would be useful even if it doesn't handle the issues presented by Proto.

Robert Ramey

On Sep 27, 2010, at 12:38 PM, Robert Ramey wrote:
David Abrahams wrote:
At Mon, 27 Sep 2010 11:40:31 +0800, Dean Michael Berris wrote:
The way concepts are done now is to define the syntactic requirements (not so much the semantics)
I think I know what you mean, but I wouldn't say it that way. Notwithstanding the fact that the compiler can never verify semantic conformance to a concept, the semantics are a required part of any concept definition. If your concept omits semantics, it's broken.
Hmmm - my dabbling in concepts led me to the opposite conclusion.
That is, an attempt to capture semantics in concepts is doomed to fail.
Semantics are part of what makes up a concept, by definition.
It's the implementation which defines the semantics, while the concepts can only check the syntax.
Yes. You won't get automated semantic validation, period, which is one reason I was very uneasy about the "axioms" feature in the later concept proposals. That doesn't mean semantics aren't a part of concepts, just that maybe they shouldn't be part of the language feature (or library emulation thereof).
The idea that concepts could somehow help in clarifying the semantics made it much harder to grasp what the concept (of concepts) was about.
No offense, but you cannot possibly be grasping the "concept (of concepts)" now if you are ignoring semantics. If you got the understanding you have now by omitting semantics, it's a misunderstanding.

A simple demonstration of why semantics are essential to concepts: take std::sort for example:

template <RandomAccessIterator It, class StrictWeakOrdering>
void sort(It start, It end, StrictWeakOrdering Cmp);

It is impossible to write sort (or document what it does) without the assumption that StrictWeakOrdering follows certain semantic laws (described at http://cpp-next.com/archive/2010/02/order-i-say/). It's not just a binary function object to which you can pass It's value_type and whose return value is convertible to bool. Those are the syntactic requirements, but there are lots of such functions that won't result in a sorted sequence.
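(To make that last point concrete -- a sketch: the comparison below meets every syntactic requirement of the sort above, yet it is not irreflexive, so it is not a strict weak ordering, and std::sort has undefined behaviour with it.)

-----
#include <algorithm>
#include <vector>

bool leq(int a, int b) { return a <= b; }  // leq(x, x) is true: not irreflexive

int main()
{
    std::vector<int> v(100, 42);  // many equivalent elements
    // syntactically a fine predicate, semantically a broken StrictWeakOrdering:
    // may return unsorted results, crash, or fail to terminate
    std::sort(v.begin(), v.end(), leq);
    return 0;
}
-----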
Of course the nomenclature doesn't help any either.
Sorry, I don't know what you're talking about here. -- Dave Abrahams BoostPro Computing http://boostpro.com

David Abrahams wrote:
On Sep 27, 2010, at 12:38 PM, Robert Ramey wrote:
David Abrahams wrote:
A simple demonstration of why semantics are essential to concepts: take std::sort for example:
template <RandomAccessIterator It, class StrictWeakOrdering>
void sort(It start, It end, StrictWeakOrdering Cmp);
It is impossible to write sort (or document what it does) without the assumption that StrictWeakOrdering follows certain semantic laws (described at http://cpp-next.com/archive/2010/02/order-i-say/). It's not just a binary function object to which you can pass It's value_type and whose return value is convertible to bool. Those are the syntactic requirements, but there are lots of such functions that won't result in a sorted sequence.
My experience was in trying to apply this to the serialization library. My example was

ar << t;

where t is of serializable type T and ar is an instance of an Archive class.

The "archive concept" would admit a wide variety of semantics. The archive classes included in the library include code for deep copy/recover of pointers to polymorphic classes. But in another context, such as one which only displays data for debugging, one might implement this functionality as just a top level display so that he can make a header only version.

So here one has two useful, widely varying implementation semantics with exactly the same syntax requirements. Is it wrong to use concepts here? Is it wrong to make such a library? Is it wrong to "hijack" a type's concepts for unintended uses?

I concluded that it is not really possible to specify semantics in any formal or rigorous way without actually writing the code itself. It doesn't make the idea of validation of template parameters useless, it just puts a limit on what one can realistically expect from it.
Of course the nomenclature doesn't help any either.
Sorry, I don't know what you're talking about here.
I was referring to the usage of the word "concept" in this context. I found this to be very misleading and not at all descriptive of the function of "checking template parameters at compile time". Robert Ramey

At Mon, 27 Sep 2010 11:18:47 -0800, Robert Ramey wrote:
My experience was in trying to apply this to the serialization library.
That is, admittedly, a tough one.
My example was
ar << t;
where t is of serializable type T and ar is an instance of an Archive class
The "archive concept" would admit a wide variety of semantics. The archive classes included in the library include code for deep copy/recover of pointers to polymorphic classes. But in another context, such as one which only displays data for debugging, one might implement this functionality as just a top level display so that he can make a header only version.
Well, there's license to break almost any rule when debugging :-) Try to come up with a non-debugging example.
So here one has two useful, widely varying implementation semantics with exactly the same syntax requirements. Is it wrong to use concepts here?
No, but you'll need to decide what the abstraction's (abstract) semantics are if you're going to document/understand the code as anything more than an English-language translation of its exact implementation (which would be pretty useless as documentation). As soon as your documentation is something more than that, there are abstract semantics involved. You can build and use concepts that have no attached semantics, but you can't say anything interesting about what the code using those concepts "does."
Is it wrong to make such a library? Is it wrong to "hijack" a type's concepts for unintended uses?
I don't have a moral judgement about this. The only right/wrong position I'm taking here is about the interpretation of the term Concept. I happen to know what it was intended to mean, so I can speak with authority on that.
I concluded that it is not really possible to specify semantics in any formal or rigorous way without actually writing the code itself.
I recall a conversation on this list where someone (named Sebastian?) successfully, and impressively, described a semantic relationship required between the load/save operations of a read/write archive pair. The fact that it's hard to do doesn't make it a pointless affair. Until you've done it, you have at best an "intuitive feel" for the problem your library is solving.
It doesn't make the idea of validation of template parameters useless, it just puts a limit on what one can realistically expect from it.
That's unrelated to the rest of the conversation, because as I have stated many times, validation of template parameters by computer can never check semantic constraints. It's equivalent to solving the halting problem.
Of course the nomenclature doesn't help any either.
Sorry, I don't know what you're talking about here.
I was referring to the usage of the word "concept" in this context. I found this to be very misleading and not at all descriptive of the function of "checking template parameters at compile time"
Because that's not what a concept is. That's just one of the most important things you can do with a concept. Concepts are about mathematical abstraction. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
I recall a conversation on this list where someone (named Sebastian?) successfully, and impressively, described a semantic relationship required between the load/save operations of a read/write archive pair.
After many years, that little episode sticks in my mind. I think it was Joaquin though. Actually it was the very episode that convinced me that the quest for some sort of semantic formality was destined to be a dead end which ultimately relied on some sort of shared intuitive understanding of what a sentence means. So I concluded one might just as well avoid the attempt, with no real loss.
That's unrelated to the rest of the conversation, because as I have stated many times, validation of template parameters by computer can never check semantic constraints. It's equivalent to solving the halting problem.
I think that's what I'm trying to say.
I was referring to the usage of the word "concept" in this context. I found this to be very misleading and not at all descriptive of the the function of "checking template parameters at compile time"
Because that's not what a concept is. That's just one of the most important things you can do with a concept. Concepts are about mathematical abstraction.
Maybe that's why I had trouble. I think when I first saw it as I skimmed through the Boost documentation, I wasn't looking for something one would describe as "checking template parameters". That's what I meant when I said "the nomenclature doesn't help". Robert Ramey

Robert Ramey escribió:
David Abrahams wrote:
I recall a conversation on this list where someone (named Sebastian?) successfully, and impressively, described a semantic relationship required between the load/save operations of a read/write archive pair.
After many years, that little episode sticks in my mind. I think it was Joaquin though. Actually it was the very episode that convinced me that the quest for some sort of semantic formality was destined to be a dead end which ultimately relied on some sort of shared intuitive understanding of what a sentence means. So I concluded one might just as well avoid the attempt, with no real loss.
For our readers' reference, the original proposal is:

http://lists.boost.org/Archives/boost/2005/09/93367.php

I still think that the approach suggested there is basically sound (some details are missing) and solves all the fuzziness around serialization concepts, but alas I didn't manage to get traction with it.

Joaquín M López Muñoz Telefónica, Investigación y Desarrollo

joaquin@tid.es wrote:
Robert Ramey escribió:
David Abrahams wrote:
I recall a conversation on this list where someone (named Sebastian?) successfully, and impressively, described a semantic relationship required between the load/save operations of a read/write archive pair.
After many years, that little episode sticks in my mind. I think it was Joaquin though. Actually it was the very episode that convinced me that the quest for some sort of semantic formality was destined to be a dead end which ultimately relied on some sort of shared intuitive understanding of what a sentence means. So I concluded one might just as well avoid the attempt, with no real loss.
For our readers' reference, the original proposal is:
http://lists.boost.org/Archives/boost/2005/09/93367.php
I still think that the approach suggested there is basically sound (some details are missing) and solves all the fuzziness around serialization concepts, but alas I didn't manage to get traction with it.
Well it wasn't a total loss. I was very intrigued with it. And I did consider it carefully. I didn't critique it in a definitive way as I thought that it would spawn a long discussion which would consume a lot of time and wouldn't change anything. Of course I had some other monkey on my back, so I wanted to pick my target and I decided (out of character perhaps) to let this one go by.

And I didn't see that this was a question regarding the serialization library per se, but rather the whole question of the relationship between programming language (or any algebraic) syntax and underlying meaning. These questions have been extensively studied and, I believe, effectively put to rest in a way which supports my view that what we can hope for from semantics is limited by factors outside our purview. So, I tended to regard your proposal as a design for a very clever perpetual motion machine which I know cannot work, but cannot refute the very clever and intricate explanation of its inventor without a lot of investment of effort. Of course this is waaaayyyyy off topic for this list, but it's one of the things I love most about it (this list).

In any case, I am still VERY interested in seeing what the C++ implementation of concepts can do for me. If it can:

a) check template parameters and emit a compile error
b) with the rules specified in a simple and natural grammar
c) without bringing the compiler to a halt
d) and is portable or can be suppressed for non-conformant compilers
e) has documentation which is helpful

That's all it needs. The fact that this thread even exists suggests to me that the above doesn't currently exist. Then there's the fact that I spent some time looking at the documentation and had problems. Admittedly, part of this is due to the fact that my area of application (the serialization library) has some poorly or incorrectly defined areas. But this actually shows that the need for such a system is even greater. One should have a system which is used from the beginning - like the type parameters on function arguments.

Of course I realize I'm preaching to the preacher here. But the fact that a serious person of some level of competence tries to use the system and finds it hard should be seen as a bug at some level. And that's not happening.

Robert Ramey

At Tue, 28 Sep 2010 10:17:39 -0800, Robert Ramey wrote:
joaquin@tid.es wrote:
For our readers' reference, the original proposal is:
http://lists.boost.org/Archives/boost/2005/09/93367.php
I still think that the approach suggested there is basically sound (some details are missing) and solves all the fuziness around serialization concepts, but alas I didn't manage to get traction with it.
Well it wasn't a total loss. I was very intrigued with it. And I did consider it carefully. I didn't critique it in a definitive way as I thought that it would spawn a long discussion which would consume a lot of time and wouldn't change anything. Of course I had some other monkey on my back, so I wanted to pick my target and I decided (out of character perhaps) to let this one go by.
And I didn't see that this was a question regarding the serialization library per se, but rather the whole question of the relationship between programming language (or any algebraic) syntax and underlying meaning. These questions have been extensively studied
Yes.
and, I believe, effectively put to rest in a way which supports my view that what we can hope for from semantics is limited by factors outside our purview.
That statement is sufficiently vague so as to be irrefutable. Unfortunately that also makes it kind of meaningless.
So, I tended to regard your proposal as a design for a very clever perpetual motion machine which I know cannot work, but cannot refute the very clever and intricate explanation of its inventor without a lot of investment of effort.
Maybe I shouldn't have been, but this remark left me feeling quite insulted on Joaquin's behalf. Nothing in the literature supports the notion that what Joaquin was trying to accomplish is unachievable, and to suggest that he's some kind of charlatan just because he put together a logical framework you haven't got your head around yet seems uncharitable at best.

What he did is akin to defining what it means for a range to be sorted, as opposed to merely hoping that some common conventional understanding of the word "sort" is enough to describe the meaning of my sort algorithm. That definition allows me to clearly determine whether there's a bug in my code, or in the comparison predicate you passed me, based on its results.

Of course, "sort" *does* have a rigorously-defined meaning that corresponds exactly to common convention, so it's actually much less important that someone repeat that definition than in your case. Joaquin essentially gave a rigorous definition to "serialize" and "deserialize." Maybe you don't think something like that matters, but for understanding what your library is actually supposed to do, it matters to me. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
At Tue, 28 Sep 2010 10:17:39 -0800, Robert Ramey wrote:
And I didn't see that this was a question regarding the serialization library per se, but rather the whole question of the relationship between programming language (or any algebraic) syntax and underlying meaning. These questions have been extensively studied
Yes.
and I believe effectively put to rest in a way which supports my view that what we can hope for from semantics is limited by factors outside outside our perview.
That statement is sufficiently vague so as to be irrefutable. Unfortunately that also makes it kind of meaningless.
My view on this is probably best described in Douglas Hofstadter's Godel, Escher, Bach (Chapter II), which describes the mapping of symbol expressions to deeper meaning. This left me concluding that it's unknowable what some formal syntax might map to. And indeed there are many cases where the same formal system maps to widely differing domains (e.g. differential equations to systems of springs and weights, and capacitors and inductors). So, although the "semantics" provides motivation and direction in an intuitive way, it can't be formalized itself.
So, I tended to regard your proposal as a design for a very clever perpetual motion machine which I know cannot work, but cannot refute the very clever and intricate explanation of its inventor without a lot of investment of effort.
Maybe I shouldn't have been, but this remark left me feeling quite insulted on Joaquin's behalf.
You're right, you shouldn't have been insulted on anyone's behalf.
to suggest that he's some kind of charlatan ...
I didn't do that.
Of course, "sort" *does* have a rigorously-defined meaning that corresponds exactly to common convention, so it's actually much less important that someone repeat that definition than in your case.
If it could be done.
Joaquin essentially gave a rigorous definition to "serialize" and "deserialize." Maybe you don't think something like that matters, but for understanding what your library is actually supposed to do, it matters to me.
It's not so much that I don't think that it matters, it's that I think the minute you try to pin it down, you discover that there are some ambiguities. When you try to pin those down, it gets deeper and deeper.

Using the serialization library as an example: suppose someone says "this is OK, but there's only one thing: I don't want the library to create new objects when a pointer is serialized. I want it to just update the objects pointed to." This is a perfectly plausible point of view which is consistent with the current syntax of the library. If someone makes such an archive, should it be considered "non-conforming" in some sense? If so, how would this be detected and enforced?

Maybe my reservations about the idea of "formal semantics" are more clear. Robert Ramey
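[To make the two readings concrete, here is a sketch with purely hypothetical names; this is not the Boost.Serialization archive interface, just an illustration that both semantics fit the same calling syntax:

// Hypothetical archives, for illustration only.
struct creating_archive
{
    void load(int & i) { i = 0; /* really: read the value from the stream */ }

    // Loading through a pointer allocates a fresh object and repoints p.
    template<class T>
    void load(T * & p) { p = new T(); load(*p); }
};

struct updating_archive
{
    void load(int & i) { i = 0; /* really: read the value from the stream */ }

    // Loading through a pointer leaves p alone and overwrites the
    // existing pointee: the "just update the objects pointed to" reading.
    // Precondition: p already points at a live object.
    template<class T>
    void load(T * & p) { load(*p); }
};

Both archives accept exactly the same calls; only a semantic specification could say whether the second one is conforming.]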

At Mon, 4 Oct 2010 10:04:04 -0800, Robert Ramey wrote:
David Abrahams wrote:
At Tue, 28 Sep 2010 10:17:39 -0800, Robert Ramey wrote:
And I didn't see that this was a question regarding the serialization library per se, but rather the whole question of the relationship between programming language (or any algebraic) syntax and underlying meaning. These questions have been extensively studied
Yes.
and I believe effectively put to rest in a way which supports my view that what we can hope for from semantics is limited by factors outside our purview.
That statement is sufficiently vague so as to be irrefutable. Unfortunately that also makes it kind of meaningless.
My view on this is probably best described in Douglas Hofstadter's Gödel, Escher, Bach, Chapter II, which describes the mapping of symbol expressions to deeper meaning. This left me concluding that it's unknowable what some formal syntax might map to.
Yeah, it's unknowable, unless it's specified. I don't think anyone has a question about what the formal syntax called "Java" maps to.
And indeed there are many cases where the same formal system maps to widely differing domains (e.g. differential equations map both to systems of springs and weights and to circuits of capacitors and inductors). So, although the "semantics" provides motivation and direction in an intuitive way, it can't be formalized itself.
I don't even know what that means.
So, I tended to regard your proposal like a design for a very clever perpetual motion machine which I know cannot work, but whose inventor's very clever and intricate explanation I cannot refute without a great deal of effort.
Maybe I shouldn't have been, but this remark left me feeling quite insulted on Joaquin's behalf.
You're right, you shouldn't have been insulted on anyone's behalf.
to suggest that he's some kind of charlatan ...
I didn't do that.
Comparing his proposal to a design for a perpetual motion machine suggested to me that you think he's either trying to put one over on you, or is pathetically misguided. But maybe it was just a poorly-chosen analogy.
Of course, "sort" *does* have a rigorously-defined meaning that corresponds exactly to common convention, so it's actually much less important that someone repeat that definition than in your case.
If it could be done.
If _what_ could be done? Defining "sort" mathematically? Or serialize/deserialize?
Joaquin essentially gave a rigorous definition to "serialize" and "deserialize." Maybe you don't think something like that matters, but for understanding what your library is actually supposed to do, it matters to me.
It's not so much that I don't think that it matters, it's that I think the minute you try to pin it down, you discover that there are some ambiguities. When you try to pin those down, it gets deeper and deeper.
Do you think that's true of sort? If so, I'll dig up the mathematical descriptions and you can show me the ambiguities.
Using the serialization library as an example: suppose someone says "this is OK, but there's only one thing: I don't want the library to create new objects when a pointer is serialized. I want it to just update the objects pointed to." This is a perfectly plausible point of view which is consistent with the current syntax of the library. If someone makes such an archive, should it be considered "non-conforming" in some sense?
I don't know. Let's look at Joaquin's proposal and see if it's compatible with such an archive, and decide whether or not you think it should be allowed. That will help us know whether Joaquin's proposal works for your idea of the library.
If so, how would this be detected and enforced?
Detection and enforcement are beside the point. You can't reliably detect a broken comparison predicate used with sorting, but knowing the mathematical constraints makes it understandable how to build one that will allow the sort algorithm to, um, actually sort.
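[A concrete instance of that point: passing a comparison that is not a strict weak ordering to std::sort is undefined behavior, and no compiler diagnoses it. The documented mathematical constraints are the only thing telling you how to write a predicate that works:

#include <algorithm>
#include <functional>
#include <vector>

int main()
{
    std::vector<int> v(100, 42);  // many equal elements

    // Broken: less_equal is not a strict weak ordering, because
    // comp(x, x) must be false and 42 <= 42 is true. The behavior is
    // undefined, and some implementations will walk off the end of the
    // range. Nothing detects this at compile time.
    // std::sort(v.begin(), v.end(), std::less_equal<int>());

    // Fine: less satisfies the documented constraints, so sort sorts.
    std::sort(v.begin(), v.end(), std::less<int>());
}
]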
Maybe my reservations about the idea of "formal semantics" are more clear.
They're clear. You are saying, AFAICT, that you don't want to try to pin down any semantics because that might rule out, purely on the basis of specification, some useful application of your existing code. The only problem with that is that without some such specification, nobody even knows if their existing applications of your code are going to work tomorrow. We're in a "use the source, Luke" situation at the moment. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
At Mon, 4 Oct 2010 10:04:04 -0800,
Maybe my reservations about the idea of "formal semantics" are more clear.
They're clear. You are saying, AFAICT, that you don't want to try to pin down any semantics because that might rule out, purely on the basis of specification, some useful application of your existing code. The only problem with that is that without some such specification, nobody even knows if their existing applications of your code are going to work tomorrow. We're in a "use the source, Luke" situation at the moment.
That's part of it. But it's more than that. I have no problem with saying that the library does this or that. In fact the documentation for the library does define what the word means in the context of that documentation. It defines it in English, appealing to a shared intuition and world view between the writer and reader of the document. It doesn't do so in any formal sense. I don't think that is possible, and attempts to do so just end up requiring a sharing of other ideas. So it's better just to cut the recursion and accept that we share an idea of what serialization and de-serialization are, but that idea can have differing interpretations. To summarize - I'm just very uncomfortable with the term "formal semantics" and don't think it can mean anything. Robert Ramey

At Mon, 4 Oct 2010 11:08:16 -0800, Robert Ramey wrote:
To summarize - I'm just very uncomfortable with the term "formal semantics" and don't think it can mean anything.
So do I understand correctly that you believe there are no possible meaningful formal semantics for "sort?" -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
At Mon, 4 Oct 2010 11:08:16 -0800, Robert Ramey wrote:
To summarize - I'm just very uncomfortable with the term "formal semantics" and don't think it can mean anything.
So do I understand correctly that you believe there are no possible meaningful formal semantics for "sort?"
I would feel comfortable saying there are meaningful semantics for "sort". When I see the term "formal semantics", it conjures up something along the lines of what is described in the Wikipedia article on formal semantics (http://en.wikipedia.org/wiki/Formal_semantics). That is, "formal" suggests to me a level of mathematical rigor that doesn't exist here. It may be possible to supply such rigor in this or some other cases, but I don't think that Boost documentation can justify such an effort. I think a clear, concise explanation of what the implementation has to do is all that is necessary and all that is generally possible. I also think that in many cases it will be unavoidable that there will be unforeseen ambiguities. The usage of the term "formal semantics" suggests that this would not be the case. I'm thinking the usage of the adjective "formal" in this context is misleading. That's all. Robert Ramey

At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum
No offense, but did you really answer the question I asked? Does operator= actually use those requirements? On the face of it, it just seems a lot more likely that it simply requires its RHS to also be a SpiritParser.
How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:

concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}

This isn't really how concepts were "meant to be used," though. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum
No offense, but did you really answer the question I asked? Does operator= actually use those requirements?
It does. operator= doesn't know how to process the tree (that is, what the requirements are) until it looks at the top-most node type and dispatches to the correct helper functions. Those functions have their own requirements, determined solely by the substructure of the trees they operate on.

For instance, take semantic actions, denoted by the subscript tag. If a subscript node appears anywhere in the tree, the right child of that node must satisfy the concept PolymorphicFunctionObject. rule::operator= will fail to compile if it does not; it has assumed that requirement as a result of the recursive function calls made within the body of operator=. I'd like to be able to check for that condition up front. I still don't know how to do that using concepts.
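[A much-simplified model of the dispatch being described; every name below is invented for illustration and bears no resemblance to the real Spirit/Proto machinery. The point is that each requirement only surfaces when the recursion reaches the subtree that imposes it:

namespace tag
{
    struct terminal {}; struct plus {}; struct shift_right {}; struct subscript {};
}

// Toy expression nodes (hypothetical; real trees carry far more).
template<class T>
struct term { typedef tag::terminal tag_type; T value; };

template<class Tag, class Child>
struct unary_expr { typedef Tag tag_type; Child child; };

template<class Tag, class Left, class Right>
struct binary_expr { typedef Tag tag_type; Left left; Right right; };

template<class Expr> void compile_parser(Expr const & e);

// One helper per tag; each states its own requirements on its subtree.
template<class E> void compile_node(E const &, tag::terminal) {}

template<class E> void compile_node(E const & e, tag::plus)
{
    compile_parser(e.child);   // the single child must be a parser
}

template<class E> void compile_node(E const & e, tag::shift_right)
{
    compile_parser(e.left);    // both children must be parsers
    compile_parser(e.right);
}

template<class E> void compile_node(E const & e, tag::subscript)
{
    compile_parser(e.left);    // left child: a parser
    e.right(0);                // right child must be callable, i.e.
}                              // the PolymorphicFunctionObject requirement

// The "operator=" role: look at the topmost tag, then dispatch.
template<class Expr> void compile_parser(Expr const & e)
{
    compile_node(e, typename Expr::tag_type());
}

If the action attached under subscript is not callable, the error appears only at the e.right(0) line, arbitrarily deep in the instantiation stack; that is the condition Eric wants to check up front.]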
On the face of it, it just seems a lot more likely that it simply requires its RHS to also be a SpiritParser.
Where a SpiritParser is ...?
How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related. There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..." It requires OR constraints which, as Sebastian has pointed out, were yanked from the concepts proposal.

rule::operator= is an algorithm. The structure of the type on which it's parametrized, and the grammar to which that type must conform, together determine what the overall algorithm requirements are. That by necessity cannot be expressed without OR constraints. What I seem to have is a situation where an algorithm has type requirements that cannot be expressed with concepts, so concepts cannot help me validate my type parameter. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Sep 27, 2010, at 10:11 AM, Eric Niebler wrote:
On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et al. expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum
No offense, but did you really answer the question I asked? Does operator= actually use those requirements?
It does. operator= doesn't know how to process the tree (that is, what the requirements are) until it looks at the top-most node type and dispatches to the correct helper functions. Those functions have their own requirements determined solely by the substructure of the trees they operate on.
For instance, take semantic actions, denoted by the subscript tag. If a subscript node appears anywhere in the tree, the right child of that node must satisfy the concept PolymorphicFunctionObject. rule::operator= will fail to compile if it does not; it has assumed that requirement as a result of the recursive function calls made within the body of operator=. I'd like to be able to check for that condition up front. I still don't know how to do that using concepts.
On the face of it, it just seems a lot more likely that it simply requires its RHS to also be a SpiritParser.
Where a SpiritParser is ...?
How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related.
Of course not; you didn't say they were related. I just translated the constraints you wrote down into concepts-land.
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..." It requires OR constraints
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
which, as Sebastian has pointed out, were yanked from the concepts proposal.
With good reason.
rule::operator= is an algorithm. The structure of the type on which it's parametrized and the grammar to which that type must conform together determine what the overall algorithm requirements are. It by necessity cannot be expressed without OR constraints.
I don't understand how you can come to that conclusion with such certainty, while at the same time declaring that you don't understand concepts.
What I seem to have is a situation where an algorithm has type requirements that cannot be expressed with concepts, so concepts cannot help me validate my type parameter.
I don't see it (yet), sorry. Am I missing something? -- Dave Abrahams BoostPro Computing http://boostpro.com

On 9/27/2010 1:02 PM, David Abrahams wrote:
On Sep 27, 2010, at 10:11 AM, Eric Niebler wrote:
On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum

<snip>

How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related.
Of course not; you didn't say they were related. I just translated the constraints you wrote down into concepts-land.
You were the one who invented the SpiritNode concept, and then didn't tie it into the discussion! I described a recursive description of the SpiritParser concept and you showed one without any recursive property at all. :-P
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..." It requires OR constraints
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
Not according to my reading of the standard. A concept_map is for mapping *types* to concepts, not *concepts* to concepts, as you appear to be doing here. The grammar for a concept_map requires the thing after "concept_map" to be a concept-id, which may include a template-argument-list (as yours does). That consists of template-arguments which are constant-expression, type-id, or id-expression. I looked up each. None appear to allow concept-ids. I can also find no examples in the standard of using concept maps to map one concept to another. Can you?
which, as Sebastian has pointed out, were yanked from the concepts proposal.
With good reason.
Algorithmic complexity, IIRC.
rule::operator= is an algorithm. The structure of the type on which it's parametrized and the grammar to which that type must conform together determine what the overall algorithm requirements are. It by necessity cannot be expressed without OR constraints.
I don't understand how you can come to that conclusion with such certainty, while at the same time declaring that you don't understand concepts.
Oh, my shoddy reasoning went something like: "it seems like I would need an OR constraint." Heh.
What I seem to have is a situation where an algorithm has type requirements that cannot be expressed with concepts, so concepts cannot help me validate my type parameter.
I don't see it (yet), sorry. Am I missing something?
If I do indeed need an OR constraint or something like it, and OR constraints were pulled for good reason, then it seems like what I'm trying to do is D.O.A. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Monday, September 27, 2010 10:53 AM, Eric Niebler wrote:
On Sep 27, 2010, at 10:11 AM, Eric Niebler wrote:
On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum

<snip>

How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related.
On 9/27/2010 1:02 PM, David Abrahams wrote:
Of course not; you didn't say they were related. I just translated the constraints you wrote down into concepts-land.
You were the one who invented the SpiritNode concept, and then didn't tie it into the discussion! I described a recursive description of the SpiritParser concept and you showed one without any recursive property at all. :-P
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..." It requires OR constraints
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
Not according to my reading of the standard. A concept_map is for mapping *types* to concepts, not *concepts* to concepts, as you appear to be doing here.
The grammar for a concept_map requires the thing after "concept_map" to be a concept-id, which may include a template-argument-list (as yours does). That consists of template-arguments which are constant-expression, type-id, or id-expression. I looked up each. None appear to allow concept-ids.
I can also find no examples in the standard of using concept maps to map one concept to another. Can you?
I ported parts of the BGL, GIL, MTL, and IETL to ConceptGCC. Then I set up a modeling relationship between the various concepts (in layers) to implement the Shi-Malik algorithm for image segmentation. I wrote up the results: http://portal.acm.org/citation.cfm?id=1289971.1289984 Mapping concept-to-concept is part of the core ideas defined by Gregor & Stroustrup. -j.

On 9/27/2010 2:04 PM, Smith, Jacob N wrote:
On Monday, September 27, 2010 10:53 AM, Eric Niebler wrote:
On Sep 27, 2010, at 10:11 AM, Eric Niebler wrote:
On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum

<snip>

How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related.
On 9/27/2010 1:02 PM, David Abrahams wrote:
Of course not; you didn't say they were related. I just translated the constraints you wrote down into concepts-land.
You were the one who invented the SpiritNode concept, and then didn't tie it into the discussion! I described a recursive description of the SpiritParser concept and you showed one without any recursive property at all. :-P
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..." It requires OR constraints
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
Not according to my reading of the standard. A concept_map is for mapping *types* to concepts, not *concepts* to concepts, as you appear to be doing here.
The grammar for a concept_map requires the thing after "concept_map" to be a concept-id, which may include a template-argument-list (as yours does). That consists of template-arguments which are constant-expression, type-id, or id-expression. I looked up each. None appear to allow concept-ids.
I can also find no examples in the standard of using concept maps to map one concept to another. Can you?
I ported parts of the BGL, GIL, MTL, and IETL to ConceptGCC. Then I set up a modeling relationship between the various concepts (in layers) to implement the Shi-Malik algorithm for image segmentation. I wrote up the results:
I can't access the pdf. Can you put it someplace we can read it? I'm not sure about MTL or IETL, but I know that BGL and GIL are generic libraries cut from the same cloth as STL: algorithms over homogeneous containers. I'm not sure the same techniques apply to heterogeneous composites like Proto trees.
Mapping concept-to-concept is part of the core ideas defined by Gregor & Stroustrup.
Then it seems curious that the standard didn't seem to allow it. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Sep 27, 2010, at 11:04 AM, Smith, Jacob N wrote:
On Monday, September 27, 2010 10:53 AM, Eric Niebler wrote:
I can also find no examples in the standard of using concept maps to map one concept to another. Can you?
I ported parts of the BGL, GIL, MTL, and IETL to ConceptGCC. Then I set up a modeling relationship between the various concepts (in layers) to implement the Shi-Malik algorithm for image segmentation. I wrote up the results:
http://portal.acm.org/citation.cfm?id=1289971.1289984
Mapping concept-to-concept is part of the core ideas defined by Gregor & Stroustrup.
But Dave got the syntax wrong. It works something like this:

template <ConceptA T>
concept_map ConceptB<T>
{
    // implement ConceptB in terms of ConceptA here
}

Also, concept maps never contain concept constructs like requires, standalone typenames, or similar. Sebastian

At Mon, 27 Sep 2010 13:53:22 -0400, Eric Niebler wrote:
On 9/27/2010 1:02 PM, David Abrahams wrote:
On Sep 27, 2010, at 10:11 AM, Eric Niebler wrote:
On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum

<snip>

How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related.
Of course not; you didn't say they were related. I just translated the constraints you wrote down into concepts-land.
You were the one who invented the SpiritNode concept, and then didn't tie it into the discussion!
Oh, whoops. Replace RHSOfAssign with SpiritParser.
I described a recursive description of the SpiritParser concept and you showed one without any recursive property at all. :-P
Uhm, still not sure where the recursion is in your description.
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..."
And that's not recursion.
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
Not according to my reading of the standard. A concept_map is for mapping *types* to concepts, not *concepts* to concepts, as you appear to be doing here.
Sorry, I may have spelled it wrong, but you can do it. Concept map templates allow you to map types that satisfy given concepts to other concepts. There's an example on page 44 of http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2710.pdf

template <typename Node>
    requires SpiritNode<Node, plus>
concept_map SpiritParser<Node>
{
    // ...
}

template <typename Node>
    requires SpiritNode<Node, right_shift>
concept_map SpiritParser<Node>
{
    // ...
}
I can also find no examples in the standard of using concept maps to map one concept to another. Can you?
which, as Sebastian has pointed out, were yanked from the concepts proposal.
With good reason.
Algorithmic complexity, IIRC.
Also there's no real-world scenario that can benefit from it. There's nothing that OR constraints could express that couldn't also be expressed with concept_maps, and once you got inside the constrained template you wouldn't be able to use any operations that aren't in the intersection of the constraints (remember that template bodies need to typecheck against their constraints). -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 9/27/2010 3:01 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 13:53:22 -0400, Eric Niebler wrote:
On 9/27/2010 1:02 PM, David Abrahams wrote:
On Sep 27, 2010, at 10:11 AM, Eric Niebler wrote:
On 9/27/2010 12:49 AM, David Abrahams wrote:
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:
No. rule::operator= expects SpiritParser to be a strongly-typed tree.

Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser,
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser,
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda,
... etc, etc, etc, ad nauseum

<snip>

How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<typename Node, plus>
{
    requires SameType<Node::children,int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<typename Node, shift_right>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;
    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<typename Node, subscript>
{
    requires SameType<Node::children,int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;
    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
You haven't shown how the SpiritNode and SpiritParser concepts are related.
Of course not; you didn't say they were related. I just translated the constraints you wrote down into concepts-land.
You were the one who invented the SpiritNode concept, and then didn't tie it into the discussion!
Oh, whoops. Replace RHSOfAssign with SpiritParser.
OK, that helps some. But SpiritParser is already defined above. I assume we can replace that definition with a forward declaration. I assume concepts can be forward declared.
I described a recursive description of the SpiritParser concept and you showed one without any recursive property at all. :-P
Uhm, still not sure where the recursion is in your description.
Look again at my description at the top of this email. A SpiritParser is

+ a node with a plus tag where the child is a SpiritParser, or
+ a node with a right shift tag where both children are SpiritParsers, or
+ ...

Recursive.
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..."
And that's not recursion.
It is, because your own SpiritNode concepts are defined in terms of the SpiritParser concept, closing the circle. Have you lost the thread of this conversation?
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
Not according to my reading of the standard. A concept_map is for mapping *types* to concepts, not *concepts* to concepts, as you appear to be doing here.
Sorry, I may have spelled it wrong, but you can do it. Concept map templates allow you to map types that satisfy given concepts to other concepts. There's an example on page 44 of http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2710.pdf
template <typename Node>
    requires SpiritNode<Node, plus>
concept_map SpiritParser<Node>
{
    // ...
}

template <typename Node>
    requires SpiritNode<Node, right_shift>
concept_map SpiritParser<Node>
{
    // ...
}
Ah, well this is interesting. I didn't know you could constrain a concept_map like that and overload them on the constraints. But I'm still confused. If we can make recursive concepts as you've suggested above (by renaming RHSOfAssign to SpiritParser and closing the loop), then is this step even needed?
I can also find no examples in the standard of using concept maps to map one concept to another. Can you?
which, as Sebastian has pointed out, were yanked from the concepts proposal.
With good reason.
Algorithmic complexity, IIRC.
Also there's no real-world scenario that can benefit from it. There's nothing that OR constraints could express that couldn't also be expressed with concept_maps, and once you got inside the constrained template you wouldn't be able to use any operations that aren't in the intersection of the constraints (remember that template bodies need to typecheck against their constraints).
The /intersection/? This seems like a real problem then, doesn't it? If I want to say that a SpiritParser is a SpiritNode<plus> OR a SpiritNode<right_shift> OR a SpiritNode<terminal> (the plus and right_shift concepts being recursively defined in terms of SpiritParser, the terminal concept having little more than a value_type typedef and a member), then the intersection of all those concepts would have exactly nothing, so I couldn't do anything with it from a constrained context. What have I missed? How is the SpiritParser concept you've defined above supposed to be used? -- Eric Niebler BoostPro Computing http://www.boostpro.com

At Mon, 27 Sep 2010 15:57:49 -0400, Eric Niebler wrote:
Oh, whoops. Replace RHSOfAssign with SpiritParser.
OK, that helps some. But SpiritParser is already defined above. I assume we can replace that definition with a forward declaration. I assume concepts can be forward declared.
Sorry, don't remember. But you could always put the concept maps last.
I described a recursive description of the SpiritParser concept and you showed one without any recursive property at all. :-P
Uhm, still not sure where the recursion is in your description.
Look again at my description at the top of this email. A SpiritParser is

+ a node with a plus tag where the child is a SpiritParser, or
+ a node with a right shift tag where both children are SpiritParsers, or
+ ...
Recursive.
Duh! Sorry for vision failure.
There must be some recursive relationship, like "a SpiritParser is a SpiritNode<plus> /or/ a SpiritNode<right_shift> /or/ ..."
And that's not recursion.
It is, because your own SpiritNode concepts are defined in terms of the SpiritParser concept, closing the circle. Have you lost the thread of this conversation?
To say that, you just use concept_maps to declare that SpiritNode<plus> models SpiritParser, etc.:

concept_map SpiritParser<SpiritNode<plus> >
{
    // ...
}

concept_map SpiritParser<SpiritNode<right_shift> >
{
    // ...
}
Not according to my reading of the standard. A concept_map is for mapping *types* to concepts, not *concepts* to concepts, as you appear to be doing here.
Sorry, I may have spelled it wrong, but you can do it. Concept map templates allow you to map types that satisfy given concepts to other concepts. There's an example on page 44 of http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2710.pdf
template <typename Node>
    requires SpiritNode<Node, plus>
concept_map SpiritParser<Node>
{
    // ...
}

template <typename Node>
    requires SpiritNode<Node, right_shift>
concept_map SpiritParser<Node>
{
    // ...
}
Ah, well this is interesting. I didn't know you could constrain a concept_map like that and overload them on the constraints. But I'm still confused. If we can make recursive concepts as you've suggested above (by renaming RHSOfAssign to SpiritParser and closing the loop), then is this step even needed?
Which step? Needed for what? I don't quite know what you're trying to accomplish yet.
which, as Sebastian has pointed out, were yanked from the concepts proposal.
With good reason.
Algorithmic complexity, IIRC.
Also there's no real-world scenario that can benefit from it. There's nothing that OR constraints could express that couldn't also be expressed with concept_maps, and once you got inside the constrained template you wouldn't be able to use any operations that aren't in the intersection of the constraints (remember that template bodies need to typecheck against their constraints).
The /intersection/?
Of course. How could you typecheck the body otherwise without causing a possible error at instantiation time?
This seems like a real problem then, doesn't it?
No, the answer is easy: don't put OR constraints in the language, because they don't work. They're the problem.
I want to say that a SpiritParser is a SpiritNode<plus> OR a SpiritNode<right_shift> OR a SpiritNode<terminal> (the plus and right_shift concepts being recursively defined in terms of SpiritParser,
I showed you how to handle it above. You're thinking about this the wrong way. Concepts are a rule-based system, and you're approaching them functionally. It's a totally different programming paradigm.
the terminal concept having little more than a value_type typedef and a member), then the intersection of all those concepts would have exactly nothing. So I couldn't do anything with it from a constrained context. What have I missed? How is the SpiritParser concept you've defined above supposed to be used?
Heck if I know; it's your library, man! It's empty because I don't know what operations your operator= needs from it. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 9/27/2010 5:06 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 15:57:49 -0400, Eric Niebler wrote:
I want to say that a SpiritParser is a SpiritNode<plus> OR a SpiritNode<right_shift> OR a SpiritNode<terminal> (the plus and right_shift concepts being recursively defined in terms of SpiritParser,
I showed you how to handle it above.
You showed me (in broken code according to Sebastian) how to define a concept that, AFAICT, is useless.
You're thinking about this the wrong way. Concepts are a rule-based system, and you're approaching them functionally. It's a totally different programming paradigm.
This doesn't help me understand it.
the terminal concept having little more than a value_type typedef and a member), then the intersection of all those concepts would have exactly nothing. So I couldn't do anything with it from a constrained context. What have I missed? How is the SpiritParser concept you've defined above supposed to be used?
Heck if I know; it's your library, man! It's empty because I don't know what operations your operator= needs from it.
<shrug> I leave this conversation as I leave every conversation about concepts: just as confused as when I started. I write template libraries and care about error messages. Concepts were billed as a way to improve error messages from template libraries. And yet nobody can tell me how to actually use concepts to implement my libraries and improve error messages. -- Eric Niebler BoostPro Computing http://www.boostpro.com

At Mon, 27 Sep 2010 18:09:16 -0400, Eric Niebler wrote:
On 9/27/2010 5:06 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 15:57:49 -0400, Eric Niebler wrote:
I want to say that a SpiritParser is a SpiritNode<plus> OR a SpiritNode<right_shift> OR a SpiritNode<terminal> (the plus and right_shift concepts being recursively defined in terms of SpiritParser,
I showed you how to handle it above.
You showed me (in broken code according to Sebastian)
Yeah, I fixed it in a followup message. I told you my memory was dim and there would be nits to pick!
how to define a concept that, AFAICT, is useless.
As I said, of course I didn't put any content in it because you didn't tell me what operations it needs to have.
You're thinking about this the wrong way. Concepts are a rule-based system, and you're approaching them functionally. It's a totally different programming paradigm.
This doesn't help me understand it.
Well, it should. You keep beating your head against the "write a metafunction to calculate this and that" wall, and I'm telling you, "there's a wall there." If that has made you start looking in other directions for understanding, then it helped, even if you don't completely understand yet. If it didn't, well... You have to approach this from the POV of how concepts are used, *fundamentally*. You can't take a bunch of existing metaprogram code and tack on concepts as a constraint mechanism later. That works with generic code, which is written with concepts in mind, but not with metaprograms.
the terminal concept having little more than a value_type typedef and a member), then the intersection of all those concepts would have exactly nothing. So I couldn't do anything with it from a constrained context. What have I missed? How is the SpiritParser concept you've defined above supposed to be used?
Heck if I know; it's your library, man! It's empty because I don't know what operations your operator= needs from it.
<shrug> I leave this conversation as I leave every conversation about concepts: just as confused as when I started. I write template libraries and care about error messages. Concepts were billed as a way to improve error messages from template libraries.
That is one benefit available to a wide range of template libraries. Certainly, generic libraries accrue that benefit. I don't know whether Proto or Spirit could.
And yet nobody can tell me how to actually use concepts to implement my libraries and improve error messages.
I know you know the generic programming process, since we taught a course on it together. You have to start at the beginning and discover the concepts by looking at concrete algorithms, lifting out common requirements, and clustering those into concepts. If that doesn't work in your problem domain, concepts might not be an appropriate mechanism for what you want to do. But I wouldn't draw any conclusions about what's possible without first trying it. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Mon, 27 Sep 2010 18:35:49 -0400 David Abrahams <dave@boostpro.com> wrote:
At Mon, 27 Sep 2010 18:09:16 -0400, Eric Niebler wrote:
On 9/27/2010 5:06 PM, David Abrahams wrote:
[snip..]
<shrug> I leave this conversation as I leave every conversation about concepts: just as confused as when I started. I write template libraries and care about error messages. Concepts were billed as a way to improve error messages from template libraries.
That is one benefit available to a wide range of template libraries. Certainly, generic libraries accrue that benefit. I don't know whether Proto or Spirit could.
And yet nobody can tell me how to actually use concepts to implement my libraries and improve error messages.
I know you know the generic programming process, since we taught a course on it together. You have to start at the beginning and discover the concepts by looking at concrete algorithms, lifting out common requirements, and clustering those into concepts. If that doesn't work in your problem domain, concepts might not be an appropriate mechanism for what you want to do. But I wouldn't draw any conclusions about what's possible without first trying it.
From my experience with concepts, and my reading of the proposals, I always had a reservation about the way concepts were being sold to the masses. Publicly, many were touting concepts as a means to improve error messages - they do a good job of that, certainly, but that is not their ultimate purpose. A good few on this list, Eric and Dave included, know that the utility of concepts is principally to enforce stronger checking of code and its requirements; along the way you get better type-checking and an improvement in error messages.

I consider it unfortunate that concepts weren't spelled out in just this fashion, and my hunch is that everyone jumped on the "it will improve error messages" band-wagon because many knew the majority of developers wouldn't use concepts directly - they only serve a purpose for generic programming - but something had to be said to the rest to sell the idea of more than doubling the size of the language specification.

On the subject, I'm probably one of the few who values the template instantiation stack thrown back in the face of template errors, especially when I'm dealing with code that isn't mine, for it teaches you a bit more about the implementation of the library you happen to be using (where you may otherwise not look) and provides a means of learning. I count that as fun (others find it a pain), and I lamented for a time that concepts would take that away.

Cheers, -- Manfred

On 9/27/2010 6:35 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 18:09:16 -0400, Eric Niebler wrote:
And yet nobody can tell me how to actually use concepts to implement my libraries and improve error messages.
I know you know the generic programming process, since we taught a course on it together. You have to start at the beginning and discover the concepts by looking at concrete algorithms, lifting out common requirements, and clustering those into concepts. If that doesn't work in your problem domain, concepts might not be an appropriate mechanism for what you want to. But I wouldn't draw any conclusions about what's possible without first trying it.
I think so far I have failed to make my use case sufficiently clear, so I'll simplify and try to follow the Process. Take the following simple algorithm sum_ints:

template<class First, class Second>
int sum_ints_impl( std::pair<First, Second> const & p )
{
    return sum_ints_impl(p.first) + sum_ints_impl(p.second);
}

int sum_ints_impl( int i )
{
    return i;
}

template<class T>
int sum_ints( T const & t )
{
    return sum_ints_impl( t );
}

It should be self-explanatory. sum_ints accepts ints, or std::pairs of ints, or std::pairs of std::pairs of ints, etc.

Now, I'd like to concept-check the argument to sum_ints. What can I say about it? It must be something I can pass to sum_ints_impl. That means it must be an int or a std::pair. How shall I define this int-or-std::pair concept? So I look for requirements. Let's say there is a concept Integer that is satisfied for type int. Let's also say we define a concept Pair that checks for the presence of members "first" and "second". Finally, let's define a concept "SumIntsAble"---that is the concept that must be satisfied by the argument to the sum_ints algorithm. To me, it seems like SumIntsAble<T> is "Integer<T> || Pair<T>". (Actually, it's more complicated than that, because we must also check that Pair's first and second members are also SumIntsAble, but let's skip that.)

There is no such thing as OR constraints, but as you say we can fake it with concept maps. As a result, the concept SumIntsAble takes the intersection of the two concepts Integer and Pair, which is nothing interesting. We wouldn't be able to call the first sum_ints_impl, because that function requires "first" and "second" data members (which aren't present in the Integer concept). And we can't call the second, because that requires that the parameter is convertible to int (which Pair does not enforce). I'm stuck.

Can you show me how you would approach this problem? -- Eric Niebler BoostPro Computing http://www.boostpro.com
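[For comparison, the OR-shaped requirement is straightforward to state without concepts, as an ordinary recursive trait checked up front. A sketch using C++0x static_assert (BOOST_STATIC_ASSERT would do as well), reusing the sum_ints_impl overloads above:

#include <utility>

// Recursive trait: T qualifies iff it is int, or a std::pair whose two
// members themselves qualify.
template<class T>
struct is_sum_ints_able { static const bool value = false; };

template<>
struct is_sum_ints_able<int> { static const bool value = true; };

template<class First, class Second>
struct is_sum_ints_able<std::pair<First, Second> >
{
    static const bool value = is_sum_ints_able<First>::value
                           && is_sum_ints_able<Second>::value;
};

template<class T>
int sum_ints( T const & t )
{
    // Checked once, at the top, with a readable message:
    static_assert(is_sum_ints_able<T>::value,
                  "sum_ints requires an int or a std::pair of such, recursively");
    return sum_ints_impl( t );  // sum_ints_impl as defined above
}

The disjunction that concepts cannot express causes the trait no trouble, because nothing ever tries to typecheck the body of sum_ints against it.]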

AMDG On 9/27/2010 6:06 PM, Eric Niebler wrote:
I think so far I have failed to make my use case sufficiently clear, so I'll simplify and try to follow the Process. Take the following simple algorithm sum_ints:
template<class First, class Second>
int sum_ints_impl( std::pair<First, Second> const& p )
{
    return sum_ints_impl(p.first) + sum_ints_impl(p.second);
}

int sum_ints_impl( int i )
{
    return i;
}

template<class T>
int sum_ints( T const& t )
{
    return sum_ints_impl( t );
}
<snip>
Can you show me how you would approach this problem?
The only solution I know is to make sum_ints_impl a concept itself with concept maps for int and std::pair. Frankly, I think that moving the metaprogramming into concepts like this is a terrible idea. The current situation with template error messages may be bad, but I strongly suspect that the errors coming from such a concept metaprogramming system would be far worse, because of the "silently fail to match" behavior of concepts. In Christ, Steven Watanabe

At Mon, 27 Sep 2010 18:22:41 -0700, Steven Watanabe wrote:
AMDG
On 9/27/2010 6:06 PM, Eric Niebler wrote:
I think so far I have failed to make my use case sufficiently clear, so I'll simplify and try to follow the Process. Take the following simple algorithm sum_ints:
int sum_ints_impl( int i ); // forward declaration

template<class First, class Second>
int sum_ints_impl( std::pair<First, Second> const& p )
{
    return sum_ints_impl(p.first) + sum_ints_impl(p.second);
}

int sum_ints_impl( int i )
{
    return i;
}

template<class T>
int sum_ints( T const& t )
{
    return sum_ints_impl( t );
}
<snip>
Can you show me how you would approach this problem?
The only solution I know is to make sum_ints_impl a concept itself with concept maps for int and std::pair. Frankly, I think that moving the metaprogramming into concepts like this is a terrible idea.
That seems to presume that the use case is inherently more-naturally addressed by metaprogramming than by generic programming. That may be true for some use-cases, but I wouldn't know how to identify them. It certainly does *not* seem to be true for Eric's example.
The current situation with template error messages may be bad, but I strongly suspect that the errors coming from such a concept metaprogramming system would be far worse, because of the "silently fail to match" behavior of concepts.
You lost me there. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/10 08:21, David Abrahams wrote:
At Mon, 27 Sep 2010 18:22:41 -0700, Steven Watanabe wrote:
AMDG
On 9/27/2010 6:06 PM, Eric Niebler wrote:
I think so far I have failed to make my use case sufficiently clear, so I'll simplify and try to follow the Process. Take the following simple algorithm sum_ints:
int sum_ints_impl( int i ); // forward declaration

template<class First, class Second>
int sum_ints_impl( std::pair<First, Second> const& p )
{
    return sum_ints_impl(p.first) + sum_ints_impl(p.second);
}

int sum_ints_impl( int i )
{
    return i;
}

template<class T>
int sum_ints( T const& t )
{
    return sum_ints_impl( t );
}
<snip>
Can you show me how you would approach this problem?
The only solution I know is to make sum_ints_impl a concept itself with concept maps for int and std::pair. Frankly, I think that moving the metaprogramming into concepts like this is a terrible idea.
That seems to presume that the use case is inherently more-naturally addressed by metaprogramming than by generic programming. That may be true for some use-cases, but I wouldn't know how to identify them. It certainly does *not* seem to be true for Eric's example.
Won't it be solved if we used external free functions first() and second() that take a pair or an int, and return the pair element, or, for an int, the scalar itself and 0 respectively? The constraint then becomes BehaveAsPair, which checks, for any x of type T, whether first(x) and second(x) are well-formed. Then again, if I understood all the concept stuff correctly, one can write a sum_int_impl using this Concept or is it too hacky ?
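A minimal sketch of what Joel describes, in plain C++ without concepts; first, second, and sum_pairish are illustrative names, not from any library, and this handles only one level of nesting (the recursive case still needs a base case, which is the crux of the thread):

#include <utility>

// For an int i, first() is i itself and second() is 0,
// so an int behaves as the pair (i, 0).
int first( int i )  { return i; }
int second( int )   { return 0; }

template<class F, class S> F first( std::pair<F, S> const & p )  { return p.first; }
template<class F, class S> S second( std::pair<F, S> const & p ) { return p.second; }

// Compiles only for types T where first(t) and second(t) are well-formed,
// which is the constraint Joel's BehaveAsPair would spell out.
template<class T>
int sum_pairish( T const & t )
{
    return first( t ) + second( t );
}

int main()
{
    return ( sum_pairish( 5 ) == 5
          && sum_pairish( std::make_pair( 2, 3 ) ) == 5 ) ? 0 : 1;
}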

On Sep 28, 2010, at 5:44 AM, Joel Falcou <joel.falcou@lri.fr> wrote:
Then again, if I understood all the concept stuff correctly, one can write a sum_int_impl using this Concept or is it too hacky ?
Yes and yes :-) -- BoostPro Computing * http://boostpro.com [Sent from coveted but awkward mobile device]

On 28/09/10 14:30, Dave Abrahams wrote:
On Sep 28, 2010, at 5:44 AM, Joel Falcou<joel.falcou@lri.fr> wrote:
Then again, if I understood all the concept stuff correctly, one can write a sum_int_impl using this Concept or is it too hacky ?
Yes and yes :-)
Yes as in "hacky" I suppose ? clueless-ly yours ;)

Joel Falcou wrote:
On 28/09/10 14:30, Dave Abrahams wrote:
On Sep 28, 2010, at 5:44 AM, Joel Falcou<joel.falcou@lri.fr> wrote:
Then again, if I understood all the concept stuff correctly, one can write a sum_int_impl using this Concept or is it too hacky ?
Yes and yes :-)
Yes as in "hacky" I suppose ?
'...one can write a sum_int_impl using this Concept...'

"Yes and"

'...is it too hacky?'

"yes"

HTH, Rob

On Sep 28, 2010, at 11:10 AM, Stewart, Robert wrote:
Joel Falcou wrote:
On 28/09/10 14:30, Dave Abrahams wrote:
On Sep 28, 2010, at 5:44 AM, Joel Falcou<joel.falcou@lri.fr> wrote:
Then again, if I understood all the concept stuff correctly, one can write a sum_int_impl using this Concept or is it too hacky ?
Yes and yes :-)
Yes as in "hacky" I suppose ?
'...one can write a sum_int_impl using this Concept...'
"Yes and"
'...is it too hacky?'
"yes"
HTH,
What he said. -- Dave Abrahams BoostPro Computing http://boostpro.com

On 28/09/10 17:10, Stewart, Robert wrote:
'...one can write a sum_int_impl using this Concept...'
"Yes and"
'...is it too hacky?'
"yes"
Guess I embarrassed myself then *blushes* Why is it hacky? The fact that second(int) returns 0? Then can't we say that second(int) is ill-formed, and have HasFirstComponent and HasSecondComponent, thus making the std::pair sum require an AND relation? (or am I still guessing it wrong?)
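A rough Boost.ConceptCheck rendering of the split Joel suggests; the concept names follow his post, the first()/second() free functions are assumed from earlier in the thread, and sum_pair is an illustrative name:

#include <boost/concept_check.hpp>
#include <boost/concept/requires.hpp>
#include <utility>

int first( int i )  { return i; }
int second( int )   { return 0; }
int first( std::pair<int,int> p )  { return p.first; }
int second( std::pair<int,int> p ) { return p.second; }

template<class T>
struct HasFirstComponent
{
    BOOST_CONCEPT_USAGE(HasFirstComponent)
    {
        int a = first(x); // first(x) must be well-formed and int-valued
        (void)a;
    }
    T x;
};

template<class T>
struct HasSecondComponent
{
    BOOST_CONCEPT_USAGE(HasSecondComponent)
    {
        int b = second(x); // likewise for second(x)
        (void)b;
    }
    T x;
};

// The AND relation Joel asks about: require both concepts at once.
template<class T>
BOOST_CONCEPT_REQUIRES(
    ((HasFirstComponent<T>)) ((HasSecondComponent<T>)),
    (int)
) sum_pair( T const & t )
{
    return first( t ) + second( t );
}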

On Tue, Sep 28, 2010 at 12:44 PM, joel falcou <joel.falcou@lri.fr> wrote:
On 28/09/10 17:10, Stewart, Robert wrote:
'...one can write a sum_int_impl using this Concept...'
"Yes and"
'...is it too hacky?'
"yes"
Guess I embarrassed myself then *blushes* Why is it hacky? The fact that second(int) returns 0? Then can't we say that second(int) is ill-formed, and have HasFirstComponent and HasSecondComponent, thus making the std::pair sum require an AND relation?

(or am I still guessing it wrong?)
Actually, I don't think it's a bad hack. Formally, summation in the integers is a _binary_ operation, i.e. a function from the Cartesian product Z^2 into Z. I would write Eric's functions using something like...

int sum_int(int x, int y = 0) { return x + y; }
int sum_int(pair<int,int> p)  { return sum_int(first(p), second(p)); }

To "conceptually" describe the argument(s) of sum_int, you need an abstraction to represent members of Z^2. But in this case I'm not sure there's a satisfying way to do that, because an ordered pair from Z^2 can be encoded in C++ as a single function parameter (a pair) or by two function parameters... To me this seems related to a persistent kink in mathematical notation. Let f : Z^2 -> Z. But we write f(x, y) for <x,y> in Z^2, instead of the more formal f(<x,y>).

Also, for those who haven't read it, I would recommend chapter 2 of Jeremy Siek's book The Boost Graph Library as an introduction to concepts. It's a great piece of expository writing and helped me a lot. Daniel Walker

At Tue, 28 Sep 2010 18:44:27 +0200, joel falcou wrote:
On 28/09/10 17:10, Stewart, Robert wrote:
'...one can write a sum_int_impl using this Concept...'
"Yes and"
'...is it too hacky?'
"yes"
Guess I embarrassed myself then *blushes* Why is it hacky? The fact that second(int) returns 0? Then can't we say that second(int) is ill-formed, and have HasFirstComponent and HasSecondComponent, thus making the std::pair sum require an AND relation?

(or am I still guessing it wrong?)
This BehaveAsPair thing doesn't identify any real abstraction. I promise you're not going to come across any uses for BehaveAsPair other than in solving the exact implementation problem you're attacking. In short, it lacks "truthiness." :-) Was there something unsatisfying about the solution I posted in http://permalink.gmane.org/gmane.comp.lib.boost.devel/209012 ? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 28/09/10 19:37, David Abrahams wrote:
This BehaveAsPair thing doesn't identify any real abstraction. I promise you're not going to come across any uses for BehaveAsPair other than in solving the exact implementation problem you're attacking.
I see
In short, it lacks "truthiness." :-)
Agreed
Was there something unsatisfying about the solution I posted in http://permalink.gmane.org/gmane.comp.lib.boost.devel/209012 ?
Nope, I just missed it. /me goes back to his Concepts base stuff

On Tue, Sep 28, 2010 at 1:48 PM, joel falcou <joel.falcou@lri.fr> wrote:
On 28/09/10 19:37, David Abrahams wrote:
This BehaveAsPair thing doesn't identify any real abstraction. I promise you're not going to come across any uses for BehaveAsPair other than in solving the exact implementation problem you're attacking.
I see
In short, it lacks "truthiness." :-)
Agreed
Was there something unsatisfying about the solution I posted in http://permalink.gmane.org/gmane.comp.lib.boost.devel/209012 ?
Nope, I just missed it.
Oh wait! I think I have another example that's more truthy. :) I'll use Boost.ConceptCheck to flesh it out. Let's define the concept of an AdditivePair to represent the arguments to binary addition. To model the concept a type needs to support first(x), second(x) and add(x) for an object x where all three functions have integer values. So, the (simplest) concept checking class would be:

template <class T>
struct AdditivePair
{
    BOOST_CONCEPT_USAGE(AdditivePair)
    {
        int a = first(x);
        int b = second(x);
        int c = add(x);
    }
    T x;
};

We can now write Eric's function using concept requirements.

template<class T>
BOOST_CONCEPT_REQUIRES(
    ((AdditivePair<T>)),
    (int)
) sum_ints( T const & t )
{
    return add( t );
}

We can now adapt std::pair<int,int> to model the AdditivePair concept.

int first(std::pair<int,int> x) { return x.first; }
int second(std::pair<int,int> x) { return x.second; }
int add(std::pair<int,int> x) { return first(x) + second(x); }

And we can also adapt int to model the AdditivePair concept; i.e. int is an additive pair where the second member is 0.

int first(int x) { return x; }
int second(int x) { return 0; }
int add(int x) { return x; }

That seems pretty natural to me. Daniel Walker
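For what it's worth, a quick usage sketch of the version above, assuming the concept class, sum_ints, and both sets of adapters are in scope; the expected values are just illustrations. A type supplying none of the three functions now trips the AdditivePair usage check up front instead of failing deep inside the algorithm:

#include <utility>

int main()
{
    int a = sum_ints( std::make_pair( 1, 2 ) ); // pair adapters: add() == 1 + 2
    int b = sum_ints( 41 );                     // int adapters: add(41) == 41
    return ( a == 3 && b == 41 ) ? 0 : 1;
}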

On Tue, Sep 28, 2010 at 3:36 PM, Daniel Walker <daniel.j.walker@gmail.com> wrote:
On Tue, Sep 28, 2010 at 1:48 PM, joel falcou <joel.falcou@lri.fr> wrote:
On 28/09/10 19:37, David Abrahams wrote:
This BehaveAsPair thing doesn't identify any real abstraction. I promise you're not going to come across any uses for BehaveAsPair other than in solving the exact implementation problem you're attacking.
I see
In short, it lacks "truthiness." :-)
Agreed
Was there something unsatisfying about the solution I posted in http://permalink.gmane.org/gmane.comp.lib.boost.devel/209012 ?
Nope, I just missed it.
Oh wait! I think I have another example that's more truthy. :) I'll use Boost.ConceptCheck to flesh it out. Let's define the concept of an AdditivePair to represent the arguments to binary addition. To model the concept a type needs to support first(x), second(x) and add(x) for an object x where all three functions have integer values. So, the (simplest) concept checking class would be:
template <class T>
struct AdditivePair
{
    BOOST_CONCEPT_USAGE(AdditivePair)
    {
        int a = first(x);
        int b = second(x);
        int c = add(x);
    }
    T x;
};

We can now write Eric's function using concept requirements.

template<class T>
BOOST_CONCEPT_REQUIRES(
    ((AdditivePair<T>)),
    (int)
) sum_ints( T const & t )
{
    return add( t );
}

We can now adapt std::pair<int,int> to model the AdditivePair concept.

int first(std::pair<int,int> x) { return x.first; }
int second(std::pair<int,int> x) { return x.second; }
int add(std::pair<int,int> x) { return first(x) + second(x); }

And we can also adapt int to model the AdditivePair concept; i.e. int is an additive pair where the second member is 0.

int first(int x) { return x; }
int second(int x) { return 0; }
int add(int x) { return x; }
That seems pretty natural to me.
Crap. I just realized that Eric's original function was recursive! Doh! Let me try that again. To handle recursive pairs, the AdditivePair concept needs a pair_traits class, which will determine the return types of first(x) and second(x). The concept checking class is almost the same as before:

template<class> struct pair_traits;

template <class T>
struct AdditivePair
{
    BOOST_CONCEPT_USAGE(AdditivePair)
    {
        typename pair_traits<T>::first_type a = first(x);
        typename pair_traits<T>::second_type b = second(x);
        int c = add(x);
    }
    T x;
};

Eric's function can still use concept requirements. In fact, no changes are required for the recursive version.

template<class T>
BOOST_CONCEPT_REQUIRES(
    ((AdditivePair<T>)),
    (int)
) sum_ints( T const & t )
{
    return add( t );
}

Adapting pair<int,int> and int to model AdditivePair is basically the same, but now we need to specialize the traits class.

// adapt pair
int first(std::pair<int,int> x) { return x.first; }
int second(std::pair<int,int> x) { return x.second; }
int add(std::pair<int,int> x) { return first(x) + second(x); }

template<>
struct pair_traits<std::pair<int,int> >
{
    typedef int first_type;
    typedef int second_type;
};

// adapt int
int first(int x) { return x; }
int second(int x) { return 0; }
int add(int x) { return x; }

template<>
struct pair_traits<int>
{
    typedef int first_type;
    typedef int second_type;
};

And now we can adapt recursive pairs to model the AdditivePair concept.

template<class T0, class T1> T0 first(std::pair<T0,T1> x) { return x.first; }
template<class T0, class T1> T1 second(std::pair<T0,T1> x) { return x.second; }
template<class T0, class T1> int add(std::pair<T0,T1> x) { return add(first(x)) + add(second(x)); }

template<class T0, class T1>
struct pair_traits<std::pair<T0,T1> >
{
    typedef T0 first_type;
    typedef T1 second_type;
};

Now that works as expected. Sorry for any confusion. (It illustrates an interesting point, though. I was concerned with the base case in Eric's recursion, where I saw a unary function called sum_ints, which didn't make sense to me conceptually, since addition is a binary operation. So I was focused on trying to work out the concept modeled by the argument to sum_ints, and missed the fact that sum_ints was recursive. Later, though, having the concept worked out made implementing the recursive version trivial.) Daniel Walker
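And a usage sketch for the recursive version, again assuming everything above is in scope; the nested pair exercises both the pair_traits specializations and the recursive add:

#include <utility>

int main()
{
    std::pair<std::pair<int, int>, int> p( std::make_pair( 1, 2 ), 3 );
    return ( sum_ints( p ) == 6 && sum_ints( 7 ) == 7 ) ? 0 : 1;
}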

At Tue, 28 Sep 2010 15:36:30 -0400, Daniel Walker wrote:
Was there something unsatisfying about the solution I posted in http://permalink.gmane.org/gmane.comp.lib.boost.devel/209012 ?
Nope, I just missed it.
Oh wait! I think I have another example that's more truthy. :) I'll use Boost.ConceptCheck to flesh it out. Let's define the concept of an AdditivePair to represent the arguments to binary addition. To model the concept a type needs to support first(x), second(x) and add(x) for an object x where all three functions have integer values.
<snip>
We can now adapt std::pair<int,int> to model the AdditivePair concept.
<snip>
And we can also adapt int to model the AdditivePair concept; i.e. int is an additive pair where the second member is 0.
<snip>
That seems pretty natural to me.
Do you honestly think it's as truthy as the solution I offered? IMO it's a slight improvement over the list's previous offering, but making int model AdditivePair is still an odd contortion. AdditivePair still doesn't represent a real abstraction that you'd use anywhere else. By contrast, you could make int model ComplexNumber; that would make real sense and be useful outside of this toy problem. Are you just exploring, or is there another reason we keep heading down this path? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, Sep 28, 2010 at 9:14 PM, David Abrahams <dave@boostpro.com> wrote:
At Tue, 28 Sep 2010 15:36:30 -0400, Daniel Walker wrote:
Was there something unsatisfying about the solution I posted in http://permalink.gmane.org/gmane.comp.lib.boost.devel/209012 ?
Nope, I just missed it.
Oh wait! I think I have another example that's more truthy. :) I'll use Boost.ConceptCheck to flesh it out. Let's define the concept of an AdditivePair to represent the arguments to binary addition. To model the concept a type needs to support first(x), second(x) and add(x) for an object x where all three functions have integer values.
<snip>
We can now adapt std::pair<int,int> to model the AdditivePair concept.
<snip>
And we can also adapt int to model the AdditivePair concept; i.e. int is an additive pair where the second member is 0.
<snip>
That seems pretty natural to me.
Do you honestly think it's as truthy as the solution I offered?
Oh, I didn't mean to compare my example to yours. I only meant mine was more truthy than the previous example in the thread that used first(x) and second(x). Describing the concept in terms of pairs is more natural when you consider that an addition operation always has a first and second argument. Your solution comes at it from a different angle, which is also plenty truthy: the pairs are being reduced to int values, so in that sense IntValued naturally arises as a concept.
IMO it's a slight improvement over the list's previous offering, but making int model AdditivePair is still an odd contortion. AdditivePair still doesn't represent a real abstraction that you'd use anywhere else. By contrast, you could make int model ComplexNumber; that would make real sense and be useful outside of this toy problem.
Are you just exploring, or is there another reason we keep heading down this path?
Just exploring. :) It's a good toy problem to see how concepts can express the type requirements of a function, even when the types are seemingly unrelated like int and pair. It works in this case, because of the semantics of addition, so it also illustrates how semantics are important to concepts, as you have stressed. Daniel Walker

At Mon, 27 Sep 2010 21:06:05 -0400, Eric Niebler wrote:
On 9/27/2010 6:35 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 18:09:16 -0400, Eric Niebler wrote:
And yet nobody can tell me how to actually use concepts to implement my libraries and improve error messages.
I know you know the generic programming process, since we taught a course on it together. You have to start at the beginning and discover the concepts by looking at concrete algorithms, lifting out common requirements, and clustering those into concepts. If that doesn't work in your problem domain, concepts might not be an appropriate mechanism for what you want to do. But I wouldn't draw any conclusions about what's possible without first trying it.
I think so far I have failed to make my use case sufficiently clear, so I'll simplify and try to follow the Process. Take the following simple algorithm sum_ints:
int sum_ints_impl( int i ); // forward declaration

template<class First, class Second>
int sum_ints_impl( std::pair<First, Second> const & p )
{
    return sum_ints_impl(p.first) + sum_ints_impl(p.second);
}

int sum_ints_impl( int i )
{
    return i;
}

template<class T>
int sum_ints( T const & t )
{
    return sum_ints_impl( t );
}
Should be self explanatory. sum_ints accepts ints or std::pairs of ints, or std::pairs of std::pairs of ints, etc. Now, I'd like to concept-check the argument to sum_ints. What can I say about it? It must be something I can pass to sum_ints_impl. That means it must be an int or a std::pair. How shall I define this int-or-std::pair concept?
Naturally, it's hard to do the Process justice with one tiny example here... but it doesn't seem right to define a concept that must have a particular type structure, any more than you'd define a concept that must be either type A or type B. It's like you didn't lift the abstraction far enough. I guess one test of a proper concept is that it should be possible for someone to create a "new" type that models it (for some reasonable definition of "new" that doesn't include a slightly different instantiation of the same class template).
So I look for requirements. Let's say there is a concept Integer that is satisfied for type int. Let's also say we define a concept Pair that checks for the presence of members "first" and "second". Finally, let's define a concept "SumIntsAble"---that is the concept that must be satisfied by the argument to the sum_ints algorithm. To me, it seems like SumIntsAble<T> is "Integer<T> || Pair<T>". (Actually, it's more complicated than that because we must also check that Pair's first and second members are also SumIntsAble, but let's skip that.)
No need to skip it. This would be a way to solve the same problem without violating the spirit of concepts:

namespace my_limited_domain
{
    concept IntValued<typename T>
    {
        int value(T);
    }

    concept_map IntValued<int>
    {
        int value(int x) { return x; }
    };

    template <IntValued L, IntValued R>
    concept_map IntValued< std::pair<L,R> >
    {
        int value(std::pair<L,R> const& x)
        {
            return value(x.first) + value(x.second);
        }
    };

    template <IntValued X>
    int sum_ints(X const& a) // rename to "get_value"
    {
        return value(a);
    }
}
There is no such thing as OR constraints, but as you say we can fake it with concept maps.
"Faking it" is the wrong way to look at it; you're still banging your head against that wall, brother. I hope you're wearing a helmet, at least! If you're "doing generic programming," you pretty much never get to the point of wanting to constrain something to be "either this or that." The whole point is to identify a class of types that has a meaning, like IntValued above (which still isn't really as general as it should be, but let's leave it there for now). You end up with a class of types that's *open for extension*. I suppose you could artificially wall off that class of types by not documenting IntValued, but then what would it mean to a user when the error message says, "sorry, that type doesn't model IntValued"? Not much, IMO. So then you could change the name of IntValued to something like MustBeAnIntOrAPairOfIntsOrAPairOfPairsOrEtc, and then you'd be right back in the land of MPL_ASSERT_MESSAGE, yuck! If you want to reap the benefits of concepts, you have to work with them, not against 'em.
As a result, the concept SumIntsAble takes the intersection of the two concepts Integer and Pair. Which is nothing interesting. We wouldn't be able to call the first sum_ints_impl because that function requires "first" and "second" data members (which aren't present in the Integer concept). And we can't call the second because that requires that the parameter is convertible to int (which Pair does not enforce).
I'm stuck.
Can you show me how you would approach this problem?
Does the above help? hoping-erics-head-doesn't-hurt-too-much-ly y'rs, -- Dave Abrahams BoostPro Computing http://www.boostpro.com
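For readers without a concepts compiler (the syntax above is from the since-withdrawn C++0x proposal and never shipped), the shape of Dave's solution can be approximated today with plain overloads standing in for the concept_maps. What's lost is exactly the early checking: nothing here constrains X, so a bad argument still fails inside value(). A sketch, not Dave's actual code:

#include <utility>

// stands in for concept_map IntValued<int>
int value( int x )
{
    return x;
}

// stands in for concept_map IntValued< std::pair<L,R> >
template<class L, class R>
int value( std::pair<L, R> const & x )
{
    return value( x.first ) + value( x.second );
}

template<class X>
int sum_ints( X const & a ) // rename to "get_value"
{
    return value( a );
}

int main()
{
    return sum_ints( std::make_pair( std::make_pair( 1, 2 ), 3 ) ) == 6 ? 0 : 1;
}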

(Sorry for going AWOL. I was on a plane most of yesterday, and will be away from the computer most of today. But this thread has been very illuminating.) On 9/28/2010 2:14 AM, David Abrahams wrote:
This would be a way to solve the same problem without violating the spirit of concepts:
namespace my_limited_domain
{
    concept IntValued<typename T>
    {
        int value(T);
    }

    concept_map IntValued<int>
    {
        int value(int x) { return x; }
    };

    template <IntValued L, IntValued R>
    concept_map IntValued< std::pair<L,R> >
    {
        int value(std::pair<L,R> const& x)
        {
            return value(x.first) + value(x.second);
        }
    };

    template <IntValued X>
    int sum_ints(X const& a) // rename to "get_value"
    {
        return value(a);
    }
}
Got it. You solved this particular problem by moving the logic, the very algorithm itself, into a concept map. (And Steven anticipated where I was going with this by bringing up Fusion algorithms, because this is essentially a trivial (segmented) Fusion algorithm.) Putting the algorithm into the concept map is clever, but I find it somewhat distasteful. ("We can improve your template errors! Just don't write templates. Here are some concept maps for your trouble." ;-) There's a more substantial objection below.

Let me make the analogy with Proto and Spirit explicit. The tree of std::pairs is like a Proto tree. The IntValued is like the SpiritParser concept (ill-named because the raw Proto tree is not yet a Spirit parser), and sum_ints is the Proto transform that traverses the tree and turns it into a Spirit parser---a necessarily domain-specific algorithm.

The "more substantial" objection is that the tree traversal is now done in the domain-specific part of the library. In Proto, the traversal is done by Proto itself, which provides various traversal strategies (more coming eventually, I hope). Traversal of a Proto tree is actually a very tricky problem---branch selection is driven by (sub-)grammar matching---so Proto is the best place for the traversal logic to live. On the face of it, it looks like this is incompatible with your solution above, which would require the traversal logic to live in Spirit's concept maps. Kind of hand-wavy, I know, but have I made my concerns understood?

Another potential problem is the building of Proto expression trees. The operator overloads that build Proto trees also live in Proto. If I were to concept-ify Proto, I imagine I'd want to define the ProtoExpr concept (forgive my poor syntax, I hope intent is clear):

concept ProtoExpr<typename T>
{
    // A ProtoExpr has a + operator that returns a new ProtoExpr
    template<ProtoExpr RHS>
    ProtoExpr T::operator+(T, RHS);

    template<ProtoExpr LHS>
    ProtoExpr T::operator+(LHS, T);

    // ... and so on for all the other operators.
}

This makes ProtoExpr's totally general, as they are today. Now, third party code can use ProtoExpr to constrain their templates...

template<ProtoExpr A, ProtoExpr B>
void add_expressions(A const &a, B const &b)
{
    a + b; // OK
}

But when type-checking this template, all we know about a+b is that it is a ProtoExpr. If we want to sum_ints, for instance:

template<ProtoExpr A, ProtoExpr B>
void add_expressions(A const &a, B const &b)
{
    sum_ints( a + b ); // ERROR: a+b is not IntValued
}

The body of add_expressions won't type-check because the compiler can't verify that a+b is in fact IntValued. Building expression trees generically and traversing/manipulating them generically in concept-ified code is always where my reasoning breaks down. Thoughts? -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Wed, Sep 29, 2010 at 12:34 PM, Eric Niebler <eric@boostpro.com> wrote:
Building expression trees generically and traversing/manipulating them generically in concept-ified code is always where my reasoning breaks down. Thoughts?
A tree is just a particular kind of graph. I've found the BGL concepts useful for specifying APIs that deal with trees; the concepts are more generic than I need but that has never hurt anything. So, one place to start, at least, would be the BGL's conceptualization of graphs (vertices, edges, etc.). It might give you some inspiration/ideas you could build on. Daniel Walker

On 9/29/2010 9:35 AM, Eric Niebler wrote:
(Sorry for going AWOL. I was on a plane most of yesterday, and will be away from the computer most of today. But this thread has been very illuminating.)
On 9/28/2010 2:14 AM, David Abrahams wrote:
This would be a way to solve the same problem without violating the spirit of concepts:
namespace my_limited_domain
{
    concept IntValued<typename T>
    {
        int value(T);
    }

    concept_map IntValued<int>
    {
        int value(int x) { return x; }
    };

    template <IntValued L, IntValued R>
    concept_map IntValued< std::pair<L,R> >
    {
        int value(std::pair<L,R> const& x)
        {
            return value(x.first) + value(x.second);
        }
    };

    template <IntValued X>
    int sum_ints(X const& a) // rename to "get_value"
    {
        return value(a);
    }
}
Got it. You solved this particular problem by moving the logic, the very algorithm itself, into a concept map. (And Steven anticipated where I was going with this by bringing up Fusion algorithms, because this is essentially a trivial (segmented) Fusion algorithm.) Putting the algorithm into the concept map is clever, but I find it somewhat distasteful. ("We can improve your template errors! Just don't write templates. Here are some concept maps for your trouble." ;-) There's a more substantial objection below.
Let me make the analogy with Proto and Spirit explicit. The tree of std::pairs is like a Proto tree. The IntValued is like the SpiritParser concept (ill-named because the raw Proto tree is not yet a Spirit parser), and sum_ints is the Proto transform that traverses the tree and turns it into a Spirit parser---a necessarily domain-specific algorithm.
The "more substantial" objection is that the tree traversal is now done in the domain-specific part of the library. In Proto, the traversal is done by Proto itself, which provides various traversal strategies (more coming eventually, I hope). Traversal of a Proto tree is actually a very tricky problem---branch selection is driven by (sub-)grammar matching---so Proto is the best place for the traversal logic to live. On the face of it, it looks like this is incompatible with your solution above, which would require the traversal logic to live in Spirit's concept maps. Kind of hand-wavy, I know, but have I made my concerns understood?
Another potential problem is the building of Proto expression trees. The operator overloads that build Proto trees also live in Proto. If I were to concept-ify Proto, I imagine I'd want to define the ProtoExpr concept (forgive my poor syntax, I hope intent is clear):
concept ProtoExpr<typename T>
{
    // A ProtoExpr has a + operator that returns a new ProtoExpr
    template<ProtoExpr RHS>
    ProtoExpr T::operator+(T, RHS);

    template<ProtoExpr LHS>
    ProtoExpr T::operator+(LHS, T);

    // ... and so on for all the other operators.
}
Let us just assume Proto only has binary plus and terminals. Adding in the other operators is more of the same.

concept Proto<typename T> { } // trivial

concept ProtoBinaryPlus<typename L, typename R>
{
    requires Proto<L> && Proto<R>;
    Proto result_type;
    result_type operator+ (L, R);
}
This makes ProtoExpr's totally general, as they are today. Now, third party code can use ProtoExpr to constrain their templates...
template<ProtoExpr A, ProtoExpr B>
void add_expressions(A const &a, B const &b)
{
    a + b; // OK
}
The general case function add_expressions as provided by Proto (which would capture any type that satisfies the requirements) is:

template <Proto L, Proto R>
requires ProtoBinaryPlus<L, R>
void add_expressions(L const& l, R const& r)
{
    l + r; // ok; general case
}
But when type-checking this template, all we know about a+b is that it is a ProtoExpr. If we want to sum_ints, for instance:
template<ProtoExpr A, ProtoExpr B>
void add_expressions(A const &a, B const &b)
{
    sum_ints( a + b ); // ERROR: a+b is not IntValued
}
This is the user's specialization of the Proto library; it is just *one* way of specifying the constraints on the expression. The set of constraints on the following code is a refinement of the general Proto "add_expressions" function.

template <Proto L, Proto R>
requires ProtoBinaryPlus<L, R>
      && IntValued<ProtoBinaryPlus<L, R>::result_type>
void add_expressions(L const& l, R const& r)
{
    sum_ints( l + r ); // ok; specialization added by the user
}

Perhaps I don't really understand your problem well enough? I'm assuming the first "add_expressions" function is the general function that Proto provides. The second "add_expressions" function is a user-defined function.
The body of add_expressions won't type-check because the compiler can't verify that a+b is in fact IntValued.
Building expression trees generically and traversing/manipulating them generically in concept-ified code is always where my reasoning breaks down. Thoughts?
-- Eric Niebler BoostPro Computing http://www.boostpro.com

AMDG On 9/27/2010 12:57 PM, Eric Niebler wrote:
On 9/27/2010 3:01 PM, David Abrahams wrote:
Also there's no real-world scenario that can benefit from it. There's nothing that OR constraints could express that couldn't also be expressed with concept_maps, and once you got inside the constrained template you wouldn't be able to use any operations that aren't in the intersection of the constraints (remember that template bodies need to typecheck against their constraints).
The /intersection/? This seems like a real problem then, doesn't it? I want to say that a SpiritParser is a SpiritNode<plus> OR a SpiritNode<right_shift> OR a SpiritNode<terminal> (the plus and right_shift concepts being recursively defined in terms of SpiritParser, the terminal concept having little more than a value_type typedef and a member), the intersection of all those concepts would have exactly nothing. So I couldn't do anything with it from a constrained context. What have I missed? How is the SpiritParser concept you've defined above supposed to be used?
IMHO, the fundamental problem is that concepts guarantee that if the template type-checks against the concept definition, and the template arguments are models of the concept, then there will be no type errors when the template is instantiated. This is a nice property, but the problem is that for template metaprogramming, the type /is/ the data. The things that would normally be runtime errors become type errors. Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee. In Christ, Steven Watanabe
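A tiny illustration of the type-is-the-data point, using C++11's std::tuple for brevity (any compile-time sequence would show the same thing):

#include <tuple>

int main()
{
    std::tuple<int, long> t( 1, 2L );
    long a = std::get<1>( t );    // in range: fine
    // long b = std::get<5>( t ); // out of range: what would be a runtime
                                  // bounds error for a runtime container is
                                  // a type error here, reported at
                                  // instantiation time
    (void)a;
    return 0;
}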

On 9/27/2010 5:13 PM, Steven Watanabe wrote:
On 9/27/2010 12:57 PM, Eric Niebler wrote:
On 9/27/2010 3:01 PM, David Abrahams wrote:
Also there's no real-world scenario that can benefit from it. There's nothing that OR constraints could express that couldn't also be expressed with concept_maps, and once you got inside the constrained template you wouldn't be able to use any operations that aren't in the intersection of the constraints (remember that template bodies need to typecheck against their constraints).
The /intersection/? This seems like a real problem then, doesn't it? I want to say that a SpiritParser is a SpiritNode<plus> OR a SpiritNode<right_shift> OR a SpiritNode<terminal> (the plus and right_shift concepts being recursively defined in terms of SpiritParser, the terminal concept having little more than a value_type typedef and a member), the intersection of all those concepts would have exactly nothing. So I couldn't do anything with it from a constrained context. What have I missed? How is the SpiritParser concept you've defined above supposed to be used?
IMHO, the fundamental problem is that concepts guarantee that if the template type-checks against the concept definition, and the template arguments are models of the concept, then there will be no type errors when the template is instantiated. This is a nice property, but the problem is that for template metaprogramming, the type /is/ the data. The things that would normally be runtime errors become type errors. Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee.
Thanks, Steven. So if I understand you correctly, you're of the opinion that using C++0x concepts to type-check Spirit's templates (or other Proto-based EDSL, or even metaprograms in general) is not going to work? -- Eric Niebler BoostPro Computing http://www.boostpro.com

AMDG On 9/27/2010 3:00 PM, Eric Niebler wrote:
Thanks, Steven. So if I understand you correctly, you're of the opinion that using C++0x concepts to type-check Spirit's templates (or other Proto-based EDSL, or even metaprograms in general) is not going to work?
No, I don't. In Christ, Steven Watanabe

At Mon, 27 Sep 2010 14:13:26 -0700, Steven Watanabe wrote:
IMHO, the fundamental problem is that concepts guarantee that if the template type-checks against the concept definition, and the template arguments are models of the concept, then there will be no type errors when the template is instantiated.
Right in principle about how concepts work. [Full disclosure: there are some corner cases where you can still have a type error but those are waaaaaaaay off in the obscure corners. Concepts catch all the basic cases where you try to use an operation that's not guaranteed by the constraints. However, I don't yet see what makes that the *fundamental* problem.]... ...but I'm not even sure there is a problem yet, even though I was one of the first people to recognize that concepts clash with TMP as practiced today. The question is whether a different approach can accomplish the same ends. As far as I'm concerned, that's still an open question.
This is a nice property, but the problem is that for template metaprogramming, the type /is/ the data. The things that would normally be runtime errors become type errors. Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee.
I think you mean we don't apply this level of runtime checking to runtime code? But isn't that exactly what contract programming is? (http://en.wikipedia.org/wiki/Design_by_contract#Languages_with_native_suppor... -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG On 9/27/2010 3:15 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 14:13:26 -0700, Steven Watanabe wrote:
IMHO, the fundamental problem is that concepts guarantee that if the template type-checks against the concept definition, and the template arguments are models of the concept, then there will be no type errors when the template is instantiated.
Right in principle about how concepts work. [Full disclosure: there are some corner cases where you can still have a type error but those are waaaaaaaay off in the obscure corners. Concepts catch all the basic cases where you try to use an operation that's not guaranteed by the constraints. However, I don't yet see what makes that the *fundamental* problem.]...
...but I'm not even sure there is a problem yet, even though I was one of the first people to recognize that concepts clash with TMP as practiced today. The question is whether a different approach can accomplish the same ends. As far as I'm concerned, that's still an open question.
Until someone finds a way, it might as well not be possible.
This is a nice property, but the problem is that for template metaprogramming, the type /is/ the data. The things that would normally be runtime errors become type errors. Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee.

I think you mean we don't apply this level of runtime checking to runtime code?
No, I really do mean type checking. If template A calls template B, then it won't compile unless the constraints on the parameters of A /guarantee/ that the parameters of B will be correct, right? Runtime checks would only verify that specific concrete parameters to B are correct.
But isn't that exactly what contract programming is? (http://en.wikipedia.org/wiki/Design_by_contract#Languages_with_native_suppor...
I don't know that much about contract programming, but I thought that languages using it had to fall back to dynamic checks some of the time. In Christ, Steven Watanabe

At Mon, 27 Sep 2010 18:49:06 -0700, Steven Watanabe wrote:
...but I'm not even sure there is a problem yet, even though I was one of the first people to recognize that concepts clash with TMP as practiced today. The question is whether a different approach can accomplish the same ends. As far as I'm concerned, that's still an open question.
Until someone finds a way, it might as well not be possible.
That might be a little like saying, "until someone demonstrates that you can write the Sieve of Eratosthenes algorithm in BASIC, it might as well not be possible." At some level, all programming paradigms are equivalent.
This is a nice property, but the problem is that for template metaprogramming, the type /is/ the data. The things that would normally be runtime errors become type errors.
So here you're making an analogy by mapping the runtime world into the compile-time world.
Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee.
I think you mean we don't apply this level of runtime checking to runtime code?
No, I really do mean type checking.
I don't see how that's consistent with your analogy. When I map back into the runtime world, why doesn't type checking become runtime (value) checking again?
If template A calls template B, then it won't compile unless the constraints on the parameters of A /guarantee/ that the parameters of B will be correct, right? Runtime checks would only verify that specific concrete parameters to B are correct.
And the above seems to continue to mix up the analogy in the same way. I must be totally misunderstanding you.
But isn't that exactly what contract programming is? (http://en.wikipedia.org/wiki/Design_by_contract#Languages_with_native_suppor...
I don't know that much about contract programming, but I thought that languages using it had to fall back to dynamic checks some of the time.
Exactly, all the time. That's my point :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG On 9/27/2010 11:28 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 18:49:06 -0700, Steven Watanabe wrote:
...but I'm not even sure there is a problem yet, even though I was one of the first people to recognize that concepts clash with TMP as practiced today. The question is whether a different approach can accomplish the same ends. As far as I'm concerned, that's still an open question.

Until someone finds a way, it might as well not be possible.

That might be a little like saying, "until someone demonstrates that you can write the Sieve of Eratosthenes algorithm in BASIC, it might as well not be possible." At some level, all programming paradigms are equivalent.
You, yourself, just said that it was an open question. Would you describe writing the sieve of Eratosthenes in BASIC as an open question? FWIW, I'm sure that there /exists/ another way to do the things that we do now with TMP, which will work with concepts. I'm not at all convinced that it won't have more problems than current TMP.
This is a nice property, but the problem is that for template metaprogramming, the type /is/ the data. The things that would normally be runtime errors become type errors.

So here you're making an analogy by mapping the runtime world into the compile-time world.
Exactly.
Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee.

I think you mean we don't apply this level of runtime checking to runtime code?

No, I really do mean type checking.

I don't see how that's consistent with your analogy. When I map back into the runtime world, why doesn't type checking become runtime (value) checking again?
This would be true without C++0x concepts. Half of concepts is type checking the bodies of templates when they are defined. These checks are equivalent to ordinary type checking in the runtime world. The problem is that concepts force all checks into this realm.
But isn't that exactly what contract programming is? (http://en.wikipedia.org/wiki/Design_by_contract#Languages_with_native_suppor...

I don't know that much about contract programming, but I thought that languages using it had to fall back to dynamic checks some of the time.

Exactly, all the time. That's my point :-)
This is equivalent to throwing static_assert all over the place, which I'm all in favor of. It isn't equivalent to C++0x concepts, if I understand them correctly. In Christ, Steven Watanabe
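By way of contrast with concepts' definition-time checking, a sketch of the static_assert style Steven endorses: the check fires at instantiation, for the concrete types actually used. C++11 static_assert is shown for brevity (BOOST_MPL_ASSERT_MSG plays the same role in C++03), and is_summable is an illustrative name, not a real trait:

#include <type_traits>
#include <utility>

template<class T>
struct is_summable : std::is_same<T, int> {};

template<class F, class S>
struct is_summable< std::pair<F, S> >
    : std::integral_constant<bool,
          is_summable<F>::value && is_summable<S>::value> {};

template<class T>
int sum_ints( T const & t )
{
    // A "dynamic check" in the compile-time world: evaluated only when
    // sum_ints<T> is instantiated with a concrete T, not when it is defined.
    static_assert( is_summable<T>::value,
                   "sum_ints requires an int or nested std::pairs of ints" );
    (void)t;
    return 0; // implementation elided for the sketch
}

int main()
{
    return sum_ints( std::make_pair( 1, std::make_pair( 2, 3 ) ) );
}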

At Tue, 28 Sep 2010 10:12:29 -0700, Steven Watanabe wrote:
AMDG
On 9/27/2010 11:28 PM, David Abrahams wrote:
At Mon, 27 Sep 2010 18:49:06 -0700, Steven Watanabe wrote:
...but I'm not even sure there is a problem yet, even though I was one of the first people to recognize that concepts clash with TMP as practiced today. The question is whether a different approach can accomplish the same ends. As far as I'm concerned, that's still an open question.

Until someone finds a way, it might as well not be possible.

That might be a little like saying, "until someone demonstrates that you can write the Sieve of Eratosthenes algorithm in BASIC, it might as well not be possible." At some level, all programming paradigms are equivalent.
You, yourself, just said that it was an open question.
Yes. That doesn't make it as good as impossible. There are things that are known/proven to be impossible, you know.
Would you describe writing the sieve of Eratosthenes in BASIC as an open question? FWIW, I'm sure that there /exists/ another way to do the things that we do now with TMP, which will work with concepts. I'm not at all convinced that it won't have more problems than current TMP.
Me neither; I think we just don't know. AFAIK nobody has tried very hard to look for concepts-compatible solutions to the same problems we currently use TMP for.
Thus, once you actually instantiate the template, there can be no errors whatsoever. We don't apply this level of type-checking to runtime code, because it's simply impractical to guarantee.

I think you mean we don't apply this level of runtime checking to runtime code?

No, I really do mean type checking.

I don't see how that's consistent with your analogy. When I map back into the runtime world, why doesn't type checking become runtime (value) checking again?
This would be true without C++0x concepts. Half of concepts is type checking the bodies of templates when they are defined. These checks are equivalent to ordinary type checking in the runtime world.
If you say so. I still don't see how. [I think you mean "analogous," not "equivalent," right? They're certainly not substitutable for one another.]
The problem is that concepts force all checks into this realm.
I don't know what you're getting at, but it might be related to this: the fact that concepts do early checking is problematic for many of our current TMP idioms, which rely on having checks only at instantiation time. For example, foo<mpl::_1> couldn't be written if foo were a constrained template, unless mpl::_1 just happened to satisfy its requirements. Anyway, yeah, that's a clash. But maybe in a "concepts" world we wouldn't want to do that in the first place. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
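For readers unfamiliar with the idiom Dave mentions, a sketch: foo below is a hypothetical metafunction, the rest is real MPL. Writing foo<_1> works today precisely because foo is unconstrained, so nothing is checked until each sequence element is substituted for the placeholder:

#include <boost/mpl/vector.hpp>
#include <boost/mpl/transform.hpp>
#include <boost/mpl/equal.hpp>
#include <boost/mpl/placeholders.hpp>
#include <boost/static_assert.hpp>

template<class T>
struct foo { typedef T* type; }; // hypothetical metafunction: T -> T*

using namespace boost::mpl::placeholders;

// foo<_1> is written without foo's requirements ever being checked against
// _1 itself; a constrained foo would reject this unless _1 modeled its concept.
typedef boost::mpl::transform<
    boost::mpl::vector<int, char>, foo<_1>
>::type ptrs;

BOOST_STATIC_ASSERT(( boost::mpl::equal<
    ptrs, boost::mpl::vector<int*, char*> >::type::value ));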

AMDG On 9/28/2010 10:53 AM, David Abrahams wrote:
At Tue, 28 Sep 2010 10:12:29 -0700, Steven Watanabe wrote:
The problem is that concepts force all checks into this realm.
I don't know what you're getting at, but it might be related to this: the fact that concepts do early checking is problematic for many of our current TMP idioms, which rely on having checks only at instantiation time. For example, foo<mpl::_1> couldn't be written if foo were a constrained template, unless mpl::_1 just happened to satisfy its requirements.
This isn't really what I mean. I personally avoid using placeholders anyway, as I've had bad experiences with unexpected instantiations.
Anyway, yeah, that's a clash. But maybe in a "concepts" world we wouldn't want to do that in the first place.
I'm obviously failing to communicate, so I'll try to come up with a more concrete example. In Christ, Steven Watanabe

AMDG On 9/28/2010 10:53 AM, David Abrahams wrote:
At Tue, 28 Sep 2010 10:12:29 -0700, Steven Watanabe wrote:
This would be true without C++0x concepts. Half of concepts is type checking the bodies of templates when they are defined. These checks are equivalent to ordinary type checking in the runtime world.

If you say so. I still don't see how. [I think you mean "analogous," not "equivalent," right? They're certainly not substitutable for one another.]

The problem is that concepts force all checks into this realm.

I don't know what you're getting at, but it might be related to this: the fact that concepts do early checking is problematic for many of our current TMP idioms, which rely on having checks only at instantiation time. For example, foo<mpl::_1> couldn't be written if foo were a constrained template, unless mpl::_1 just happened to satisfy its requirements.
Anyway, yeah, that's a clash. But maybe in a "concepts" world we wouldn't want to do that in the first place.
Okay. Let's try this: How do you define a concept for a fusion sequence? First of all, this is a reasonable thing to do, since there are many algorithms that can be defined for an arbitrary fusion sequence. As an example, let's use the fusion::less algorithm.

template<class S>
bool less(const S& lhs, const S& rhs);

This algorithm requires that S is a FusionSequence, and all the elements are LessThanComparable. Here's a first attempt (playing fast and loose with syntax because I don't remember the details):

concept FusionSequence<class S>
{
    // What goes here?
};

template<FusionSequence S>
requires LessThanComparable<???>
bool less(const S& lhs, const S& rhs);

Suppose that we use something like this:

concept FusionSequence<class S>
{
    FusionSequence next_type;
    next_type pop_front(const S&);
    S::front_type;
    front_type front(const S&);
    constexpr bool empty(const S&);
};

This almost works, but what about the empty sequence? pop_front for the empty sequence has to return itself, and front has to return a dummy value.

auto concept LessThanComparableFusionSequence<S> : FusionSequence<S>
{
    LessThanComparableFusionSequence<S::next_type>;
    LessThanComparable<S::front_type>;
};

Now, how do we write the less algorithm?

template<LessThanComparableFusionSequence S>
requires empty(S)
bool less(const S& lhs, const S& rhs)
{
    return false;
}

template<LessThanComparableFusionSequence S>
requires !empty(S)
bool less(const S& lhs, const S& rhs)
{
    if(front(lhs) < front(rhs)) return true;
    if(front(rhs) < front(lhs)) return false;
    return less(pop_front(lhs), pop_front(rhs));
}

Okay, I've convinced myself that something like this should actually work. What are the problems? Well, we've just suppressed errors when accessing past the end of the sequence. Currently in Fusion such errors would be caught when the template is instantiated. Concepts have just forced us to suppress the error entirely, just to make the code compile. This is going to be even worse for Eric's Proto-based code. For Fusion, we only have two cases, empty and non-empty. With Proto expression trees, we have one case for each kind of node (or more, if we need to match more complex patterns in a transform).

Another way to solve this would be something like

auto concept less_impl<EmptyFusionSequence> { ... };
auto concept less_impl<FusionSequence> { ... };

This has the unfortunate side-effect of exposing implementation details of function templates using less.

template<class S>
requires less_impl<S>
void f(const S& arg)
{
    S other = /*...*/;
    less(other, arg);
}

Users of f don't care that it uses less internally, but the fact of its use has to get propagated up to all calling templates. In Christ, Steven Watanabe
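For comparison, roughly the same algorithm in today's unconstrained C++, using Fusion's real interface; the empty/non-empty split happens via tag dispatch, and any missing operation would surface only at instantiation time, which is exactly the trade-off under discussion. The seq_less names are illustrative, and as in Steven's simplified version both sequences must have the same type:

#include <boost/fusion/include/front.hpp>
#include <boost/fusion/include/pop_front.hpp>
#include <boost/fusion/include/empty.hpp>
#include <boost/fusion/include/make_vector.hpp>
#include <boost/mpl/bool.hpp>

template<class S>
bool seq_less(const S& lhs, const S& rhs); // forward declaration for the recursion

template<class S>
bool seq_less_impl(const S&, const S&, boost::mpl::true_ /*empty*/)
{
    return false;
}

template<class S>
bool seq_less_impl(const S& lhs, const S& rhs, boost::mpl::false_ /*non-empty*/)
{
    if (boost::fusion::front(lhs) < boost::fusion::front(rhs)) return true;
    if (boost::fusion::front(rhs) < boost::fusion::front(lhs)) return false;
    return seq_less(boost::fusion::pop_front(lhs), boost::fusion::pop_front(rhs));
}

template<class S>
bool seq_less(const S& lhs, const S& rhs)
{
    // result_of::empty yields mpl::true_ or mpl::false_, selecting the overload
    return seq_less_impl(lhs, rhs,
        typename boost::fusion::result_of::empty<S>::type());
}

int main()
{
    return seq_less(boost::fusion::make_vector(1, 'a'),
                    boost::fusion::make_vector(2, 'a')) ? 0 : 1;
}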

At Tue, 28 Sep 2010 15:31:42 -0700, Steven Watanabe wrote:
How do you define a concept for a fusion sequence?
First of all, this is a reasonable thing to do, since there are many algorithms that can be defined for an arbitrary fusion sequence.
Fantastic example!
As an example, let's use the fusion::less algorithm.
That would be great, except that I can't find such an algorithm in http://www.boost.org/doc/libs/1_44_0/libs/fusion/doc/html/index.html, and how the rest of this conversation goes would depend on its semantics. For example, does it do a full lexicographic compare, or must the two sequences have the same length? If you mean what's in boost/fusion/sequence/comparison/less.hpp, it's the former.
template<class S> bool less(const S& lhs, const S& rhs);
This algorithm requires that S is a FusionSequence, and all the elements are LessThanComparable
Here's a first attempt (playing fast and loose with syntax because I don't remember the details):
-------------------------------------------
concept FusionSequence<class S>
{
    // What goes here?
};

template<FusionSequence S>
requires LessThanComparable<???>
bool less(const S& lhs, const S& rhs);
-------------------------------------------
Suppose that we use something like this:
concept FusionSequence<class S>
{
    FusionSequence next_type;
    next_type pop_front(const S&);
    S::front_type;
    front_type front(const S&);
    constexpr bool empty(const S&);
};
This almost works, but what about the empty sequence? pop_front for the empty sequence has to return itself, and front has to return a dummy value.
Close. What you need is a refined concept for NonEmptyFusionSequence that includes front_type and pop_front. Just riffing here, but this could work:

concept FusionSequence<class S>
{
    CompileTimeInt size_type = int_<0>;

    constexpr bool empty(const S&) { return size_type::value == 0; }
    constexpr std::size_t size(const S&) { return size_type::value; }
}

auto concept NonEmptyFusionSequence<class S> : FusionSequence<S>
{
    FusionSequence next_type;
    next_type pop_front(const S&);

    typename front_type;
    front_type front(const S&);

    CompileTimeInt size_type = int_<next_type::size_type::value + 1>;
};

#if THE_VERBOSE_NON_TRICKY_WAY

auto concept EmptyFusionSequence<class S> : FusionSequence<S>
    requires SameType<S::size_type, int_<0> > {};

template <EmptyFusionSequence S1, EmptyFusionSequence S2>
concept_map LessThanComparable<S1,S2>
{
    constexpr operator<(S1 const&, S2 const&) { return false; }
}

template <NonEmptyFusionSequence S1, EmptyFusionSequence S2>
concept_map LessThanComparable<S1,S2>
{
    constexpr operator<(S1 const&, S2 const&) { return false; }
}

template <EmptyFusionSequence S1, NonEmptyFusionSequence S2>
concept_map LessThanComparable<S1,S2>
{
    constexpr operator<(S1 const&, S2 const&) { return true; }
}

#else

// A shorter way. Because of refinement, this one will only get
// used when either S1 or S2 is empty.

template <FusionSequence S1, FusionSequence S2>
// a constraint you could add for clarity, but don't need
// requires SameType<int_<0>, int_<S1::size_type()*S2::size_type()> >
concept_map LessThanComparable<S1,S2>
{
    constexpr operator<(S1 const&, S2 const&)
    {
        return S1::size_type() < S2::size_type();
    }
}

#endif

template <NonEmptyFusionSequence S1, NonEmptyFusionSequence S2>
requires LessThanComparable<S1::front_type, S2::front_type>
      && LessThanComparable<S1::next_type, S2::next_type>
concept_map LessThanComparable<S1,S2>
{
    operator<(S1 const& x, S2 const& y)
    {
        return front(x) < front(y)
            || (!(front(y) < front(x)) && pop_front(x) < pop_front(y));
    }
}
Another way to solve this would be something like auto concept less_impl<EmptyFusionSequence> { ... }; auto concept less_impl<FusionSequence> { ... };
This has the unfortunate side-effect of exposing implementation details of function templates using less.
template<class S> requires less_impl<S> void f(const S& arg) { S other = /*...*/; less(other, arg); }
Users of f don't care that it uses less internally, but the fact of its use has to get propagated up to all calling templates.
I don't see where you get that, at all. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG Sorry for the delayed response. On 9/28/2010 6:04 PM, David Abrahams wrote:
At Tue, 28 Sep 2010 15:31:42 -0700, Steven Watanabe wrote:
How do you define a concept for a fusion sequence?
First of all, this is a reasonable thing to do, since there are many algorithms that can be defined for an arbitrary fusion sequence.

Fantastic example!
As an example, let's use the fusion::less algorithm.

That would be great, except that I can't find such an algorithm in http://www.boost.org/doc/libs/1_44_0/libs/fusion/doc/html/index.html, and how the rest of this conversation goes would depend on its semantics. For example, does it do a full lexicographic compare, or must the two sequences have the same length? If you mean what's in boost/fusion/sequence/comparison/less.hpp, it's the former.
I was actually using a simplified version that requires both sequences to be the same type. (And thus the same length). A full lexicographical comparison is fine though.
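As a point of reference, the difference between the two semantics can be made concrete in a short C++11 sketch (names are illustrative, with std::tuple standing in for a fusion sequence); both comparisons walk the common prefix and differ only at its end:

#include <cstddef>
#include <tuple>

template <std::size_t I, std::size_t End, class T1, class T2>
struct prefix_less_impl
{
    static bool call(T1 const& x, T2 const& y, bool full)
    {
        if (std::get<I>(x) < std::get<I>(y)) return true;
        if (std::get<I>(y) < std::get<I>(x)) return false;
        return prefix_less_impl<I + 1, End, T1, T2>::call(x, y, full);
    }
};

// End of the common prefix: a truncating compare (what fusion::less
// actually does) stops here and answers "not less"; a full lexicographic
// compare falls back to comparing lengths.
template <std::size_t End, class T1, class T2>
struct prefix_less_impl<End, End, T1, T2>
{
    static bool call(T1 const&, T2 const&, bool full)
    {
        return full && std::tuple_size<T1>::value < std::tuple_size<T2>::value;
    }
};

template <class T1, class T2>
bool prefix_less(T1 const& x, T2 const& y, bool full)
{
    static const std::size_t end =
        std::tuple_size<T1>::value < std::tuple_size<T2>::value
            ? std::tuple_size<T1>::value
            : std::tuple_size<T2>::value;
    return prefix_less_impl<0, end, T1, T2>::call(x, y, full);
}

// prefix_less(std::make_tuple(1, 2), std::make_tuple(1, 2, 3), false) == false
// prefix_less(std::make_tuple(1, 2), std::make_tuple(1, 2, 3), true)  == true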
Close. What you need is a refined concept for NonEmptyFusionSequence that includes front_type and pop_front. Just riffing here, but this could work:
concept FusionSequence<class S>
{
    CompileTimeInt size_type = int_<0>;

    constexpr bool empty(const S&) { return size_type::value == 0; }
    constexpr std::size_t size(const S&) { return size_type::value; }
}

auto concept NonEmptyFusionSequence<class S> : FusionSequence<S>
{
    FusionSequence next_type;
    next_type pop_front(const S&);

    typename front_type;
    front_type front(const S&);

    CompileTimeInt size_type = int_<next_type::size_type::value + 1>;
};
#if THE_VERBOSE_NON_TRICKY_WAY

auto concept EmptyFusionSequence<class S> : FusionSequence<S>
    requires SameType<S::size_type, int_<0> >
{};

template<EmptyFusionSequence S1, EmptyFusionSequence S2>
concept_map LessThanComparable<S1,S2>
{
    constexpr bool operator<(S1 const&, S2 const&) { return false; }
}

template<NonEmptyFusionSequence S1, EmptyFusionSequence S2>
concept_map LessThanComparable<S1,S2>
{
    constexpr bool operator<(S1 const&, S2 const&) { return false; }
}

template<EmptyFusionSequence S1, NonEmptyFusionSequence S2>
concept_map LessThanComparable<S1,S2>
{
    constexpr bool operator<(S1 const&, S2 const&) { return true; }
}

#else

// A shorter way. Because of refinement, this one will only get
// used when either S1 or S2 is empty.
template<FusionSequence S1, FusionSequence S2>
// a constraint you could add for clarity, but don't need
// requires SameType<int_<0>, int_<S1::size_type()*S2::size_type()> >
concept_map LessThanComparable<S1,S2>
{
    constexpr bool operator<(S1 const&, S2 const&)
    { return S1::size_type() < S2::size_type(); }
}
Isn't the extra constraint needed to make sure that trying to compare sequences whose elements are /not/ LessThanComparable won't silently misbehave?
#endif
template<NonEmptyFusionSequence S1, NonEmptyFusionSequence S2>
    requires LessThanComparable<S1::front_type, S2::front_type>
          && LessThanComparable<S1::next_type, S2::next_type>
concept_map LessThanComparable<S1,S2>
{
    bool operator<(S1 const& x, S2 const& y)
    { return front(x) < front(y) || pop_front(x) < pop_front(y); }
}
Okay. Here's why I don't like this. Suppose that I define a second algorithm with exactly the same requirements. As an example, I'll use what fusion::less actually does. (It doesn't quite do a lexicographical comparison. It truncates to the length of the shorter sequence.) This may be slightly contrived, but I'm sure you'll agree that there exist algorithms with identical requirements, neither one of which can be easily implemented in terms of the other.

Now, suppose that I want to write a function

template<FusionSequence S1, FusionSequence S2>
void foo(S1, S2);

which calls both of these algorithms. How can I write the requirements for foo?

a) requires LessThanComparable<S1, S2> && TruncatingLessThanComparable<S1, S2>

   Is every function template that calls foo also going to have to list both of these? This is silly and makes foo difficult to change, because the concept requirements include extraneous information. Remember that the two sets of types that model these concepts are the same.

b) Leave it with just FusionSequence. I hope this won't compile unless you have a concept_map LessThanComparable<FusionSequence, FusionSequence>, which is a bad idea, as I explained above.

c) Somehow define a concept that allows both LessThanComparable and TruncatingLessThanComparable to be deduced. This would be ideal, but I don't have the least idea how to accomplish it.

d) Something else I haven't thought of?

In Christ, Steven Watanabe

At Wed, 13 Oct 2010 10:09:39 -0700, Steven Watanabe wrote:
AMDG
Sorry for the delayed response.
Back atcha! Been looking forward to getting back to this thread.
As an example, let's use the fusion::less algorithm.
I was actually using a simplified version that requires both sequences to be the same type. (And thus the same length). A full lexicographical comparison is fine though.
Close. What you need is a refined concept for NonEmptyFusionSequence that includes front_type and pop_front. Just riffing here, but this could work:
concept FusionSequence<class S>
{
    CompileTimeInt size_type = int_<0>;

    constexpr bool empty(const S&) { return size_type::value == 0; }
    constexpr std::size_t size(const S&) { return size_type::value; }
}

auto concept NonEmptyFusionSequence<class S> : FusionSequence<S>
{
    FusionSequence next_type;
    next_type pop_front(const S&);

    typename front_type;
    front_type front(const S&);

    CompileTimeInt size_type = int_<next_type::size_type::value + 1>;
};

// A shorter way. Because of refinement, this one will only get
// used when either S1 or S2 is empty.
template<FusionSequence S1, FusionSequence S2>
// a constraint you could add for clarity, but don't need
// requires SameType<int_<0>, int_<S1::size_type()*S2::size_type()> >
concept_map LessThanComparable<S1,S2>
{
    constexpr bool operator<(S1 const&, S2 const&)
    { return S1::size_type() < S2::size_type(); }
}
Isn't the extra constraint needed to make sure that trying to compare sequences whose elements are /not/ LessThanComparable won't silently misbehave?
I see what you mean; I overlooked that issue. Technically, I guess the correct answer depends on your desired semantics; as I coded it, it would mean

    make_tuple(1, 2) < make_tuple("a", "b", "c")

Is that misbehavior? Well, I guess it's not what tuple is currently specified to do, so maybe it is.
template<NonEmptyFusionSequence S1, NonEmptyFusionSequence S2>
    requires LessThanComparable<S1::front_type, S2::front_type>
          && LessThanComparable<S1::next_type, S2::next_type>
concept_map LessThanComparable<S1,S2>
{
    bool operator<(S1 const& x, S2 const& y)
    { return front(x) < front(y) || pop_front(x) < pop_front(y); }
}
Okay. Here's why I don't like this.
Suppose that I define a second algorithm with exactly the same requirements.
So the first algorithm is fusion::less. If you accept the above concept_maps, fusion::less would just look like:

template <FusionSequence S1, FusionSequence S2>
    requires LessThanComparable<S1,S2>
bool less(S1 const& x, S2 const& y)
{
    return x < y;
}
As an example, I'll use what fusion::less actually does. (It doesn't quite do a lexicographical comparison. It truncates to the length of the shorter sequence.) This may be slightly contrived, but I'm sure you'll agree that there exist algorithms with identical requirements, neither one of which can be easily implemented in terms of the other.
Granted. Okay, so that would be

template <FusionSequence S1, FusionSequence S2>
    requires LessThanComparable<S1,S2>
bool what_fusion_less_actually_does(S1 const& x, S2 const& y)
{
    ...
}

Right? Same requirements.
Now, suppose that I want to write a function template<FusionSequence S1, FusionSequence S2> void foo(S1, S2); which calls both of these algorithms. How can I write the requirements for foo?
template <FusionSequence S1, FusionSequence S2>
    requires LessThanComparable<S1,S2>
bool foo(S1 const& x, S2 const& y)
{
    return less(x,y) && what_fusion_less_actually_does(x,y);
}
a) requires LessThanComparable<S1, S2> && TruncatingLessThanComparable<S1, S2>

   Is every function template that calls foo also going to have to list both of these? This is silly and makes foo difficult to change, because the concept requirements include extraneous information. Remember that the two sets of types that model these concepts are the same.

b) Leave it with just FusionSequence. I hope this won't compile unless you have a concept_map LessThanComparable<FusionSequence, FusionSequence>, which is a bad idea, as I explained above.

c) Somehow define a concept that allows both LessThanComparable and TruncatingLessThanComparable to be deduced. This would be ideal, but I don't have the least idea how to accomplish it.

d) Something else I haven't thought of?
I don't understand why you'd go down any of those roads. Am I missing something? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG

On 10/31/2010 7:48 PM, David Abrahams wrote:
At Wed, 13 Oct 2010 10:09:39 -0700, Steven Watanabe wrote:
Sorry for the delayed response.

Back atcha! Been looking forward to getting back to this thread.
Me too, although I'm starting to feel like we're going in circles.
// A shorter way. Because of refinement, this one will only get
// used when either S1 or S2 is empty.
template<FusionSequence S1, FusionSequence S2>
// a constraint you could add for clarity, but don't need
// requires SameType<int_<0>, int_<S1::size_type()*S2::size_type()> >
concept_map LessThanComparable<S1,S2>
{
    constexpr bool operator<(S1 const&, S2 const&)
    { return S1::size_type() < S2::size_type(); }
}

Isn't the extra constraint needed to make sure that trying to compare sequences whose elements are /not/ LessThanComparable won't silently misbehave?

I see what you mean; I overlooked that issue. Technically, I guess the correct answer depends on your desired semantics; as I coded it, it would mean

    make_tuple(1, 2) < make_tuple("a", "b", "c")

Is that misbehavior? Well, I guess it's not what tuple is currently specified to do, so maybe it is.
I would find this behavior very surprising. Anyway, this isn't really related to the main discussion, so I'll leave it at that.
template<NonEmptyFusionSequence S1, NonEmptyFusionSequence S2>
    requires LessThanComparable<S1::front_type, S2::front_type>
          && LessThanComparable<S1::next_type, S2::next_type>
concept_map LessThanComparable<S1,S2>
{
    bool operator<(S1 const& x, S2 const& y)
    { return front(x) < front(y) || pop_front(x) < pop_front(y); }
}

Okay. Here's why I don't like this.

Suppose that I define a second algorithm with exactly the same requirements.

So the first algorithm is fusion::less. If you accept the above concept_maps, fusion::less would just look like:

template<FusionSequence S1, FusionSequence S2>
    requires LessThanComparable<S1,S2>
bool less(S1 const& x, S2 const& y)
{
    return x < y;
}
As an example, I'll use what fusion::less actually does. (It doesn't quite do a lexicographical comparison. It truncates to the length of the shorter sequence.) This may be slightly contrived, but I'm sure you'll agree that there exist algorithms with identical requirements, neither one of which can be easily implemented in terms of the other. Granted.
Okay, so that would be
template<FusionSequence S1, FusionSequence S2>
    requires LessThanComparable<S1,S2>
bool what_fusion_less_actually_does(S1 const& x, S2 const& y)
{
    ...
}
Right? Same requirements.
That "..." is exactly what I'm confused about. The theoretical requirements are the same. What I don't understand is how to capture them in a concept in a way that allows both algorithms to be implemented. The concept_map for LessThanCompareable that you show above is tightly coupled to the implementation of the algorithm. In order to implement what_fusion_less_actually_does, we need to iterate over the sequence and compare the elements. The requirements only tell us that the two template parameters are FusionSequences and that we can compare the sequences /as a whole/. I don't understand how the concept_maps, as defined above, allow you to deduce that the elements are LessThanComparable. How can the compiler know that the specialization for NonEmptyFusionSequence is going to be chosen? Even if you overload what_fusion_less_actually_does to handle NonEmptyFusionSequences, the concept_map that we care about has other constaints. I don't see how the compiler can deduce from the fact that some of the constraints for a particular concept_map are satisfied, that the others must be as well. Of course, you and I know that there is no other concept_map available, but how can the compiler know that? Any part of this problem can be solved, but I don't know how to solve all of it at once. So, I'll try to restate exactly what's needed. I think this should cover all the constraints that I've thought of so far. * A concept LessThanComparable for fusion sequences * An algorithm less, which does a full lexicographical comparison of two fusion sequences. * An algorithm what_fusion_less_actually_does which does a lexicographical comparison of two fusion sequences truncated to the length of the shorter sequence. * Both algorithms must have the signature template<FusionSequence S1, FusionSequence S2> requires LessThanComparable<S1, S2> bool f(const S1&, const S2&); * The solution must be extensible to more algorithms with the same requirements. Changing the LessThanComparable concept to accommodate a new algorithm is not acceptable. * Corollary: The solution must not depend on special characteristics of the problem. (e.g. having the concept produce a 0,1,-1 by comparing the two sequences up to the length of the shorter sequence is not acceptable.) * LessThanComparable for the sequences does not have to be the same concept as LessThanComparable for the elements. Splitting it into two concepts is permissible, since the two uses have different purposes in this context. * "Exercises for the reader" are strictly prohibited. I don't think I'm really going to understand this until I have it ironed out to the point where it could be run through a (hypothetical) compiler that supports concepts.
Now, suppose that I want to write a function

template<FusionSequence S1, FusionSequence S2>
void foo(S1, S2);

which calls both of these algorithms. How can I write the requirements for foo?

template<FusionSequence S1, FusionSequence S2>
    requires LessThanComparable<S1,S2>
bool foo(S1 const& x, S2 const& y)
{
    return less(x,y) && what_fusion_less_actually_does(x,y);
}
This would be wonderful, if there were a way to define LessThanComparable that makes this possible. I don't know how to do it.
a) requires LessThanComparable<S1, S2> && TruncatingLessThanComparable<S1, S2>

   Is every function template that calls foo also going to have to list both of these? This is silly and makes foo difficult to change, because the concept requirements include extraneous information. Remember that the two sets of types that model these concepts are the same.

b) Leave it with just FusionSequence. I hope this won't compile unless you have a concept_map LessThanComparable<FusionSequence, FusionSequence>, which is a bad idea, as I explained above.

c) Somehow define a concept that allows both LessThanComparable and TruncatingLessThanComparable to be deduced. This would be ideal, but I don't have the least idea how to accomplish it.

d) Something else I haven't thought of?

I don't understand why you'd go down any of those roads. Am I missing something?
I think we just haven't yet reached common ground on what the problem I'm having is. In Christ, Steven Watanabe

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of David Abrahams
Sent: Sunday, September 26, 2010 9:49 PM
To: boost@lists.boost.org
Subject: Re: [boost] [guidelines] why template errors suck
On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
On 9/25/2010 1:24 PM, David Abrahams wrote:
On Sep 24, 2010, at 11:51 PM, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
Haha! No, not at all. Let's rephrase the problem a bit. If we still had C++0x concepts, what would the concept SpiritParser look like, such that we could define spirit::qi::rule::operator= such that it required its RHS to satisfy the SpiritParser concept?
That's easy to answer; just look at the operations that operator= et. al expect of such a type. Wouldn't SpiritParser just be some simple refinement of Callable?
At Sun, 26 Sep 2010 22:31:54 -0400, Eric Niebler wrote:

No. rule::operator= expects SpiritParser to be a strongly-typed tree.
Meaning? Spell it out and you have your requirements.
Meaning:
+ If the topmost node has tag-type "plus", there must be exactly one child node that models SpiritParser
+ If the topmost node has tag-type "shift_right", there must be exactly two child nodes that model SpiritParser
+ If the topmost node has tag-type "subscript", there must be a left node that models SpiritParser and a right node that models PhoenixLambda
+ ... etc, etc, etc, ad nauseum
No offense, but did you really answer the question I asked? Does operator= actually use those requirements? On the face of it, it just seems a lot more likely that it simply requires its RHS to also be a SpiritParser.
How do I express that as a concept?
Summoning up my dim memory of proposed concept notation (so there are guaranteed nits to pick), that might translate into something like:
concept SpiritNode<typename Node, typename Tag> { }

template <class Node>
concept_map SpiritNode<Node, plus>
{
    requires SameType<Node::children, int_<1> >;
    typename child_t;
    requires SpiritParser<child_t>;
    child_t child;
}

template <class Node>
concept_map SpiritNode<Node, shift_right>
{
    requires SameType<Node::children, int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires SpiritParser<child2_t>;

    child1_t child1;
    child2_t child2;
}

template <class Node>
concept_map SpiritNode<Node, subscript>
{
    requires SameType<Node::children, int_<2> >;
    typename child1_t;
    requires SpiritParser<child1_t>;
    typename child2_t;
    requires PhoenixLambda<child2_t>;

    child1_t child1;
    child2_t child2;
}

concept RHSOfAssign<typename X>
{
    typename tag_type; // Every RHSOfAssign has an associated tag_type
    requires SpiritNode<X, tag_type>;
}
This isn't really how concepts were "meant to be used," though.
Disclaimer: it has been a couple of years since I worked with concepts.

concept ProtoNode<typename Node>
{
    ChildList children = Node::children;
}

// Assert the pseudoproto-unary_plus models a ProtoNode
template <ProtoNode Node>
concept_map ProtoNode< unary_plus<Node> > { }

// Assert the pseudoproto-binary_plus models a ProtoNode
template <ProtoNode LNode, ProtoNode RNode>
concept_map ProtoNode< binary_plus<LNode, RNode> > { }

// Assert the pseudoproto-binary_minus models a ProtoNode
template <ProtoNode LNode, ProtoNode RNode>
concept_map ProtoNode< binary_minus<LNode, RNode> > { }

struct foo { };
struct bar { };
struct baz { };

concept_map ProtoNode<foo> { typedef nil children; }
concept_map ProtoNode<bar> { typedef nil children; }

auto rule = foo() + (+ bar()) + baz();
// error: baz does not satisfy the concept "ProtoNode".

// Pseudo-spirit concept.
concept SpiritNode<typename Node>
{
    ChildList children = Node::children;
    ForwardIterator iterator = Node::iterator;

    iterator parse(iterator, iterator, Node&);
}

// Assert that all SpiritNodes model ProtoNodes
template <SpiritNode Node>
concept_map ProtoNode<Node> { }

concept ShortProtoTree<typename Node>
{
    // Checks that we have (in "hand-wavey" english):
    // a ProtoNode that contains two ProtoNode children.
    // My personal ConceptGCC build uses "," not "&&".
    requires ProtoNode<Node>,
             ProtoNode<get_child<_0, ProtoNode<Node>::children>::type>,
             ProtoNode<get_child<_1, ProtoNode<Node>::children>::type>;
}

concept IsBinaryPlusOperator<typename Node> { requires SpiritNode<Node>; }
concept IsUnaryPlusOperator<typename Node> { requires SpiritNode<Node>; }

concept SpiritPlusesShortProtoTree<typename Node>
{
    // Checks that the "ShortProtoTree" is the expression: (+X) + (+Y)
    requires SpiritNode<Node>,
             ShortProtoTree<Node>,
             IsBinaryPlusOperator<Node>,
             IsUnaryPlusOperator<get_child<_0, ProtoNode<Node>::children>::type>,
             IsUnaryPlusOperator<get_child<_1, ProtoNode<Node>::children>::type>;
}

I am waving my hands at the "get_child" functor; it is straightforward to describe using concepts.

Concepts are fairly tightly related to algebra declarations. Any algebra can be thought of as a tree; therefore, any tree should be checkable via concepts. Disjoint ("or") concept requirements are not needed in this situation since the concept mechanism will choose the most specialized concept that the model satisfies, not the least specialized. [This is not to say that disjoint concepts aren't necessary in some cases.]

-j.
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 9/27/2010 1:16 PM, Smith, Jacob N wrote:
Disclaimer: it has been a couple of years since I worked with concepts.
concept ProtoNode<typename Node> { ChildList children = Node::children; }
// Assert the pseudoproto-unary_plus models a ProtoNode template <ProtoNode Node> concept_map ProtoNode< unary_plus<Node> > { }
// Assert the pseudoproto-binary_plus models a ProtoNode template <ProtoNode LNode, ProtoNode RNode> concept_map ProtoNode< binary_plus<LNode, RNode> > { }
// Assert the pseudoproto-binary_minus models a ProtoNode template <ProtoNode LNode, ProtoNode RNode> concept_map ProtoNode< binary_minus<LNode, RNode> > { }
struct foo { }; struct bar { }; struct baz { };
concept_map ProtoNode<foo> { typedef nil children; } concept_map ProtoNode<bar> { typedef nil children; }
auto rule = foo() + (+ bar()) + baz();
// error: baz does not satisfy the concept "ProtoNode".
Yes, you can concept-check a tree that need not conform to a grammar.
// Pseudo-spirit concept.
concept SpiritNode<typename Node>
{
    ChildList children = Node::children;
    ForwardIterator iterator = Node::iterator;

    iterator parse(iterator, iterator, Node&);
}

// Assert that all SpiritNodes model ProtoNodes
template <SpiritNode Node>
concept_map ProtoNode<Node> { }

concept ShortProtoTree<typename Node>
{
    // Checks that we have (in "hand-wavey" english):
    // a ProtoNode that contains two ProtoNode children.
    // My personal ConceptGCC build uses "," not "&&".
    requires ProtoNode<Node>,
             ProtoNode<get_child<_0, ProtoNode<Node>::children>::type>,
             ProtoNode<get_child<_1, ProtoNode<Node>::children>::type>;
}

concept IsBinaryPlusOperator<typename Node> { requires SpiritNode<Node>; }
concept IsUnaryPlusOperator<typename Node> { requires SpiritNode<Node>; }

concept SpiritPlusesShortProtoTree<typename Node>
{
    // Checks that the "ShortProtoTree" is the expression: (+X) + (+Y)
    requires SpiritNode<Node>,
             ShortProtoTree<Node>,
             IsBinaryPlusOperator<Node>,
             IsUnaryPlusOperator<get_child<_0, ProtoNode<Node>::children>::type>,
             IsUnaryPlusOperator<get_child<_1, ProtoNode<Node>::children>::type>;
}
I am waving my hands at the "get_child" functor; it is straightforward to describe using concepts.
Concepts are fairly tightly related to algebra declarations. Any algebra can be thought of as a tree, therefore, any tree should be checkable via concepts. Disjoint ("or") concept requirements are not needed in this situation since the concept mechanism will choose the most specialized concept that the model satisfies, not the least specialized. [This is not to say that disjoint concepts aren't necessary in some cases.]
You have shown that you can concept-check the expression (+X) + (+Y). You did it by traversing the tree in the SpiritPlusesShortProtoTree concept and verifying each node explicitly. I don't see (yet) how you can generalize this. You appear to be writing a new concept for each valid tree type. But there are an infinite number of trees that are valid Spirit trees (and an infinite number that are not). I still don't see how concepts can tell the difference between the two sets. -- Eric Niebler BoostPro Computing http://www.boostpro.com

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Eric Niebler
Sent: Monday, September 27, 2010 11:06 AM
To: boost@lists.boost.org
Subject: Re: [boost] [guidelines] why template errors suck
On 9/27/2010 1:16 PM, Smith, Jacob N wrote:
Disclaimer: it has been a couple of years since I worked with concepts.
concept SpiritPlusesShortProtoTree<typename Node>
{
    // Checks that the "ShortProtoTree" is the expression: (+X) + (+Y)
    requires SpiritNode<Node>,
             ShortProtoTree<Node>,
             IsBinaryPlusOperator<Node>,
             IsUnaryPlusOperator<get_child<_0, ProtoNode<Node>::children>::type>,
             IsUnaryPlusOperator<get_child<_1, ProtoNode<Node>::children>::type>;
}
I am waving my hands at the "get_child" functor; it is straightforward to describe using concepts.
Concepts are fairly tightly related to algebra declarations. Any algebra can be thought of as a tree, therefore, any tree should be checkable via concepts. Disjoint ("or") concept requirements are not needed in this situation since the concept mechanism will choose the most specialized concept that the model satisfies, not the least specialized. [This is not to say that disjoint concepts aren't necessary in some cases.]
You have shown that you can concept-check the expression (+X) + (+Y). You did it by traversing the tree in the SpiritPlusesShortProtoTree concept and verifying each node explicitly. I don't see (yet) how you can generalize this.
Do you want a concept checking the set of legal constructions of Spirit parsers or a concept checking a particular instance of the Spirit parsing framework?

The first instance seems easier:

concept SpiritNode<typename Node>
{
    Iterator iterator = Node::iterator;
    typename children = Node::children; // opaque

    iterator parse(iterator, iterator, Node&);
}

concept SpiritTerminal<typename Node> : SpiritNode<Node> { }

concept SpiritUnaryNode<typename Node>
{
    requires SpiritNode<Node>;
    requires SpiritNode<get_child<_0, SpiritNode<Node>::children>::type>;
}

concept SpiritBinaryNode<typename Node>
{
    requires SpiritNode<Node>;
    requires SpiritNode<get_child<_0, SpiritNode<Node>::children>::type>,
             SpiritNode<get_child<_1, SpiritNode<Node>::children>::type>;
}

// etc. Although I think this covers all the legal overloadable operators.

I'm not sure how "interesting" this is, other than to force a mapping to known arities for spirit nodes.

In the second case, checking a particular instance of a grammar, let me take a short stab at it [this will assuredly have mistakes, but I don't think the fundamental solution is flawed modulo the "no concepts in C++" issue]:

S  ::= E0
E0 ::= E1 + E0 | E1
E1 ::= <symbol>

concept MyGrammarS<typename Node>
{
    requires SpiritNode<Node>;
    // Semantic qualification:
    //   It is not legal to define a direct modeling relationship
    //   to MyGrammarS. All such relationships must occur indirectly,
    //   i.e., through MyGrammarE0, MyGrammarE1, or MyGrammarE1pE0.
}

concept MyGrammarE0<typename Node>
{
    requires MyGrammarS<Node>;
}

concept MyGrammarE1<typename Node>
{
    requires SpiritNode<Node>;
}

concept MyGrammarE1pE0<typename Node>
{
    requires SpiritBinaryNode<Node>;
    requires MyGrammarE1<get_child<_0, SpiritNode<Node>::children>::type>,
             MyGrammarE0<get_child<_1, SpiritNode<Node>::children>::type>;
}

template <MyGrammarE0 Node> concept_map MyGrammarS<Node> { }
template <MyGrammarE1 Node> concept_map MyGrammarE0<Node> { }

// The following modeling relationship uses a made up syntax. I'm not sure
// anyone implemented a modeling relationship syntax for multisorted concepts
// without an underlying carrier.
template <MyGrammarS L, MyGrammarS R>
concept_map MyGrammarE0<MyGrammarE1pE0<L,R>> { }

Now when we define the modeling relationship from types to concepts, we do not allow a definition to the concept "MyGrammarS". For instance:

template <typename L, typename R> struct plus_node { };

template <MyGrammarS L, MyGrammarS R>
concept_map MyGrammarE1pE0<plus_node<L, R>> { }

template <int K> struct symbol_node { };

template <int K>
concept_map MyGrammarE1<symbol_node<K>> { }

The following type is legal:

typedef plus_node<plus_node<symbol_node<0>, symbol_node<1>>,
                  plus_node<symbol_node<1>, symbol_node<2>>> my_gram0; // legal

But this type is illegal:

typedef plus_node<int, double> my_gram1; // illegal

Even if we have types that otherwise satisfy Spirit nodes:

struct foo { };
concept_map SpiritNode<foo> { /* impl */ }

they can't be used within my grammar:

typedef plus_node<foo, foo> my_gram2; // illegal

This of course relies on the fact that I don't try to "hide" MyGrammar* nodes by having them model MyGrammarS and not the fully specialized variant of the concept hierarchy for my node's type.
You appear to be writing a new concept for each valid tree type. But there are an infinite number of trees that are valid Spirit trees (and an infinite number that are not). I still don't see how concepts can tell the difference between the two sets.
-- Eric Niebler BoostPro Computing http://www.boostpro.com
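For what it's worth, the grammar conformance Jacob sketches can be checked today without concepts; a minimal C++11 trait, collapsing his S/E0/E1 split into one recursive test over the same illustrative node types:

#include <type_traits>

template <class L, class R> struct plus_node { };
template <int K> struct symbol_node { };

// in_language<T>: is T a well-formed tree of the grammar?
template <class T>
struct in_language : std::false_type { };

template <int K>
struct in_language<symbol_node<K> > : std::true_type { };

template <class L, class R>
struct in_language<plus_node<L, R> >
    : std::integral_constant<bool,
          in_language<L>::value && in_language<R>::value> { };

struct foo { };  // might model SpiritNode elsewhere, but not the grammar

static_assert(in_language<
    plus_node<plus_node<symbol_node<0>, symbol_node<1> >,
              plus_node<symbol_node<1>, symbol_node<2> > > >::value,
    "my_gram0 is legal");
static_assert(!in_language<plus_node<int, double> >::value,
    "my_gram1 is illegal");
static_assert(!in_language<plus_node<foo, foo> >::value,
    "my_gram2 is illegal");

What the trait cannot do, of course, is participate in concept-based overloading or produce the kind of error messages concept_maps were meant to give.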

At Mon, 27 Sep 2010 16:55:58 -0700, Smith, Jacob N wrote:
concept MyGrammarE1pE0<typename Node>
{
    requires SpiritBinaryNode<Node>;
    requires MyGrammarE1<get_child<_0, SpiritNode<Node>::children>::type>,
             MyGrammarE0<get_child<_1, SpiritNode<Node>::children>::type>;
}
template <MyGrammarE0 Node> concept_map MyGrammarS<Node> { }
template <MyGrammarE1 Node> concept_map MyGrammarE0<Node> { }
// The following modeling relationship uses a made up syntax. I'm not sure
// anyone implemented a modeling relationship syntax for multisorted concepts
// without an underlying carrier.
template <MyGrammarS L, MyGrammarS R>
concept_map MyGrammarE0<MyGrammarE1pE0<L,R>> { }
Isn't MyGrammarE1pE0 a single-type ("single-sorted" if you're nasty) concept? I don't know what you mean by "carrier" above. Could you explain what you're trying to accomplish with this made-up syntax? Maybe I can help. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of David Abrahams
Sent: Monday, September 27, 2010 5:25 PM
To: boost@lists.boost.org
Subject: Re: [boost] [guidelines] why template errors suck
At Mon, 27 Sep 2010 16:55:58 -0700, Smith, Jacob N wrote:
concept MyGrammarE1pE0<typename Node>
{
    requires SpiritBinaryNode<Node>;
    requires MyGrammarE1<get_child<_0, SpiritNode<Node>::children>::type>,
             MyGrammarE0<get_child<_1, SpiritNode<Node>::children>::type>;
}
template <MyGrammarE0 Node> concept_map MyGrammarS<Node> { }
template <MyGrammarE1 Node> concept_map MyGrammarE0<Node> { }
// The following modeling relationship uses a made up syntax. I'm not sure
// anyone implemented a modeling relationship syntax for multisorted concepts
// without an underlying carrier.
template <MyGrammarS L, MyGrammarS R>
concept_map MyGrammarE0<MyGrammarE1pE0<L,R>> { }
Isn't MyGrammarE1pE0 a single-type ("single-sorted" if you're nasty) concept?
Just a mistake. I suppose

template <MyGrammarE1pE0 Node>
concept_map MyGrammarE0<Node> { }

is what I was trying to say.
I don't know what you mean by "carrier" above. Could you explain what you're trying to accomplish with this made-up syntax? Maybe I can help.
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Sep 26, 2010, at 7:31 PM, Eric Niebler wrote:
On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
If there are template constraints not practically expressible in concepts, then aren't concepts broken?
Not necessarily. Concepts are not about expressing arbitrary template constraints. They're about describing abstractions. One might say that if you can't figure out how to write a concept for a template parameter, you don't understand the abstraction it's modelling.
I understand the abstraction perfectly well. It's a strongly-typed tree that conforms to a grammar. I haven't a clue how to express that as a concept. It's not a reflection of my poor understanding of my abstraction. It reflects my poor understanding of concepts. I'm sincerely asking for help trying to understand this.
I don't really see how Proto parse trees are an abstraction. They feel very concrete to me. The abstraction here is heterogeneous trees of arbitrary branching level - Proto really doesn't enforce much beyond the ability to get the arity, children and value of a node. Spirit adds requirements on the nodes, e.g. that they have to be able to process input text (for parsers), and expose an attribute type. All of this can be modeled.

Proto grammars essentially extend the basic Proto tree node concept by setting requirements on the arity, value and children of the node. The main problem here is that grammars are a very succinct way of specifying these requirements, while the equivalent concept language would be quite verbose. In particular, alternative grammars - proto::or_ - are very hard to express, because concepts don't have a natural way of expressing "concept A is fulfilled if B or C are fulfilled, and it doesn't matter which".

The reason for this is that concepts are designed to place requirements on types that allow C++ code to be validated. You can't write a template function that makes use of the requirement "either binary plus or binary minus will compile, but either may fail", not without first determining which and then dispatching to helper functions. Which is exactly how Proto grammars would be implemented. And that's just a very inefficient way to do it.

Sebastian
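A minimal C++11 sketch of the determine-then-dispatch workaround Sebastian describes, using expression SFINAE; all names are illustrative:

#include <type_traits>
#include <utility>

// Detect whether binary plus compiles for T (the void_t trick, C++11-style).
template <class T, class = void>
struct has_plus : std::false_type { };

template <class T>
struct has_plus<T, decltype(void(std::declval<T>() + std::declval<T>()))>
    : std::true_type { };

// "Either plus or minus may compile": first determine which, then dispatch.
// A real implementation would detect minus symmetrically and diagnose the
// case where neither operator is available.
template <class T>
typename std::enable_if<has_plus<T>::value, T>::type
combine(T const& a, T const& b) { return a + b; }

template <class T>
typename std::enable_if<!has_plus<T>::value, T>::type
combine(T const& a, T const& b) { return a - b; }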

On 27/09/10 08:14, Sebastian Redl wrote:
I don't really see how Proto parse trees are an abstraction. They feel very concrete to me. The abstraction here is heterogeneous trees of arbitrary branching level - Proto really doesn't enforce much beyond the ability to get the arity, children and value of a node. Spirit adds requirements on the nodes, e.g. that they have to be able to process input text (for parsers), and expose an attribute type. All of this can be modeled.

Proto grammars essentially extend the basic Proto tree node concept by setting requirements on the arity, value and children of the node. The main problem here is that grammars are a very succinct way of specifying these requirements, while the equivalent concept language would be quite verbose. In particular, alternative grammars - proto::or_ - are very hard to express, because concepts don't have a natural way of expressing "concept A is fulfilled if B or C are fulfilled, and it doesn't matter which".

The reason for this is that concepts are designed to place requirements on types that allow C++ code to be validated. You can't write a template function that makes use of the requirement "either binary plus or binary minus will compile, but either may fail", not without first determining which and then dispatching to helper functions. Which is exactly how Proto grammars would be implemented. And that's just a very inefficient way to do it.
I quite agree. In my own poor layman use of concepts in another proto-based library, I never had to specify anything Proto-specific in my concepts. NT2's concepts of EvaluableExpression, ElementwiseEvaluableExpression, etc. just rely on outer interface availability. Heck, if I had to capture the whole grammar of Matlab in a concept I would have died already :/

On 9/27/2010 2:14 AM, Sebastian Redl wrote:
On Sep 26, 2010, at 7:31 PM, Eric Niebler wrote:
On 9/26/2010 9:44 PM, David Abrahams wrote:
On Sep 26, 2010, at 8:55 AM, Eric Niebler wrote:
If there are template constraints not practically expressible in concepts, then aren't concepts broken?
Not necessarily. Concepts are not about expressing arbitrary template constraints. They're about describing abstractions. One might say that if you can't figure out how to write a concept for a template parameter, you don't understand the abstraction it's modelling.
I understand the abstraction perfectly well. It's a strongly-typed tree that conforms to a grammar. I haven't a clue how to express that as a concept. It's not a reflection of my poor understanding of my abstraction. It reflects my poor understanding of concepts. I'm sincerely asking for help trying to understand this.
I don't really see how Proto parse trees are an abstraction. They feel very concrete to me.
The abstraction I was referring to was the one applicable to the RHS of Spirit's rule::operator=.
The abstraction here are heterogenous trees of arbitrary branching level - Proto really doesn't enforce much beyond the ability to get the arity, children and value of a node.
Right.
Spirit adds requirements on the nodes, e.g. that they have to be able to process input text (for parsers), and expose an attribute type.
Not quite. At the point at which it is assigned to a rule, the RHS is not yet a parser; i.e. it cannot process input text or expose attributes. It's just a dumb tree. It's the job of rule::operator= to turn it into a Spirit parser. To do that, it must walk the tree and manipulate it, and to do /that/, it places certain constraints on the compile-time structure of the tree: that it satisfies a grammar. It is THAT constraint that I'm trying to model.
All of this can be modeled. Proto grammars essentially extend the basic Proto tree node concept by setting requirements on the arity, value and children of the node. The main problem here is that grammars are a very succinct way of specifying these requirements, while the equivalent concept language would be quite verbose. In particular, alternative grammars - proto::or_ - are very hard to express, because concepts don't have a natural way of expressing "concept A is fulfilled if B or C are fulfilled, and it doesn't matter which".

The reason for this is that concepts are designed to place requirements on types that allow C++ code to be validated. You can't write a template function that makes use of the requirement "either binary plus or binary minus will compile, but either may fail", not without first determining which and then dispatching to helper functions. Which is exactly how Proto grammars would be implemented. And that's just a very inefficient way to do it.
This is the crux of it, thanks. I need OR-constraints. Concepts don't have 'em. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 25/09/10 04:51, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
The real challenge would be making it easy to write new concepts. Right now the usage requirements are simple to state, but if you wanted static_assert to fire, we'd need to use checks that don't cause errors, e.g. instead of:
same_type(*i++,v); // postincrement-dereference returning value_type
you'd have to write something like:
static_assert( has_postincrement<InputIterator>::value, "not postincrementable"); static_assert( has_postincrement_deref<InputIterator>::value, "not dereferenceable"); static_assert( is_same< postincrement_deref<InputIterator>::type, InputIterator::value_type >::value, "postincrement dereference doesn't return value_type");
Well, in all, I guess I like the rigor but it's not very compatible with the loosey-goosey C++03 way of specifying concepts <pines for real concept support>.
Blech. Can C++0x SFINAE on expressions make this any nicer? The idea is to be able to test expressions for well-formedness without causing a compile error. Then we'd need a way to chain these expressions to make lists of requirements. Maybe PP sequences consisting of valid expressions? Is there a compiler that implements this yet so we can play with it?
I decided to have a first stab at something along these lines (with g++ 4.4 in C++0x mode). I think my experiment shows promise. I tweaked a cut-down version of the Boost.ConceptCheck InputIterator concept thus:

template<typename X>
class InputIterator
{
private:
    typedef std::iterator_traits<X> t;
public:
    typedef typename t::value_type value_type;

    ~InputIterator()
    {
        verify::check(verify::same_type(*i++, v));
    }
private:
    verify::example<X> i;
    verify::example<value_type> v;
};

Here verify::same_type, i, and v are all Proto terminals, so the expression verify::same_type(*i++, v) is a Proto expression tree. The verify::check function traverses this tree, pulls out the first problematic thing, and causes a pertinent static assertion to fail about it. So, for example, using the following classes:

struct A
{
    typedef std::random_access_iterator_tag iterator_category;
    typedef int value_type;
    typedef ptrdiff_t difference_type;
    typedef int* pointer;
    typedef int& reference;
};

struct B : A { B operator++(int); };
struct C : A { C operator++(int); double const& operator*(); };
struct D : A { D operator++(int); int const& operator*(); };

and checking each of them with a statement like:

VERIFY_MODELS_CONCEPT(InputIterator<A>);

yields the following compiler errors, which I think are pretty succinct and pertinent.

For A:

../../verify/ensure_postincrementable.hpp: In destructor ‘verify::not_postincrementable_problem<ShouldBePostIncrementable>::~not_postincrementable_problem() [with ShouldBePostIncrementable = A]’:
inputiterator.hpp:17: instantiated from ‘InputIterator<X>::~InputIterator() [with X = A]’
../../verify/models_concept.hpp:13: instantiated from ‘void verify::models_concept() [with Concept = InputIterator<A>]’
check-a.cpp:8: instantiated from here
../../verify/ensure_postincrementable.hpp:13: error: static assertion failed: "given type should be postincrementable but is not"

For B:

../../verify/ensure_dereferencable.hpp: In destructor ‘verify::not_dereferencable_problem<ShouldBeDereferencable>::~not_dereferencable_problem() [with ShouldBeDereferencable = B]’:
inputiterator.hpp:17: instantiated from ‘InputIterator<X>::~InputIterator() [with X = B]’
../../verify/models_concept.hpp:13: instantiated from ‘void verify::models_concept() [with Concept = InputIterator<B>]’
check-b.cpp:8: instantiated from here
../../verify/ensure_dereferencable.hpp:9: error: static assertion failed: "given type should be dereferencable but is not"

For C:

../../verify/ensure_same_type.hpp: In destructor ‘verify::different_type_problem<ShouldBeSame1, ShouldBeSame2>::~different_type_problem() [with ShouldBeSame1 = double, ShouldBeSame2 = int]’:
inputiterator.hpp:17: instantiated from ‘InputIterator<X>::~InputIterator() [with X = C]’
../../verify/models_concept.hpp:13: instantiated from ‘void verify::models_concept() [with Concept = InputIterator<C>]’
check-c.cpp:8: instantiated from here
../../verify/ensure_same_type.hpp:9: error: static assertion failed: "given types should be the same but are not"

For D there is no error, because everything is valid.

I think the first two errors here are as good as what David was suggesting above. The last is not so good, because it lacks the context to see where these two types (int and double) arose from. This particular example is probably OK, given that it points you to the relevant line in the concept definition, but it's something to watch out for as things become more complicated.
Of course, this particular implementation is not obviously better (and is possibly worse) than the existing Boost.ConceptCheck techniques. But I just wanted to prove the concept; I think it should be possible to define concepts using a Proto DSEL (and no doubt an unhealthy dose of preprocessor nonsense) and to use concepts so defined both to assert that types model concepts (giving helpful compile errors when they don't) and to test whether types model concepts for function dispatch, etc. without compiler errors. Probably it would be better to try to mimic the proposed standard syntax in this DSEL rather than Boost.ConceptCheck's syntax (although it would be nice to ease migration from Boost.ConceptCheck too). Obviously there are some tricky aspects (two that spring to mind are how to handle named associated types or member functions and getting const correctness right) but I think it's doable. Does this sound like a worthwhile thing to attempt? I'd be willing to give it a try. John Bytheway
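For comparison, one of those checks can be written directly against expression SFINAE, without the Proto layer. A C++11 sketch borrowing the trait names David floated earlier in the thread (these are not a real library API):

#include <type_traits>
#include <utility>

template <class T, class = void>
struct has_postincrement : std::false_type { };

template <class T>
struct has_postincrement<T, decltype(void(std::declval<T&>()++))>
    : std::true_type { };

template <class T, class = void>
struct has_postincrement_deref : std::false_type { };

template <class T>
struct has_postincrement_deref<T, decltype(void(*std::declval<T&>()++))>
    : std::true_type { };

template <class X>
void verify_input_iterator()
{
    static_assert(has_postincrement<X>::value,
        "given type should be postincrementable but is not");
    static_assert(has_postincrement_deref<X>::value,
        "given type should be dereferenceable but is not");
}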

On 9/26/2010 2:24 PM, John Bytheway wrote:
On 25/09/10 04:51, Eric Niebler wrote:
On 9/24/2010 9:37 PM, David Abrahams wrote:
The real challenge would be making it easy to write new concepts. Right now the usage requirements are simple to state, but if you wanted static_assert to fire, we'd need to use checks that don't cause errors, e.g. instead of:
same_type(*i++,v); // postincrement-dereference returning value_type
you'd have to write something like:
static_assert( has_postincrement<InputIterator>::value, "not postincrementable"); static_assert( has_postincrement_deref<InputIterator>::value, "not dereferenceable"); static_assert( is_same< postincrement_deref<InputIterator>::type, InputIterator::value_type >::value, "postincrement dereference doesn't return value_type");
Well, in all, I guess I like the rigor but it's not very compatible with the loosey-goosey C++03 way of specifying concepts <pines for real concept support>.
Blech. Can C++0x SFINAE on expressions make this any nicer? The idea is to be able to test expressions for well-formedness without causing a compile error. Then we'd need a way to chain these expressions to make lists of requirements. Maybe PP sequences consisting of valid expressions? Is there a compiler that implements this yet so we can play with it?
I decided to have a first stab at something along these lines (with g++ 4.4 in C++0x mode). I think my experiment shows promise. I tweaked a cut-down version of the Boost.ConceptCheck InputIterator concept thus:
template<typename X>
class InputIterator
{
private:
    typedef std::iterator_traits<X> t;
public:
    typedef typename t::value_type value_type;

    ~InputIterator()
    {
        verify::check(verify::same_type(*i++, v));
    }
private:
    verify::example<X> i;
    verify::example<value_type> v;
};
Here verify::same_type, i, and v are all Proto terminals, so the expression verify::same_type(*i++, v) is a Proto expression tree.
Wow, I never would have thought to use Proto for this!
the verify::check function traverses this tree and pulls out the first problematic thing and causes a pertinent static_assertion to fail about it. So, for example, using the following classes:
struct A
{
    typedef std::random_access_iterator_tag iterator_category;
    typedef int value_type;
    typedef ptrdiff_t difference_type;
    typedef int* pointer;
    typedef int& reference;
};
struct B : A { B operator++(int); };
struct C : A { C operator++(int); double const& operator*(); };
struct D : A { D operator++(int); int const& operator*(); };
and checking each of them with a statement like:
VERIFY_MODELS_CONCEPT(InputIterator<A>);
yields the following compiler errors, which I think are pretty succinct and pertinent:
<snip>
Does this sound like a worthwhile thing to attempt? I'd be willing to give it a try.
John, this sounds terrific. I don't think I grok in full what you're doing but it sounds worthwhile. One thing to consider is whether you can use SFINAE on expressions to validate concepts without causing hard errors. Then we might be able to get some approximation of concept-based overloading. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 27/09/10 16:05, Eric Niebler wrote:
On 9/26/2010 2:24 PM, John Bytheway wrote:
I decided to have a first stab at something along these lines (with g++ 4.4 in C++0x mode). I think my experiment shows promise. I tweaked a cut-down version of the Boost.ConceptCheck InputIterator concept thus:
template<typename X>
class InputIterator
{
private:
    typedef std::iterator_traits<X> t;
public:
    typedef typename t::value_type value_type;

    ~InputIterator()
    {
        verify::check(verify::same_type(*i++, v));
    }
private:
    verify::example<X> i;
    verify::example<value_type> v;
};
Here verify::same_type, i, and v are all Proto terminals, so the expression verify::same_type(*i++, v) is a Proto expression tree.
Wow, I never would have thought to use Proto for this!
I'm glad I could stretch its application domain beyond even your imagination :). <snip>
Does this sound like a worthwhile thing to attempt? I'd be willing to give it a try.
John, this sounds terrific. I don't think I grok in full what you're doing but it sounds worthwhile.
Here's an example of the sort of thing I'm imagining (this is just the first plausible syntax that springs to mind):

// Some nasty preprocessor stuff to allow use of a traits class
TRAITS_CLASS(
    (std)(iterator_traits),
    (iterator_category)(value_type)...)

// Define the concept
class InputIterator :
    concept<InputIterator, refines<Assignable, EqualityComparable>>
{
private:
    // Make some sample values to define requirements with
    static model i;
    static std::iterator_traits<model>::value_type v;
public:
    // State the requirements (which will be a big messy Proto type)
    static auto requirements =
        provide_traits<std::iterator_traits<model>>(),
        same_type(*i, v),
        same_type(i++, i),
        meaningful(++i),
        ...;
};

// Use the concept for concept-based overloading
template<typename T>
void my_algorithm(T a, T b,
    boost::enable_if<models_concept<T, InputIterator>>::type* = 0)
{
    // Or use it inside the algorithm to verify that
    // requirements are satisfied (compile error otherwise)
    assert_models_concept<T, InputIterator>();
    ...
}

// Or make an archetype of the concept and use it to verify
// that the algorithm really does only use the required syntax
archetype<InputIterator>::type t;
ASSERT_COMPILES(my_algorithm(t, t));

Does that make it any clearer?
One thing to consider is whether you can use SFINAE on expressions to validate concepts without causing hard errors. Then we might be able to get some approximation of concept-based overloading.
That's roughly what I'm already doing, and exactly what I was proposing to do. In order to get those nice-ish error messages I had to avoid the analogous hard errors deep in the guts of the code. The exact same techniques would allow concept-based overloading.

The last thing I mention above (using the concept to make an archetype) is something that has more recently occurred to me. I think it is in principle possible, although I suspect it could get very messy in practice. Nevertheless, it would be very nice to get all three aspects from a single definition, because that fills the three primary design goals of Concepts as I see them:

1. Early detection of syntax issues for those writing generic algorithms (archetypes provide this).
2. Early detection of syntax issues for those using generic algorithms (the ability to assert that types model concepts provides this).
3. Concept-based overloading.

Of course, to do number 3 properly we need to cope with refinements properly; using overloading as I've written above would lead to ambiguities if types match multiple concepts. You can't do this:

template<typename T>
void my_algorithm(T a,
    boost::enable_if<models_concept<T, InputIterator>>::type* = 0);

template<typename T>
void my_algorithm(T a,
    boost::enable_if<models_concept<T, ForwardIterator>>::type* = 0);

because if T is a ForwardIterator then it will match both overloads. However, at the cost of one indirection we can take advantage of C++'s existing most-derived-class overloading by constructing a hierarchy of tag types matching the hierarchy of concept refinements, thus:

template<typename T>
void my_algorithm_impl(T a, tag_of<InputIterator>::type const&);

template<typename T>
void my_algorithm_impl(T a, tag_of<ForwardIterator>::type const&);

template<typename T>
void my_algorithm(T a)
{
    my_algorithm_impl(a,
        most_refined_tag_modelled<T, ForwardIterator>::type());
}

which again I think is possible from constructions such as those above.

John Bytheway
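The standard library plays exactly this trick with iterator_category, whose tag inheritance mirrors a refinement hierarchy; a minimal sketch (illustrative algorithm names):

#include <iterator>

template <class It>
void my_algorithm_impl(It first, It last, std::input_iterator_tag)
{
    // single-pass fallback
}

template <class It>
void my_algorithm_impl(It first, It last, std::forward_iterator_tag)
{
    // multi-pass version; chosen for any tag derived from forward
}

template <class It>
void my_algorithm(It first, It last)
{
    // Overload resolution picks the most derived matching tag,
    // with none of the enable_if ambiguity described above.
    my_algorithm_impl(first, last,
        typename std::iterator_traits<It>::iterator_category());
}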

Eric Niebler wrote:
On 9/24/2010 8:22 PM, Robert Ramey wrote:
Eric Niebler wrote:
This is borderline self-promotion and I fretted posting this. But we a community of advanced C++ library developers and I'd really like feedback about these issues. And they really do involve Boost library development. If we could reach agreement about these techniques, we might want to integrate them into Boost's coding guidelines.
http://www.reddit.com/r/programming/comments/diddg/expressive_c_why_template...
Comments?
would all this boil down to the following guidelines?
a) All template libraries should use boost::concepts?
Certainly Concept_check or something like it has a big role to play in this. But I wouldn't know how to use Concept_check to verify that an expression template matches a grammar, for instance. It's not the whole story. Though perhaps I don't know enough about the concept_check library to see how it can apply to proto expressions and grammars. Suggestions?
Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
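A sketch of the contrast being drawn: BOOST_CONCEPT_ASSERT is the real Concept Check macro, while is_input_iterator below is a hypothetical trait standing in for what a C++0x makeover might expose:

#include <boost/concept_check.hpp>
#include <iterator>
#include <type_traits>

// Hypothetical trait a reworked Concept Check might provide.
template <class It>
struct is_input_iterator
    : std::is_convertible<
          typename std::iterator_traits<It>::iterator_category,
          std::input_iterator_tag> { };

template <class It>
void algo(It first, It last)
{
    // Today: a failure surfaces somewhere inside the checking machinery.
    BOOST_CONCEPT_ASSERT((boost::InputIterator<It>));

    // With C++0x static_assert: the library author picks the message.
    static_assert(is_input_iterator<It>::value,
        "algo requires a model of InputIterator");
}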
Hmm - so the above guideline would be sufficient if the concept_check library were subjected to a makeover? Your article makes a point that libraries should check template parameters. It's my understanding that this is the intention of boost concept checks. I'm not sure where "verify that an expression template matches a grammar" fits in here.
b) All template libraries should try to factor implementations so that compile errors are not propagated?
Yes. And: detailed comments should be left near any static assertions or concept checks, since that's where the unlucky few will end up.
I've been doing this all the time. It's more than a few that end up there.

Just to sum up, would the following be sufficient to implement your idea?

a) makeover of concept_check
b) wider usage of concept_check
c) refactoring to truncate listings of compiler errors

Obviously I'm trying to boil this down to something that can be verified in an objective way.

Robert Ramey

On 9/24/2010 9:58 PM, Robert Ramey wrote:
Eric Niebler wrote:
On 9/24/2010 8:22 PM, Robert Ramey wrote:
would all this boil down to the following guidelines?
a) All template libraries should use boost::concepts?
Certainly Concept_check or something like it has a big role to play in this. But I wouldn't know how to use Concept_check to verify that an expression template matches a grammar, for instance. It's not the whole story. Though perhaps I don't know enough about the concept_check library to see how it can apply to proto expressions and grammars. Suggestions?
Also, I would prefer C++0x static_assert to the concept_check macros because the error messages can be much nicer. I think the answer is that the concept_check library badly needs a C++0x makeover.
Hmm - so the above guideline would be sufficient if the concept_check library were subjected to a makeover?
Possibly. Likely, even. I just don't know enough to say.
Your article makes a point that libraries should check template parameters. It's my understanding that this is the intention of boost concept checks. I'm not sure where "verify that an expression template matches a grammar" fits in here.
Expressions are interesting in that they have such rich compile-time structure. Think of XML documents. Grammars are like XML Schemas; they describe which XML documents are valid. I just don't know how to map the schema/data model onto the concept/type model. What is the concept satisfied by a Spirit rule? Or a Phoenix lambda expression? I've never seen a concept like that. I guess what I'm wondering is whether concepts are really powerful enough to describe types like that.
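Worth noting: half of this already exists inside Proto itself, whose matches<> metafunction tests an expression type against a grammar at compile time; what it cannot do is surface that test as a concept. A small sketch using C++11's decltype and static_assert (the grammar is illustrative):

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

// Grammar: any +-combination of int terminals. Note the self-reference,
// which is how Proto grammars express recursion.
struct PlusGrammar
  : proto::or_<
        proto::terminal<int>
      , proto::plus<PlusGrammar, PlusGrammar>
    >
{};

proto::terminal<int>::type const i_ = {0};

static_assert(
    proto::matches<decltype(i_ + (i_ + i_)), PlusGrammar>::value,
    "the expression conforms to the grammar");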
b) All template libraries should try to factor implemenations so that compile errors are not propagated?
Yes. And: detailed comments should be left near any static assertions or concept checks, since that's where the unlucky few will end up.
I've been doing this all the time. It's more than a few that end up there.
Just to sum up, would the following be sufficient to implement your idea?
a) makeover of concept_check b) wider usage of concept_check c) refactoring to truncate listings of compiler errors
Obviously I'm trying to boil this down to something that can be verified in an objective way.
With the above caveat about whether concepts are powerful enough, yes. -- Eric Niebler BoostPro Computing http://www.boostpro.com

Eric Niebler wrote:
Certainly Concept_check or something like it has a big role to play in this. But I wouldn't know how to use Concept_check to verify that an expression template matches a grammar, for instance. It's not the whole story. Though perhaps I don't know enough about the concept_check library to see how it can apply to proto expressions and grammars. Suggestions?
The design of ConceptCheck doesn't allow testing at compile time whether a set of types rightly models a concept. This is because concepts within ConceptCheck are defined as arbitrary statements, rather than expressions. We should redesign ConceptCheck so that it tests expressions; then we could use the new extended SFINAE to test things. This however means we'd basically have to throw away and rewrite all the concept work that has already been done.

On 26/09/10 14:57, Mathias Gaunard wrote:
The design of ConceptCheck doesn't allow to test at compile-time whether a set of types rightly model a concept.
This is because concepts within ConceptCheck are defined as arbitrary statements, rather than expressions.
We should redesign ConceptCheck for it to test expressions; then we could use the new extended SFINAE to test things.
This however means we'd basically have to throw away and rewrite all the concept work that has already been done.

I think it's worth the effort, IMHO.
participants (24)
- Andrew Sutton
- Daniel Walker
- Dave Abrahams
- David Abrahams
- Dean Michael Berris
- Doug Gregor
- Edward Diener
- Eric Niebler
- Felipe Magno de Almeida
- Jeremiah Willcock
- Jeremy Maitin-Shepard
- joaquin@tid.es
- joel falcou
- Joel Falcou
- John B. Turpish
- John Bytheway
- Manfred Doudar
- Mathias Gaunard
- Robert Ramey
- Roland Bock
- Sebastian Redl
- Smith, Jacob N
- Steven Watanabe
- Stewart, Robert