[contract] concepts: pseudo-signatures vs. usage patterns

On Tue, Oct 9, 2012 at 7:16 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Oct 09 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
I made a first attempt to sketch how N3351-like concepts will look in Boost.Contract (just for the find algorithm and its concepts for now): [...] Please tell me what you think now.
Since you asked what I think,
"pseudo-signatures > usage patterns"
Hello all,

Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?

Who wants to start? Matt, Dave, Andrzej, ... I can compile a qbk table with what we discuss.

Thanks.
--Lorenzo

On Wed, Oct 10, 2012 at 1:08 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
Who wants to start? Matt, Dave, Andrzej, ... I can compile a qbk table with what we discuss.
We should possibly take this discussion to https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-discussi... or https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-proposal... as Dave suggested in the other thread.

First, some initial properties -- some of these aren't really clear pros or cons, just differences as they come to mind, unorganized. I am not on the standards committee and am definitely not an expert concerning the usage-patterns approach, so other people could probably produce much better lists than these (it'd be nice to have, e.g., Doug Gregor and Andrew poke their heads in here, but I know they're busy). I'm also biased toward pseudo-signatures, as is Dave at least (I don't know about Andrzej), so we really need someone more committed to usage patterns to highlight the benefits and drawbacks.

Usage-patterns:
- Usage-patterns show valid expressions and properties of expressions that deal with the concept's arguments (mostly subjective whether this is a pro or con)
- Concept definitions are very different syntactically from other definitions in C++ (possible con)
- An individual required usage pattern may be more complicated than a single function call (possible pro)
- A usage pattern cannot have a default implementation (con)
- The use of givens more closely matches the way concepts were specified in previous standards (pro, though this only realistically affects people who deal with the standard)
- Exactly what types of argument can be passed to associated functions may not be apparent to the user, since you have to deduce a function declaration based on the types of the givens (and consider convertibility, etc.) (con)
- Generation of sensible archetypes is not immediately apparent (at least to me), particularly when considering associated function parameter types, since again those types would have to be deduced from the arguments being passed, not from a function signature (con)
- N3351 doesn't mention archetypes at all (*cough* they're important *cough*) (con)

Pseudo-signatures:
- A concept definition looks and "works" very similarly to the definition of a type that models the concept being defined (IMO big pro)
- Writing and reading a generic algorithm that deals with a concept is very similar to writing and reading an algorithm that deals with a type definition, since you are essentially writing to an archetype (pro)
- Pseudo-signatures can have defaults (pro)
- Applying noexcept to an associated function requirement is very simple with pseudo-signatures, in the same way that you'd augment a regular function declaration (though N2914 didn't yet have noexcept) (pro)
- Concept definitions do not look like the way concepts were specified in previous standards (con)
- "Proper" archetype generation from pseudo-signatures is much more clear than with usage patterns, though usage patterns here may not be as bad as I am imagining (pro)
- Archetypes are fundamental to N2914 (pro)
- This approach made it very far through the standardization process, and its language/library components were chiseled into what ended up in N2914

Again, I am biased toward the pseudo-signature approach, so I may not be giving usage patterns the representation they deserve. Also, while I have a lot of experience emulating N2914 in C++11 and know it very well, I would not consider myself an expert. -- -Matt Calabrese
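For concreteness, here is how the usage-pattern flavor of one requirement can be emulated today as a C++11 detection trait (a minimal sketch with hypothetical names, not Boost.Contract's or Boost.Generic's actual machinery): "a == b is valid and convertible to bool". The pseudo-signature flavor would instead declare bool operator==(T, T); and, as discussed later in the thread, additionally pin the result to exactly bool inside constrained code.

#include <type_traits>
#include <utility>

template <class T, class = void>
struct EqUsagePattern : std::false_type {};     // default: not modeled

template <class T>
struct EqUsagePattern<T,
    typename std::enable_if<
        std::is_convertible<
            decltype(std::declval<T const&>() == std::declval<T const&>()),
            bool>::value>::type>
    : std::true_type {};                        // a == b valid, result -> bool

struct NoEq {};
static_assert( EqUsagePattern<int>::value, "int models the usage pattern");
static_assert(!EqUsagePattern<NoEq>::value, "NoEq has no ==");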

On 10/10/12 21:17, Matt Calabrese wrote:
On Wed, Oct 10, 2012 at 1:08 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
Who wants to start? Matt, Dave, Andrzej, ... I can compile a qbk table with what we discuss.
We should possibly take this discussion to https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-discussi...
https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-proposal... as Dave suggested in the other thread.
First, some initial properties -- some of these aren't really clear pros or cons, just differences as they come to mind, unorganized. I am not on the standards committee and am definitely not an expert concerning the usage-patterns approach, so other people could probably produce much better lists than these. I'm also biased toward pseudo-signatures, as is Dave at least (I don't know about Andrzej), so we really need someone more committed to usage patterns to highlight the benefits and drawbacks.

Usage-patterns:
- Usage-patterns show valid expressions and properties of expressions that deal with the concept's arguments (mostly subjective whether this is a pro or con)

In simple cases this is not an advantage, but there are cases where writing the pseudo-signature is quite complex (or even impossible) if we don't want to say more than what the usage pattern says. For example,

f(a)

requires that there is a unique overload of f that can be found for f(a). With pseudo-signatures we could state that there is a function f with a parameter T and that a is implicitly convertible to T:

void f(T) requires (Convertible<decltype(a), T>)

Note that a model that defines

int f(A)

satisfies the usage pattern f(a) but not the pseudo-signature. I don't see how to introduce the type R if it is not part of the concept parameters.
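As a C++11 detection sketch, just to make the point concrete (hypothetical names): the usage pattern "f(a) is valid" is satisfied by a model declaring int f(A), with no constraint at all on the return type.

#include <type_traits>
#include <utility>

struct A {};
int f(A);   // the model's overload: returns int, not void

template <class T, class = void>
struct UsagePatternFOk : std::false_type {};

template <class T>
struct UsagePatternFOk<T, decltype(void(f(std::declval<T>())))>
    : std::true_type {};    // "f(a) is a valid expression" for a of type T

static_assert(UsagePatternFOk<A>::value,
              "f(a) is valid; the usage pattern says nothing about the result");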
- Concept definitions are very different syntactically from other definitions in C++ (possible con)
- An individual required usage pattern may be more complicated than a single function call (possible pro)
- A usage pattern cannot have a default implementation (con)
- The use of givens more closely matches the way concepts were specified in previous standards (pro, though this only realistically affects people who deal with the standard)
- Exactly what types of argument can be passed to associated functions may not be apparent to the user, since you have to deduce a function declaration based on the types of the givens (and consider convertibility, etc.) (con)

- Generation of sensible archetypes is not immediately apparent (at least to me), particularly when considering associated function parameter types, since again those types would have to be deduced from the arguments being passed, not from a function signature. (con)

Well, this is a problem for the concept developer only, as she needs to do this work just once.

- N3351 doesn't mention archetypes at all (*cough* they're important *cough*) (con)

Pseudo-signatures:
- A concept definition looks and "works" very similarly to the definition of a type that models the concept being defined (IMO big pro)
- Writing and reading a generic algorithm that deals with a concept is very similar to writing and reading an algorithm that deals with a type definition, since you are essentially writing to an archetype (pro)
- Pseudo-signatures can have defaults (pro)

I find this a big advantage.

- Applying noexcept to an associated function requirement is very simple with pseudo-signatures, in the same way that you'd augment a regular function declaration (though N2914 didn't yet have noexcept) (pro)
- Concept definitions do not look like the way concepts were specified in previous standards (con)
- "Proper" archetype generation from pseudo-signatures is much more clear than with usage patterns, though usage patterns here may not be as bad as I am imagining (pro)
- Archetypes are fundamental to N2914 (pro)
- This approach made it very far through the standardization process, and its language/library components were chiseled into what ended up in N2914

Again, I am biased toward the pseudo-signature approach, so I may not be giving usage patterns the representation they deserve. Also, while I have a lot of experience emulating N2914 in C++11 and know it very well, I would not consider myself an expert.

In the context of Boost.Contract, I would add a pro for pseudo-signatures: we can associate preconditions with the pseudo-signatures, which seems quite complex for usage patterns. We could even add concept invariants and know exactly when these should be checked by the model.

Has anyone thought for a moment that we could use both mechanisms, depending on the specific context?

Just my 2cts.
Vicente

On Wed, 10 Oct 2012, Vicente J. Botet Escriba wrote: (snip)
Usage-patterns:
- Usage-patterns show valid expressions and properties of expressions that deal with the concept's arguments (mostly subjective whether this is a pro or con)

In simple cases this is not an advantage, but there are cases where writing the pseudo-signature is quite complex (or even impossible) if we don't want to say more than what the usage pattern says. For example,

f(a)

requires that there is a unique overload of f that can be found for f(a). With pseudo-signatures we could state that there is a function f with a parameter T and that a is implicitly convertible to T:

void f(T) requires (Convertible<decltype(a), T>)

Note that a model that defines

int f(A)

satisfies the usage pattern f(a) but not the pseudo-signature. I don't see how to introduce the type R if it is not part of the concept parameters.
Where would you get "a" from in this context in a pseudosignature? Assuming you had it (for example, it was the result type of an expression built using declval), you could write:

void f(decltype(a));

and any f that can be called on a value of type decltype(a) would automatically be found (either in the outer scope or in a concept map). Additionally, if you called this f from a constrained template, you would always get the version that took decltype(a), even if what you called f with would need to be implicitly converted to decltype(a) to use that function (and you would not get any other overloads of f that are more specialized for the actual type you called f with in the constrained template).

Similarly, if you write:

bool operator==(T, T);

in a pseudosignature, you can assume that the type you get back from a call to this function in a constrained call will be exactly bool, not some type convertible to bool. If the operator== found in the surrounding context (for an auto concept) had a different return type, the one in the generated concept map would have a compiler-inserted conversion to bool wrapped around it. This applies to the convertibility syntax in N3351 as well, I believe.
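A tiny C++11 sketch of that conversion injection (hypothetical names; not N2914's or Boost.Generic's actual machinery): in constrained code, == would only ever be reached through a generated shim whose return type is exactly bool.

struct Fuzzy { operator bool() const { return true; } };

struct Pt { int v; };
Fuzzy operator==(Pt, Pt) { return Fuzzy(); }   // returns Fuzzy, not bool

// roughly what a compiler could synthesize for the pseudosignature
// `bool operator==(T, T);` when checking that Pt models the concept:
namespace concept_map_for_Pt {
    inline bool equal(Pt const& a, Pt const& b) {
        return a == b;   // compiler-inserted conversion Fuzzy -> bool
    }
}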
- Generation of sensible archetypes is not immediately apparent (at least to me), particularly when considering associated function parameter types since again, those types would have to be deduced from arguments being passed, not a function signature. (con) Well, this is always a problem for the concept developer as she needs to do this work once.
The use pattern syntax does not provide a way to write archetypes manually; if you wanted to do archetype-style checking of constrained templates outside particular uses, you would need to have a compiler that generated archetypes or used similar techniques for checking.
Has anyone thought for a moment that we could use both mechanisms, depending on the specific context?
It would be possible (you can translate automatically in either direction, especially with the return-type syntax for use patterns in N3351). -- Jeremiah Willcock

On Wed, Oct 10, 2012 at 5:06 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Usage-patterns:
- Usage-patterns show valid expressions and properties of expressions that deal with the concept's arguments (mostly subjective whether this is a pro or con)

In simple cases this is not an advantage, but there are cases where writing the pseudo-signature is quite complex (or even impossible) if we don't want to say more than what the usage pattern says. For example,

f(a)

requires that there is a unique overload of f that can be found for f(a). With pseudo-signatures we could state that there is a function f with a parameter T and that a is implicitly convertible to T:

void f(T) requires (Convertible<decltype(a), T>)

Note that a model that defines

int f(A)

satisfies the usage pattern f(a) but not the pseudo-signature. I don't see how to introduce the type R if it is not part of the concept parameters.
Actually, this all works with the pseudo-signatures of N2914 and Boost.Generic. Note that when you make a pseudo-signature such as

void f( T );

you are NOT requiring that the model in question has a signature that matches exactly, nor that the return type must be void. The match is loose, including convertibility of arguments and results, and if a return type is specified as void in a pseudo-signature, it will match fine with functions that do not return void (just don't try to use the return type as though it were something other than void when in a constrained context).
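As a small sketch of that loose match (hypothetical names, not Boost.Generic's actual output): a generated concept map can route calls through a shim pinned to the pseudo-signature, so a model's int f(A) satisfies void f( T ); with the int result simply discarded in constrained code.

struct A {};
int f(A) { return 42; }   // the model's actual function: returns int

namespace generated_concept_map {
    inline void f(A const& a) { ::f(a); }   // pinned to `void f( T );`
}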
- Generation of sensible archetypes is not immediately apparent (at least to me), particularly when considering associated function parameter types, since again those types would have to be deduced from the arguments being passed, not from a function signature. (con)

Well, this is a problem for the concept developer only, as she needs to do this work just once.
In N2914 this isn't really a problem even for someone creating a concept since archetypes are automatically generated/used internally by the compiler. The developer of a concept should not have to manually create an archetype on their own.
Has anyone thought for a moment that we could use both mechanisms, depending on the specific context?
I would love to have a chance to develop and use libraries built around both approaches. At the very least, if Lorenzo and I get archetype generation working with both approaches and coordinate a little bit, we can, in theory, also create a macro for writing constrained algorithms that works with either kind of concept. I'm not sure I'd like a library or the standard to have both approaches only because they seem too redundant at this point. I think I'd only be in favor of that if it can be shown both that usage patterns can do something much more concisely and effectively than pseudo-signatures as well as vice versa. If that can't be done, I'd rather just have one or the other. -- -Matt Calabrese

2012/10/10 Matt Calabrese <rivorus@gmail.com>
On Wed, Oct 10, 2012 at 5:06 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Usage-patterns:
- Usage-patterns show valid expressions and properties of expressions that deal with the concept's arguments (mostly subjective whether this is a pro or con)

In simple cases this is not an advantage, but there are cases where writing the pseudo-signature is quite complex (or even impossible) if we don't want to say more than what the usage pattern says. For example,

f(a)

requires that there is a unique overload of f that can be found for f(a). With pseudo-signatures we could state that there is a function f with a parameter T and that a is implicitly convertible to T:

void f(T) requires (Convertible<decltype(a), T>)

Note that a model that defines

int f(A)

satisfies the usage pattern f(a) but not the pseudo-signature. I don't see how to introduce the type R if it is not part of the concept parameters.
Actually, this all works with pseudo-signatures of N2914 and Boost.Generic. Note that when you make a pseudo-signature such as
void f( T );
you are NOT requiring that the model in question has a signature that matches exactly, nor that the return type must be void. The match is loose, including convertibility of arguments and results, and if a return type is specified as void in a pseudo-signature, it will match fine with functions that do not return void (just don't try to use the return type as though it were something other than void when in a constrained context).
"The match is loose" -- I believe that this "loose match" may be a problem in itself. The pseudo-signatures look like any other signature so one might expect that they also work as any other signature. One would be surprised to learn they are only similar with other signatures on the surface, but differ in semantics due to this "loose match". In contrast, with usage patterns you are already alerted with the unusual syntax that the meaning of the declarations is unusual. My comment may be very superficial though. I do not know pseudo-signatures in-depth. And I do not find pseudo-signatures nferior. I only argue with this "loose match" idea. Regards, &rzej

on Thu Oct 11 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
2012/10/10 Matt Calabrese <rivorus@gmail.com>
you are NOT requiring that the model in question has a signature that matches exactly, nor that the return type must be void. The match is loose, including convertibility of arguments and results, and if a return type is specified as void in a pseudo-signature, it will match fine with functions that do not return void (just don't try to use the return type as though it were something other than void when in a constrained context).
"The match is loose" -- I believe that this "loose match" may be a problem in itself. The pseudo-signatures look like any other signature so one might expect that they also work as any other signature. One would be surprised to learn they are only similar with other signatures on the surface, but differ in semantics due to this "loose match". In contrast, with usage patterns you are already alerted with the unusual syntax that the meaning of the declarations is unusual.
My comment may be very superficial though. I do not know pseudo-signatures in-depth. And I do not find pseudo-signatures nferior. I only argue with this "loose match" idea.
Yes, it's counterintuitive, and at first, everyone has a superficial understanding and argues with it. But it's actually what you want. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

on Wed Oct 10 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
On Tue, Oct 9, 2012 at 7:16 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Oct 09 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
I made a first attempt to sketch how N3351-like concepts will look in Boost.Contract (just for the find algorithm and its concepts for now): [...] Please tell me what you think now.
Since you asked what I think,
"pseudo-signatures > usage patterns"
Hello all,
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
Who wants to start? Matt, Dave, Andrzej, ... I can compile a qbk table with what we discuss.
Matt C. already mentioned the ease of producing a new model of a given concept. I will mention that usage patterns tend to leave unstated assumptions about intermediate associated types that make it very hard to be sure you're saying what you really mean, and tend to lead to semantically-vague requirements.

Look for example at http://www.sgi.com/tech/stl/find_if.html. I can easily satisfy all the stated requirements and produce an example that won't compile. I can also easily fail to satisfy the stated requirements and produce an example that *will* compile. That's because there's a great deal of potential mischief hiding in expressions such as

f(*p)

For example, what is the relationship between the type of *p and the iterator's value type? What is the relationship between that type and the argument type of the function object? Even if the reference type of p is convertible to the argument type of f, the call can still be ambiguous... etc.

This is a perfect example of what you get when you deal in usage patterns, and the C++98 (and even the '03) standard is full of these problems. One of the most important features of the pseudo-signature approach is that it injects implicit conversions to (explicitly-stated) associated types, so these issues don't arise. I don't know how to make a tidy little bullet out of this issue for your list of pros and cons, but I think it's the most important one on the list. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

Look for example at http://www.sgi.com/tech/stl/find_if.html. I can easily satisfy all the stated requirements and produce an example that won't compile. I can also easily fail to satisfy the stated requirements and produce an example that *will* compile. That's because there's a great deal of potential mischief hiding in expressions such as
f(*p)
Can you produce those examples? You've made me curious. Andrew

on Thu Oct 11 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
Look for example at http://www.sgi.com/tech/stl/find_if.html. I can easily satisfy all the stated requirements and produce an example that won't compile. I can also easily fail to satisfy the stated requirements and produce an example that *will* compile. That's because there's a great deal of potential mischief hiding in expressions such as
f(*p)
Can you produce those examples? You've made me curious.
Doesn't compile, but "should":

--8<---------------cut here---------------start------------->8---
#include <algorithm>
#include <vector>

struct Predicate
{
    typedef char argument_type;
    bool operator()(char) const;
    bool operator()(long) const;
};

std::vector<int> v(10);
std::vector<int>::iterator p =
    std::find_if(v.begin(), v.end(), Predicate());
--8<---------------cut here---------------end--------------->8---

If you take the standard definition of "input iterator" instead of the SGI one, the examples are even more compelling, because the result type of "*p" merely has to be "convertible to T", so it's easy to arrange a Predicate with no overloads that fails to accept the argument because too many implicit conversions are required.

For the "shouldn't compile, but does" example I was presuming the standard definition of "input iterator;" just make an input iterator that returns a type X convertible to the value type, and make the predicate's operator() accept X but not the value type. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost
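To make the second case concrete, here is a hedged sketch (hypothetical names) of the "shouldn't compile, but does" construction: an input iterator whose reference type is a proxy X convertible to the value type, and a predicate callable with X but not with the value type.

#include <algorithm>
#include <cstddef>
#include <iterator>

struct X { operator int() const { return 0; } };     // convertible to int

struct ProxyIter {
    typedef std::input_iterator_tag iterator_category;
    typedef int                     value_type;      // value type is int...
    typedef std::ptrdiff_t          difference_type;
    typedef const int*              pointer;
    typedef X                       reference;       // ...but *it yields X

    X operator*() const { return X(); }
    ProxyIter& operator++() { return *this; }
    ProxyIter operator++(int) { return *this; }
    bool operator==(ProxyIter) const { return true; }
    bool operator!=(ProxyIter) const { return false; }  // empty range
};

struct PredX {
    bool operator()(X) const { return true; }   // accepts X, but not int:
};                                              // there is no int -> X conversion

int main() {
    // compiles with a typical implementation (pred is applied to *it, i.e.
    // to X directly), even though the stated requirement -- a predicate
    // callable on the value type -- is not satisfied:
    ProxyIter r = std::find_if(ProxyIter(), ProxyIter(), PredX());
    (void)r;
}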

Doesn't compile, but "should":

struct Predicate
{
    typedef char argument_type;
    bool operator()(char) const;
    bool operator()(long) const;
};

std::vector<int> v(10);
std::vector<int>::iterator p =
    std::find_if(v.begin(), v.end(), Predicate());
I don't think it should compile. Predicate doesn't seem like it can be called with an int in any context, so a concept that catches that seems to be doing the right thing.
If you take the standard definition of "input iterator" instead of the SGI one, the examples are even more compelling, because the result type of "*p" merely has to be "convertible to T", so it's easy to arrange a Predicate with no overloads that fails to accept the argument because too many implicit conversions are required.
For the "shouldn't compile, but does" example I was presuming the standard definition of "input iterator;" just make an input iterator that returns a type X convertible to the value type, and make the predicate's operator() accept X but not the value type.
I can see how those might be problematic. I'll have to tinker with some concrete examples to get a better understanding. Andrew

on Thu Oct 11 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
Doesn't compile, but "should"
struct Predicate
{
    typedef char argument_type;
    bool operator()(char) const;
    bool operator()(long) const;
};

std::vector<int> v(10);
std::vector<int>::iterator p =
    std::find_if(v.begin(), v.end(), Predicate());
I don't think it should compile. Predicate doesn't seem like it can be called with an int in any context, so a concept that catches that seems to be doing the right thing.
You're missing the point. I claim the associated "argument type" of Predicate is char. The value type of the sequence (int) is convertible to the associated argument type. That's all that's required according to the text.
If you take the standard definition of "input iterator" instead of the SGI one, the examples are even more compelling, because the result type of "*p" merely has to be "convertible to T", so it's easy to arrange a Predicate with no overloads that fails to accept the argument because too many implicit conversions are required.
For the "shouldn't compile, but does" example I was presuming the standard definition of "input iterator;" just make an input iterator that returns a type X convertible to the value type, and make the predicate's operator() accept X but not the value type.
I can see how those might be problematic. I'll have to tinker with some concrete examples to get a better understanding.
You can work around these kinds of issues by forcing the implicit conversions in algorithms, but the result is that algorithms get ugly. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost
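A hedged sketch of that workaround, in the spirit of the libstdc++ code cited below (a hand-written illustration, not its actual text): the algorithm forces each value through the stated associated types, so only the documented conversions ever participate in overload resolution.

#include <iterator>

template <class InputIterator, class Predicate>
InputIterator find_if_forced(InputIterator first, InputIterator last,
                             Predicate pred) {
    typedef typename std::iterator_traits<InputIterator>::value_type V;
    for (; first != last; ++first)
        // force *first through the value type, and the predicate's result
        // through bool -- exactly the "ugly" contortions mentioned above:
        if (static_cast<bool>(pred(static_cast<V>(*first))))
            return first;
    return last;
}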

You're missing the point. I claim the associated "argument type" of Predicate is char. The value type of the sequence (int) is convertible to the associated argument type. That's all that's required according to the text.
Sorry... you're right. If the requirement is that the value type is convertible to the argument type, then yes. Fortunately that's not what we required in n3351.

on Thu Oct 11 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
You're missing the point. I claim the associated "argument type" of Predicate is char. The value type of the sequence (int) is convertible to the associated argument type. That's all that's required according to the text.
Sorry... you're right. If the requirement is that the value type is convertible to the argument type, then yes. Fortunately that's not what we required in n3351.
OK. For examples of forced conversions inserted to deal with this issue, search for "pred" in http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/include/bits/stl_algo.h?vi... Looking at N3351, I wonder if you implemented these algorithms and threw strict archetypes at the implementations. It looks very much like you would need similar contortions. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

For examples of forced conversions inserted to deal with this issue, search for "pred" in http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/include/bits/stl_algo.h?vi...
Interesting. It's terrifying that somebody should have to do that.
Looking at N3351, I wonder if you implemented these algorithms and threw strict archetypes at the implementations. It looks very much like you would need similar contortions.
We did (omitting a handful), with constraints mostly matching what appeared in N3351, but the archetype framework wasn't in place until some time later. If I remember correctly, we assumed that a conversion requirement on the result of an operation would actually convert (if necessary). You couldn't, for example, return tribool from == and expect the compiler to pick up overloads for &&, ||, and !. That would be a bad thing (unless your generic algorithm was parameterized over the underlying logic). I'm not sure if you can successfully guard against this using SFINAE constraints; you'd need to write a ton of explicit conversions (like above). Or language support, of course.
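A concrete sketch of that tribool hazard (hypothetical; tribool here is a stand-in for boost::logic::tribool): under a bare "== convertible to bool" requirement, an unconstrained algorithm silently picks up tribool's operators instead of bool logic.

struct tribool {               // stand-in; 0 = false, 1 = true, 2 = indeterminate
    int state;
    operator bool() const { return state == 1; }   // "convertible to bool"
};

tribool operator&&(tribool x, tribool y) {         // three-valued conjunction
    if (x.state == 0 || y.state == 0) return tribool{0};
    if (x.state == 1 && y.state == 1) return tribool{1};
    return tribool{2};
}

struct Pt { int v; };
tribool operator==(Pt a, Pt b) { return tribool{a.v == b.v ? 1 : 0}; }

template <class T>
bool all_equal(T a, T b, T c) {
    // the conversions the constraint promises, forced explicitly; plain
    // `a == b && a == c` would instead select operator&&(tribool, tribool),
    // i.e. a logic the algorithm never reasoned about:
    return bool(a == b) && bool(a == c);
}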

on Thu Oct 11 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
For examples of forced conversions inserted to deal with this issue, search for "pred" in http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/include/bits/stl_algo.h?vi...
Interesting. It's terrifying that somebody should have to do that.
What scares me is that this issue was well-known during the development of concepts for C++0x, and yet somehow it didn't get communicated to the people currently working on the problem.
Looking at N3351, I wonder if you implemented these algorithms and threw strict archetypes at the implementations. It looks very much like you would need similar contortions.
We did (omitting a handful), with constraints mostly matching what appeared in N3351, but the archetype framework wasn't in place until some time later.
If I remember correctly, we assumed that a conversion requirement on the result of an operation would actually convert (if necessary).
What does "if necessary" mean? I can think of at least two different ways to decide whether a conversion is necessary.
You couldn't, for example, return tribool from == and expect the compiler to pick up overloads for &&, ||, and !.
I don't understand quite how this can be an example of the previous statement (or what you're driving at overall).
That would be a bad thing (unless your generic algorithm was parameterized over the underlying logic).
-- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

If I remember correctly, we assumed that a conversion requirement on the result of an operation would actually convert (if necessary).
What does "if necessary" mean? I can think of at least two different ways to decide whether a conversion is necessary.
If the expression returns bool, no conversion is necessary. If it does not, then a conversion is needed.
You couldn't, for example, return tribool from == and expect the compiler to pick up overloads for &&, ||, and !.
I don't understand quite how this can be an example of the previous statement (or what you're driving at overall).
Perhaps I misunderstood what you're aiming at. Like I said, I need to tinker with concrete examples to better understand the issues (I'm very much a hands-on learner).

on Fri Oct 12 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
If I remember correctly, we assumed that a conversion requirement on the result of an operation would actually convert (if necessary).
What does "if necessary" mean? I can think of at least two different ways to decide whether a conversion is necessary.
If the expression returns bool, no conversion is necessary. If it does not, then a conversion is needed.
Sorry, that's still too vague to tell me what you mean. Surely this has nothing to do with "bool-ness," so I have to assume bool shows up in an example you have in mind. Could you show a concrete example with specific requirements?
You couldn't, for example, return tribool from == and expect the compiler to pick up overloads for &&, ||, and !.
I don't understand quite how this can be an example of the previous statement (or what you're driving at overall).
Perhaps I misunderstood what you're aiming at.
Perhaps, but I think it's more likely that you simply have something specific in mind that you haven't spelled out, and I am unable to fill in the details without help. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

2012/10/12 Dave Abrahams <dave@boostpro.com>
on Fri Oct 12 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
If I remember correctly, we assumed that a conversion requirement on the result of an operation would actually convert (if necessary).
What does "if necessary" mean? I can think of at least two different ways to decide whether a conversion is necessary.
If the expression returns bool, no conversion is necessary. If it does not, then a conversion is needed.
Sorry, that's still too vague to tell me what you mean. Surely this has nothing to do with "bool-ness," so I have to assume bool shows up in an example you have in mind. Could you show a concrete example with specific requirements?
Pardon me if I completely misunderstood your concern. This is how I understand the issue you describe. We have a concept requirement like this:

concept Operation<typename Op, typename T> = requires(Op op, T t) {
    //...
    bool = { op(t) };   // "convertible to bool"
}

We have the following function template:

template< typename Op, typename T >
    requires Operation<Op, T>
bool fun(Op op, T t) {
    return true && op(t);
}

We have the following concrete types that model Operation:

class FunnyBool {...};                  // almost like bool
class X {...};                          // which will become T
FunnyBool operation(X x) {...}          // which will become Op
bool operator&&(bool a, FunnyBool b);   // which will do some unexpected thing

When we instantiate

fun(operation, X{});

which operator&& will be picked: (bool, bool) or (bool, FunnyBool)? This is the question you ask. Am I right?

If so, the answer I find most logical is this: since the concept is only aware that the type is convertible to bool and can rely only on this information, when function fun() is instantiated with operation and X, it sees only a restricted version of operation and X (and indirectly FunnyBool), as though 'through concept Operation', and must convert FunnyBool to bool before applying operator&&.

Of course, this is achievable only with concepts as a language feature. Any use-pattern-based concept library will not be able to implement this behavior, as far as I am aware.

Regards,
&rzej
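Filling in the bodies makes the example runnable and shows what happens today, without concepts (all names hypothetical): the unconstrained template picks the "unexpected" overload, because FunnyBool matches operator&&(bool, FunnyBool) exactly while the built-in && would need a conversion.

#include <iostream>

struct FunnyBool {
    bool value;
    operator bool() const { return value; }   // convertible to bool, as required
};
struct X {};

FunnyBool operation(X) { return FunnyBool{true}; }

bool operator&&(bool, FunnyBool) {             // "some unexpected thing"
    std::cout << "operator&&(bool, FunnyBool) chosen\n";
    return false;
}

template <class Op, class T>
bool fun(Op op, T t) {
    // unconstrained template: exact match on FunnyBool wins; a
    // concept-checked fun() would convert op(t) to bool first and
    // use the built-in &&:
    return true && op(t);
}

int main() {
    fun(operation, X{});   // prints the message above
}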

on Sat Oct 13 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
2012/10/12 Dave Abrahams <dave@boostpro.com>
on Fri Oct 12 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
If I remember correctly, we assumed that a conversion requirement on the result of an operation would actually convert (if necessary).
What does "if necessary" mean? I can think of at least two different ways to decide whether a conversion is necessary.
If the expression returns bool, no conversion is necessary. If it does not, then a conversion is needed.
Sorry, that's still too vague to tell me what you mean. Surely this has nothing to do with "bool-ness," so I have to assume bool shows up in an example you have in mind. Could you show a concrete example with specific requirements?
Pardon me if I completely misunderstood your concern.
BTW, Doug Gregor is the one who explained all this to me, years ago, so it's his concern first.
This is how I understand the issue you describe:
We have a concept requirement like this:
concept Operation<typename Op, typename T> = requires(Op op, T t) {
    //...
    bool = { op(t) };   // "convertible to bool"
}
I have to admit that I'm not entirely familiar with the new-fangled syntax being proposed, but I think I can muddle through your use of it here. Operation<Op,T> says that declval<Op>()(declval<T>()) is valid and convertible to bool, right?
We have the following function template
template< typename Op, typename T >
    requires Operation<Op, T>
bool fun(Op op, T t) {
    return true && op(t);
}
We have a following concrete type that models Operation:
class FunnyBool {...};                  // almost like bool
class X {...};                          // which will become T
FunnyBool operation(X x) {...}          // which will become Op
bool operator&&(bool a, FunnyBool b);   // which will do some unexpected thing
Yes, you're on the right track here...
When we instantiate:
fun(operation, X{});
which operator&& will be picked? (bool, bool) or (bool, FunnyBool)?
This is the question you ask.
Well, I'm not sure that my entire concern can be captured in this one example, but this question is certainly an instance of my concern.
Am I right? If so, the answer I find most logical is this: since the concept is only aware that the type is convertible to bool and can rely only on this information, when function fun() is instantiated with operation and X, it sees only a restricted version of operation and X (and indirectly FunnyBool), as though 'through concept Operation', and must convert FunnyBool to bool before applying operator&&.
That's what the pseudo-signature approach does, so you and I agree on that much. In fact, these are the "loose match" semantics you were objecting to earlier. :-) Notationally speaking, I think pseudo-signatures are *much* more suggestive of those semantics than are valid expressions.

The bigger problem is that the valid-expression approach also makes it possible/easy to specify requirements where the type corresponding to bool in this example is never explicitly named, e.g.:

bool = { true && op(t) };
value_type = { *it++ };

etc. IIUC these sorts of things cause big problems for typechecking (exponential explosions?) and usability, but I'd have to defer to others to describe in detail what those issues are.
Of course this is achievable only in concepts as language feature. Any use-pattern-based concept library will not be able to implement this behavior, as far as I am aware.
On the other hand, a pseudo-signature-based library conceivably could. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

This is how I understand the issue you describe:
We have a concept requirement like this:
concept Operation<typename Op, typename T> = requires(Op op, T t) {
    //...
    bool = { op(t) };   // "convertible to bool"
}
I have to admit that I'm not entirely familiar with the new-fangled syntax being proposed, but I think I can muddle through your use of it here. Operation<Op,T> says that declval<Op>()(declval<T>()) is valid and convertible to bool, right?
Right.
We have the following function template
template< typename Op, typename T >
    requires Operation<Op, T>
bool fun(Op op, T t) {
    return true && op(t);
}
We have a following concrete type that models Operation:
class FunnyBool {...};                  // almost like bool
class X {...};                          // which will become T
FunnyBool operation(X x) {...}          // which will become Op
bool operator&&(bool a, FunnyBool b);   // which will do some unexpected thing
Yes, you're on the right track here...
When we instantiate:
fun(operation, X{});
which operator&& will be picked? (bool, bool) or (bool, FunnyBool)?
This is the question you ask.
Well, I'm not sure that my entire concern can be captured in this one example, but this question is certainly an instance of my concern.
Am I right? If so, the answer I find most logical is this: since the concept is only aware that the type is convertible to bool and can rely only on this information, when function fun() is instantiated with operation and X, it sees only a restricted version of operation and X (and indirectly FunnyBool), as though 'through concept Operation', and must convert FunnyBool to bool before applying operator&&.
That's what the pseudo-signature approach does, so you and I agree on that much.
Yes. I would expect that either approach is capable of doing the same.
In fact, these are the "loose match" semantics you were objecting to earlier. :-)
I was not objecting to "loose match". I was trying to say that pseudo-signatures look like normal signatures and might imply that no "loose match" occurs. In contrast, usage patterns look like expressions, where you do expect implicit conversions. That said, I do not find this a major argument against pseudo-signatures.
Notationally speaking, I think pseudo-signatures are *much* more suggestive of those semantics than are valid expressions.
Could you show an example where this is the case? I may be missing something obvious, but I would say it is the other way around.
The bigger problem is that the valid expression approach also makes it possible/easy to specify requirements where the type corresponding to bool in this example is never explicitly named, e.g.:
bool = { true && op(t) };
value_type = { *it++ };
etc. IIUC these sorts of things cause big problems for typechecking (exponential explosions?) and usability, but I'd have to defer to others to describe in detail what those issues are.
Of course this is achievable only in concepts as language feature. Any use-pattern-based concept library will not be able to implement this behavior, as far as I am aware.
On the other hand, a pseudo-signature-based library conceivably could.
And since this discussion started because Lorenzo wonders which of the two approaches to take for his library, perhaps this limitation gives us the answer. Regards, &rzej

On Sat, Oct 13, 2012 at 2:40 PM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
Of course this is achievable only in concepts as language feature. Any use-pattern-based concept library will not be able to implement this behavior, as far as I am aware.
On the other hand, a pseudo-signature-based library conceivably could.
And since this discussion started because Lorenzo wonders which of the two approaches to take for his library, perhaps this limitation gives us the answer.
... which is sounding more and more like Lorenzo should leave N3351 alone, help Matt by at least implementing front-end macros like the ones below (s/CONTRACT_CONCEPT/BOOST_GENERIC_CONCEPT or similar), and use concepts defined using Boost.Generic in the Boost.Contract requires clause. http://contractpp.svn.sourceforge.net/viewvc/contractpp/trunk/doc/html/contr... --Lorenzo

On Sat, Oct 13, 2012 at 5:52 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
... which is sounding more and more like Lorenzo should leave N3351 alone, help Matt by at least implementing front-end macros like the ones below (s/CONTRACT_CONCEPT/BOOST_GENERIC_CONCEPT or similar), and use concepts defined using Boost.Generic in the Boost.Contract requires clause.
http://contractpp.svn.sourceforge.net/viewvc/contractpp/trunk/doc/html/contr...
That would be great. I still think it would be awesome if we had both approaches implemented to the extent that is feasible, but I definitely welcome an upgraded front-end if that's what you decide to do. I think I'm fine now with not parenthesizing each "statement", despite my earlier reservations.

The only thing I'm not a fan of is the "keyword" extends. I used the keywords "concept" and "requires" for the macro because they will likely be actual keywords in a future standard, so having users highlight them in their IDE makes sense now anyway; if they use those words as identifiers elsewhere in their own code (separate from the macro), they'll probably want to change them regardless, otherwise their code will break with C++1y in the [hopefully] not TOO distant future. On the other hand, if we have users highlight "extends", it would highlight their own valid identifiers named "extends", which is a bit misleading since "extends" is only really a special word when used in the macro (this might not be as big a deal as I imagine, but if it can be avoided I think it's for the best). I hate that Visual Studio highlighted "array" as a keyword, and I don't like the idea of encouraging people to manually do something similar. I used "public" in the current implementation for refinement just because it's already a keyword and I couldn't think of one that was a better fit. If the front-end MUST have its own special word for refinement, as opposed to some other preprocessor-detectable syntax (such as something like (,) ), I'd rather a standard or future-standard keyword be used than "extends" or "refines". Similarly, instead of using "as" for associated type defaults, I think "default" might be better, even though it's a bit more verbose.

Ultimately it's obviously up to you, given that you're volunteering to do the front-end overhaul; this is just my two cents.

Also, is there a reason why you are using "void" instead of just nothing for empty parameter lists and concept/concept-map bodies? I realize the limitations of detecting emptiness in the preprocessor, but those limitations generally don't matter for this type of library, and I check for emptiness in lots of places in the current implementation. -- -Matt Calabrese

On Sat, Oct 13, 2012 at 4:42 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sat, Oct 13, 2012 at 5:52 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
... which is sounding more and more like Lorenzo should leave N3351 alone, help Matt by at least implementing front-end macros like the ones below (s/CONTRACT_CONCEPT/BOOST_GENERIC_CONCEPT or similar), and use concepts defined using Boost.Generic in the Boost.Contract requires clause.
http://contractpp.svn.sourceforge.net/viewvc/contractpp/trunk/doc/html/contr...
That would be great. I still think it would be awesome if we had both approaches implemented to the extent that is feasible, but I definitely welcome an upgraded front-end if that's what you decide to do. I think I'm fine now with not parenthesizing each "statement", despite my earlier reservations.
Also (like in Boost.Contract) the extra parentheses are not required but are accepted around known alphanumeric identifiers like bool, void, etc. If users want to "parenthesize each statement" for consistency, including known alphanumeric identifiers, they can freely do so (even though they are not required to).
The only thing I'm not a fan of is the "keyword" extends. I used the keywords "concept" and "requires" for the macro because they will likely be actual keywords in a future standard, so having users highlight them in their IDE makes sense now anyway; if they use those words as identifiers elsewhere in their own code (separate from the macro), they'll probably want to change them regardless, otherwise their code will break with C++1y in the [hopefully] not TOO distant future. On the other hand, if we have users highlight "extends", it would highlight their own valid identifiers named "extends", which is a bit misleading since "extends" is only really a special word when used in the macro (this might not be as big a deal as I imagine, but if it can be avoided I think it's for the best). I hate that Visual Studio highlighted "array" as a keyword, and I don't like the idea of encouraging people to manually do something similar. I used "public" in the current implementation for refinement just because it's already a keyword and I couldn't think of one that was a better fit. If the front-end MUST have its own special word for refinement, as opposed to some other preprocessor-detectable syntax (such as something like (,) ), I'd rather a standard or future-standard keyword be used than "extends" or "refines". Similarly, instead of using "as" for associated type defaults, I think "default" might be better, even though it's a bit more verbose.
Ultimately it's obviously up to you, given that you're volunteering to do the front-end overhaul; this is just my two cents.
I used extends because that's what Boost.Contract uses for inheritance (instead of the symbol ":", which cannot be used). I selected extends in Boost.Contract for inheritance because that's what Java uses. Refinement syntax uses ":" in the language, so I'd use extends in the lib. "default" instead of "as" probably makes sense. But we can have detailed discussions about all these things (which are trivial to implement, after all) if/when I start implementing the front-end macros.
Also, is there a reason why you are using "void" instead of just nothing for empty parameter lists and concept/concept map bodies?
Because the MSVC pp sucks :( and (sometimes) gives errors if you use ( ) instead of ( void ). The lib actually accepts both ( ) and ( void ); ( void ) is more portable (it consistently works on MSVC), but users can use ( ) if that's not a concern (e.g., GCC handles ( ) just fine, and so should Wave). That's the same story for Boost.Contract... I'm planning to go back and double-check exactly why MSVC breaks on ( ) before the Boost.Contract release (I might be able to implement a workaround).
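For what it's worth, here is a minimal illustration (hypothetical macro names, not Boost.Contract's actual implementation) of why the ( void ) sentinel is easier to handle portably than a truly empty ( ): detecting the literal token void needs no empty-argument tricks at all.

#define PP_CAT(a, b) PP_CAT_I(a, b)
#define PP_CAT_I(a, b) a##b

#define PP_SECOND(...) PP_SECOND_I(__VA_ARGS__, ~)
#define PP_SECOND_I(a, b, ...) b

#define IS_VOID_void ~, 1
#define IS_VOID(x) PP_SECOND(PP_CAT(IS_VOID_, x), 0)

// IS_VOID(void) expands to 1; IS_VOID(int) expands to 0 -- no macro
// argument is ever empty, which sidesteps the MSVC breakage.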
I realize the limitations of detecting emptiness in the preprocessor, but those limitations generally don't matter for this type of library and I check for emptiness in lots of places in the current implementation.
Does it consistently work on MSVC? Thanks, --Lorenzo

On Sat, Oct 13, 2012 at 9:30 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Does it consistently work on MSVC?
I decided early on to not bother trying to support MSVC, at least for the time being. Not only does its broken preprocessor mean I'd have to make a lot of workarounds, but it also doesn't support some of the language features I use to emulate N2914 concepts (this may have changed a little since last year, but I have a feeling not by much). -- -Matt Calabrese

On Sat, Oct 13, 2012 at 10:23 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sat, Oct 13, 2012 at 9:30 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Does it consistently work on MSVC?
I decided early on to not bother trying to support MSVC, at least for the time being.
Well, that simplifies things :) but it removes a large set of users :(
Not only does its broken preprocessor mean I'd have to make a lot of workarounds,
Boost.Preprocessor should already implement most of the workarounds needed. Some other workarounds (not too many) will be needed in Boost.Generic for empty (e.g., ( void ) instead of ( )) and to ensure proper macro expansion order (usually PP_EXPAND or similar, but re-implemented because of reentrancy). That's at least my experience with supporting both MSVC and GCC for Boost.Contract.
but it also doesn't support some of the language features I use to emulate N2914 concepts (this may have changed a little since last year, but I have a feeling not by much).
This is unfortunate because, again, a large number of users have to use MSVC... what were the MSVC issues that caused the Boost.Generic concept emulation code not to work (pp aside)? Thanks, --Lorenzo

On Sun, Oct 14, 2012 at 4:32 AM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Boost.Preprocessor should already implement most of the workarounds needed. Some other workarounds (not too many) will be needed in Boost.Generic for empty (e.g., ( void ) instead of ( )) and to ensure proper macro expansion order (usually PP_EXPAND or similar, but re-implemented because of reentrancy). That's at least my experience with supporting both MSVC and GCC for Boost.Contract.
At the time I wrote most of Boost.Generic, the variadic extensions to Boost.Preprocessor didn't exist and neither did the variadic macro data library in the sandbox, so I had to make most of the facilities from scratch. Supporting MSVC with my hand-rolled macros was secondary to getting something standard-compliant working (which is a big feat on its own).

This is unfortunate because, again, a large number of users have to use MSVC... what were the MSVC issues that caused the Boost.Generic concept emulation code not to work (pp aside)?
Off-hand, here are some of the features I require, not including a compliant preprocessor (I believe VC++ supports many of these now):

- the extended SFINAE rules
- variadic templates
- template aliases
- the relaxed typename rule
- rvalue references
- static_assert
- decltype
- default template arguments for function templates
- a compliant <type_traits>
- a compliant <utility>

-- -Matt Calabrese

On Sun, Oct 14, 2012 at 4:32 AM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Well, that simplifies things :) but it removes a large set of users :(
If you are really interested in this, you can try building the tests with MSVC. Commit any changes you want if you can come up with work-arounds, as long as you don't break GCC or Clang support in the process. If you decide to try this, it may help to follow the instructions for enabling preprocessed headers, which will likely avoid a plethora of preprocessor issues : http://generic.nfshost.com/generic/getting_started.html I haven't even bothered attempting to build with MSVC, so it will likely not be fun. -- -Matt Calabrese

On Sun, Oct 14, 2012 at 2:18 AM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sun, Oct 14, 2012 at 4:32 AM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Well, that simplifies things :) but it removes a large set of users :(
If you are really interested in this, you can try building the tests with MSVC. Commit any changes you want if you can come up with work-arounds, as long as you don't break GCC or Clang support in the process. If you decide to try this, it may help to follow the instructions for enabling preprocessed headers, which will likely avoid a plethora of preprocessor issues : http://generic.nfshost.com/generic/getting_started.html
I haven't even bothered attempting to build with MSVC, so it will likely not be fun.
Yeah, from what you say it sounds like MSVC is not currently supported but it should be possible to support it. I think if we were to release the lib in Boost, we should definitely try hard to support MSVC (but such an effort could even follow the lib's Boost review and approval, IMO). Thanks, --Lorenzo

On Sun, Oct 14, 2012 at 1:32 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Yeah, from what you say it sounds like MSVC is not currently supported but it should be possible to support it. I think if we were to release the lib in Boost, we should definitely try hard to support MSVC (but such an effort could even follow the lib's Boost review and approval, IMO).
Agreed, but review is still a ways off, if it even gets there. -- -Matt Calabrese

On Sat, Oct 13, 2012 at 5:52 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
http://contractpp.svn.sourceforge.net/viewvc/contractpp/trunk/doc/html/contr...
Another minor comment. For associated types and constrained parameters such as

template< ObjectType T >

I see you are using the syntax

template( typename(ObjectType) T )

I am currently using the following syntax in Boost.Generic:

template( ((ObjectType)) T )

This is for a couple of reasons. First, it's concise and closer to the standard syntax, but also because in the end that syntax should be able to be used for unary concepts whose parameter is not a type, in which case "typename" doesn't really make sense. -- -Matt Calabrese

On Sat, Oct 13, 2012 at 4:55 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sat, Oct 13, 2012 at 5:52 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
http://contractpp.svn.sourceforge.net/viewvc/contractpp/trunk/doc/html/contr...
Another minor comment. For associated types and constrained parameters such as:
template< ObjectType T>
I see you are using the syntax
template( typename(ObjectType) T )
I am currently using the following syntax in Boost.Generic:
template( ((ObjectType)) T )
My (Boost.Contract) syntax uses this for a value template parameter named T and of type ObjectType (leaving the double parentheses aside). So I need the typename prefix in order to distinguish between template template parameters and value template parameters. On this topic, while reading N3351 I was thinking of not supporting this syntax and using requires instead:

template( typename T ) requires( ObjectType<T> )
concept_map (RandomAccessIterator) ( T* )

Because I don't know how to support the following:

template( typename(ObjectType) T, typename(MapType<T>) U )
concept_map (RandomAccessIterator) ( T* )

where U is constrained by MapType<U, T>. The only way I can think to do this is typename(MapType<mpl::_, T>) U (then it'd be up to Boost.Generic to deal with mpl::_ used this way). Does this use case apply to associated types?
This is for a couple of reasons. First, it's concise and closer to the standard syntax, but also because in the end that syntax should be able to be used for unary concepts whose parameter is not a type, in which case "typename" doesn't really make sense.
Thanks, --Lorenzo

On Sat, Oct 13, 2012 at 9:38 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Because I don't know how to support the following:
template( typename(ObjectType) T, typename(MapType<T>) U )
concept_map (RandomAccessIterator) ( T* )

where U is constrained by MapType<U, T>. The only way I can think to do this is typename(MapType<mpl::_, T>) U (then it'd be up to Boost.Generic to deal with mpl::_ used this way). Does this use case apply to associated types?
This applies to using "auto" in the parameter list in N2914. While I don't support it yet, I was planning on doing it via the preprocessor rather than trying to do it with template metaprogramming, though I haven't thought about it in depth yet:

template( ((ObjectType)) T, (((MapType)( auto, T )) U) )

This corresponds to the N2914 syntax mentioned in [temp.param] on page 312: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2914.pdf -- -Matt Calabrese

On Sat, Oct 13, 2012 at 10:33 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sat, Oct 13, 2012 at 9:38 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Because I don't know how to support the following:
template( typename(ObjectType) T, typename(MapType<T>) U ) concept_map (RandomAccessIterator) ( T* )
Where U is constrained by MapType<U, T>. The only way I can think of to do this is typename(MapType<mpl::_, T>) U (then it'd be up to Boost.Generic to deal with mpl::_ used this way). Does this use case apply to associated types?
This applies to using "auto" in the parameter list in N2914. While I don't support it yet, I was planning on doing it via the preprocessor rather than trying to do it with template metaprogramming,
This is also better because the pp parser can get all the "traits" of the declaration (instead of leaving the substitution of auto/mpl::_ to the back-end). This way different back-ends can do different things with the parsed pp "traits".
though I haven't thought about it in depth yet:
template( ((ObjectType)) T, (((MapType)( auto, T )) U) )
It will have to be something like that, maybe with typename to distinguish it from value template parameters as I mentioned before:
template( typename(ObjectType) T, typename(MapType)(auto, T) U ) // (1)
... Or, as I was saying, can't we just require the use of requires ;) instead:
template( typename T, typename U ) requires( ObjectType<T>, (MapType<U, T>) ) // (2)
... Does Boost.Generic need ObjectType and MapType separately (as per 1) or can it use their instantiations ObjectType<T> and MapType<U, T> (as per 2)?
This corresponds to the N2914 syntax mentioned in [temp.param] on page 312 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2914.pdf
--Lorenzo

On Sun, Oct 14, 2012 at 4:40 AM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
It will have to be something like that, maybe with typename to distinguish from value template parameters as I mentioned before:
I have a little bit of template magic that should allow me to handle this. I can detect whether a concept name corresponds to a unary template with a type parameter as opposed to a non-type template parameter via an overloading of certain function templates that occurs when you create the initial concept. This isn't in yet, but proof-of-concepts seem to work. The only complication is that since the check occurs at compile time (but not during preprocessing), the generated code needs to be... complicated. It should in theory relieve the user of having to make that specification, though. Anyway, since probably 99% of the time the user is working with concepts that have type parameters, I think we should aim to support that case in a concise manner. I.E. instead of requiring the user to write "typename" for those concepts with type parameters, support it without typename and instead require some alternative syntax for concepts with non-type parameters, in case I can't get it all to work behind the scenes with the same preprocessor syntax.
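For illustration, a minimal C++11 sketch of that kind of detection, assuming the concept is ultimately backed by a unary class template (all names here are hypothetical, not Boost.Generic's actual machinery); overload resolution simply discards the probe whose template-template parameter kind doesn't match:

    #include <type_traits>

    // Two probes differing only in the kind of their template-template
    // parameter; SFINAE removes the one the concept template cannot match.
    template<template<typename> class Concept> std::true_type  takes_type_param(int);
    template<template<int> class Concept>      std::false_type takes_type_param(int);

    template<typename> struct ObjectTypeImpl {};  // unary concept over a type
    template<int>      struct AlignedToImpl {};   // unary concept over a value

    static_assert(decltype(takes_type_param<ObjectTypeImpl>(0))::value,
                  "detected: type parameter");
    static_assert(!decltype(takes_type_param<AlignedToImpl>(0))::value,
                  "detected: non-type parameter");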
Or, as I was saying, can't we just require the use of requires ;) instead:
template( typename T, typename U ) requires( ObjectType<T>, (MapType<U, T>) ) // (2)
Yeah, and that should be the main focus since it's easier to implement.
Does Boost.Generic need ObjectType and MapType separately (as per 1) or it can use their instantiations Object<T> and MapType<U, T> (as per 2)?
In most contexts you can use their instantiations directly, and you should be able to do so there. One place that may need to change is when dealing with refinement (due to the changes I'm currently making, but I may be able to come up with a hack). For example, in this concept: http://generic.nfshost.com/generic/standard_concepts/concepts/arithmeticlike... I refer to less-refined concepts I.E. HasNegate<T>, but in order to make explicit concept map templates for more-refined concepts work for less-refined concepts in a truly N2914-compliant way, I may need to change the syntax when doing refinement to something like (HasNegate)(T). -- -Matt Calabrese

On Sun, Oct 14, 2012 at 2:07 AM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sun, Oct 14, 2012 at 4:40 AM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
It will have to be something like that, maybe with typename to distinguish from value template parameters as I mentioned before:
I have a little bit of template magic that should allow me to handle this. I can detect whether a concept name corresponds to a unary template with a type parameter as opposed to a non-type template parameter via an overloading of certain function templates that occurs when you create the initial concept. This isn't in yet, but proof-of-concepts seem to work. The only complication is that since the check occurs at compile time (but not during preprocessing), the generated code needs to be... complicated. It should in theory relieve the user of having to make that specification, though.
I'd prefer the syntax to allow the pp to distinguish between the different kinds of template params (that might not be required for Boost.Generic but might be required by other back-ends). In any case, I understand your point; let's continue this discussion if/when I start implementing the macros -- at that time we will finalize all the details of the syntax, including this one.
Anyway, since probably 99% of the time the user is working with concepts that have type parameters, I think we should aim to support that case in a concise manner. I.E. instead of requiring the user to write "typename" for those concepts with type parameters, support it without typename and instead require some alternative syntax for concepts with non-type parameters, in case I can't get it all to work behind the scenes with the same preprocessor syntax.
Or, as I was saying, can't we just require the use of requires ;) instead:
template( typename T, typename U ) requires( ObjectType<T>, (MapType<U, T>) ) // (2)
Yeah, and that should be the main focus since it's easier to implement.
And maybe we just support the requires clause and we don't have to worry about typename or no typename in the template parameter signature...
Does Boost.Generic need ObjectType and MapType separately (as per 1) or it can use their instantiations Object<T> and MapType<U, T> (as per 2)?
In most contexts you can use their instantiations directly, and you should be able to do so there. One place that may need to change is when dealing with refinement (due to the changes I'm currently making, but I may be able to come up with a hack). For example, in this concept:
http://generic.nfshost.com/generic/standard_concepts/concepts/arithmeticlike...
I refer to less-refined concepts I.E. HasNegate<T>, but in order to make explicit concept map templates for more-refined concepts work for less-refined concepts in a truly N2914-compliant way, I may need to change the syntax when doing refinement to something like (HasNegate)(T).
--Lorenzo

On Sun, Oct 14, 2012 at 1:30 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
And maybe we just support the requires clause and we don't have to worry about typename or no typename in the template parameter signature...
I already support the syntax for associated types and it works fine for unary type concepts ( see http://generic.nfshost.com/generic/standard_concepts/container_concepts/cont... ), so I don't think it's that much of a stretch to eventually apply it to template/concept parameter lists, I just don't think it should be a priority at this time. As for disambiguation, we could always just use "constexpr" to mean "value" as opposed to typename, and we could use "template" for template templates. "typename" should probably just always be implied since most users would never use anything other than type concepts in that manner. Anyway, all of this is stuff to worry about later. -- -Matt Calabrese

On Sun, Oct 14, 2012 at 1:56 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sun, Oct 14, 2012 at 1:30 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
And maybe we just support the requires clause and we don't have to worry about typename or no typename in the template parameter signature...
I already support the syntax for associated types and it works fine for unary type concepts ( see http://generic.nfshost.com/generic/standard_concepts/container_concepts/cont... ),
Woops, I just noticed I'm missing "empty" and "size," not sure how I managed that. -- -Matt Calabrese

on Sat Oct 13 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
On Sat, Oct 13, 2012 at 2:40 PM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
Of course this is achievable only with concepts as a language feature. Any use-pattern-based concept library will not be able to implement this behavior, as far as I am aware.
On the other hand, a pseudo-signature-based library conceivably could.
And since this discussion started because Lorenzo wonders which of the two approaches to take for his library, perhaps this limitation gives us the answer.
... which is sounding more and more like Lorenzo should leave N3351 alone, help Matt by at least implementing front-end macros like the ones below (s/CONTRACT_CONCEPT/BOOST_GENERIC_CONCEPT or similar), and use concepts defined with Boost.Generic in Boost.Contract's requires clause.
http://contractpp.svn.sourceforge.net/viewvc/contractpp/trunk/doc/html/contr...
Caveat: I do doubt that you can do the forced conversions in a library without loss of efficiency, but it's worth a try anyhow. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

On Sat, Oct 13, 2012 at 9:06 PM, Dave Abrahams <dave@boostpro.com> wrote:
Caveat: I do doubt that you can do the forced conversions in a library without loss of efficiency, but it's worth a try anyhow.
Yeah, there are likely going to be a lot of caveats concerning library-emulated constrained templates, since emulation generally implies wrapping the types and probably explicitly qualifying calls to associated functions. It may end up not really being feasible, but we'll figure that out when we get there. At the very least, we can get automatic archetype generation and some checking, which is still useful. Properly constrained templates may end up being like my approach to library-emulated concept overloads (that is, technically feasible, but ultimately cumbersome and possibly not worth it). -- -Matt Calabrese
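To make "automatic archetype generation and some checking" concrete, here is the kind of archetype such a library could emit, hand-written for a toy LessThanComparable concept (all names hypothetical); explicitly instantiating an algorithm with it type-checks the algorithm body against exactly the stated requirements:

    // Archetype: provides *only* what the concept states, nothing more.
    // (A real generator would also suppress copying unless required.)
    struct less_than_comparable_archetype {
        less_than_comparable_archetype() = delete;  // default construction not required
    };
    // Declared, never defined: only the signature matters for type-checking.
    bool operator<(less_than_comparable_archetype const&,
                   less_than_comparable_archetype const&);

    template<typename T>
    T const& min_(T const& a, T const& b) { return b < a ? b : a; }

    // Forcing instantiation catches any use of an operation the concept
    // does not guarantee (e.g. operator> or copying) at compile time:
    template less_than_comparable_archetype const&
    min_(less_than_comparable_archetype const&,
         less_than_comparable_archetype const&);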

On Sat, Oct 13, 2012 at 6:26 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Sat, Oct 13, 2012 at 9:06 PM, Dave Abrahams <dave@boostpro.com> wrote:
Caveat: I do doubt that you can do the forced conversions in a library without loss of efficiency, but it's worth a try anyhow.
Yeah, there are likely going to be a lot of caveats concerning library-emulated constrained templates, since emulation generally implies wrapping the types and probably explicitly qualifying calls to associated functions. It may end up not really being feasible, but we'll figure that out when we get there. At the very least, we can get automatic archetype generation and some checking, which is still useful. Properly constrained templates may end up being like my approach to library-emulated concept overloads (that is, technically feasible, but ultimately cumbersome and possibly not worth it).
Well, it just means that even if we are able to implement the lib, there are still reasons to put concepts into the language (and the same goes for contracts!). Hopefully having the lib will boost ;) people into using concepts in their code so it'd be easier to push for their standardization. --Lorenzo

On Sat, Oct 13, 2012 at 9:45 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Well, it just means that even if we are able to implement the lib, there are still reasons to put concepts into the language (and the same goes for contracts!). Hopefully having the lib will boost ;) people into using concepts in their code so it'd be easier to push for their standardization.
It also means that there will be a somewhat portable, usable solution for compilers that implement C++11 before they get around to supporting the concepts of the next standard (look at how far behind MSVC is with C++11 compared to GCC and Clang...). Perhaps when C++1y is a standard, MSVC will be able to handle Boost.Generic, and by the time C++2whatever is a standard, they will finally support C++1y concepts. I figure that gives us a good 10 years of usability at least :p -- -Matt Calabrese

on Sat Oct 13 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
I was trying to say that pseudo-signatures look like normal signatures and might imply that no "loose match" occurs. In contrast, usage patterns look like expressions, and you do expect implicit conversions. Although, I do not find it a major argument against pseudo-signatures.
Notationally speaking, I think pseudo-signatures are *much* more suggestive of those semantics than are valid expressions.
Could you show an example where this is the case?
The example you gave illustrates it, IMO. This is obviously subjective, but when you read the requirements as saying "this expression must be convertible to bool" there's no obvious reason that when the expression appears in a larger context, that conversion necessarily happens.
I may be missing something obvious, but I would say it is the other way around.
All you have to do is think of the concept and its pseudo-signatures as conceptually defining a wrapper interface over the concrete model of the concept, through which the constrained function has all interactions with the model, and the implicit conversions fall out as a consequence of regular language rules. If you follow this mental model for the concepts mechanism, many things (some outside the scope of this discussion) fall into place logically... or at least they do for me. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

2012/10/14 Dave Abrahams <dave@boostpro.com>
on Sat Oct 13 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
I was trying to say that pseudo-signatures look like normal signatures and might imply that no "loose match" occurs. In contrast, usage patterns look like expressions, and you do expect implicit conversions. Although, I do not find it a major argument against pseudo-signatures.
Notationally speaking, I think pseudo-signatures are *much* more suggestive of those semantics than are valid expressions.
Could you show an example where this is the case?
The example you gave illustrates it, IMO. This is obviously subjective, but when you read the requirements as saying "this expression must be convertible to bool" there's no obvious reason that when the expression appears in a larger context, that conversion necessarily happens.
I may be missing something obvious, but I would say it is the other way around.
All you have to do is think of the concept and its pseudo-signatures as conceptually defining a wrapper interface over the concrete model of the concept, through which the constrained function has all interactions with the model, and the implicit conversions fall out as a consequence of regular language rules. If you follow this mental model for the concepts mechanism, many things (some outside the scope of this discussion) fall into place logically... or at least they do for me.
Can you help me understand one thing about pseudo-signatures? If I have the following concept:

concept MyIter<typename It> {
    It& operator++(It&);
    bool It::is_valid();
}

Does this say preincrement returns exactly a reference to It, or only something convertible to It? If it is the latter, it would mean that a concept model whose pre-increment only returns something convertible to It& satisfies the concept, but makes the following usage invalid:

template <MyIter It> void test_next(It it) { return (++it).is_valid(); }

Or am I wrong? Regards, &rzej

On Sun, Oct 14, 2012 at 12:23 AM, Andrzej Krzemienski <akrzemi1@gmail.com>wrote: [...]
Can you help me understand one thing about pseudo-signatures? If I have the following concept:
concept MyIter<typename It> { It& operator++(It&); bool It::is_valid(); }
Does this say preincrement returns exactly reference to It or only something convertible to It? If it is the latter, it would mean that the concept model where pre-increment only returns something convertible to It& satisfies the concept, but makes the following usage invalid:
template <MyIter It> void test_next(It it) { return (++it).is_valid(); }
Or am I wrong?
My understanding, based only on reading this thread, is that, since It is declared as a model of MyIter (I'm not sure what the correct terminology is to express the relationship between It and MyIter within the scope of a "template <MyIter It>" declaration, but that's what I mean by "is declared as a model of"), ++it in the above context refers to the (pseudo-?)signature declared in the MyIter concept definition (hence has return type It&). The operator++ within the MyIter concept definition implicitly (by default?) uses the operator++ of It, plus it adds an (implicit) conversion of the result to an It&, if necessary. So I would think the body of test_next would be entirely valid (no pun intended)... modulo the attempted bool -> void conversion :) Aside: This discussion, of which I've only been a casual observer, has definitely been interesting. - Jeff

on Sun Oct 14 2012, "Jeffrey Lee Hellrung, Jr." <jeffrey.hellrung-AT-gmail.com> wrote:
On Sun, Oct 14, 2012 at 12:23 AM, Andrzej Krzemienski <akrzemi1@gmail.com>wrote: [...]
Can you help me understand one thing about pseudo-signatures? If I have the following concept:
concept MyIter<typename It> { It& operator++(It&); bool It::is_valid(); }
Does this say preincrement returns exactly reference to It or only something convertible to It? If it is the latter, it would mean that the concept model where pre-increment only returns something convertible to It& satisfies the concept, but makes the following usage invalid:
template <MyIter It> void test_next(It it) { return (++it).is_valid(); }
Or am I wrong?
My understanding, based only on reading this thread, is that, since It is declared as a model of MyIter (I'm not sure what the correct terminology is to express the relationship between It and MyIter within the scope of a "template <MyIter It>" declaration, but that's what I mean by "is declared as a model of"), ++it in the above context refers to the (pseudo-?)signature declared in the MyIter concept definition (hence has return type It&).
Precisely.
The operator++ within the MyIter concept definition implicitly (by default?) uses the operator++ of It, plus it adds an (implicit) conversion of the result to an It&, if necessary.
Precisely again.
So I would think the body of test_next would be entirely valid (no pun intended)...modulo the attempted bool -> void conversion :)
:-)
Aside: This discussion, of which I've only been a casual observer, has definitely been interesting.
Glad to help. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

on Sun Oct 14 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
2012/10/14 Dave Abrahams <dave@boostpro.com>
on Sat Oct 13 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
I was trying to say that pseudo-signatures look like normal signatures and might imply that no "loose match" occurs. In contrast, usage patterns look like expressions, and you do expect implicit conversions. Although, I do not find it a major argument against pseudo-signatures.
Notationally speaking, I think pseudo-signatures are *much* more suggestive of those semantics than are valid expressions.
Could you show an example where this is the case?
The example you gave illustrates it, IMO. This is obviously subjective, but when you read the requirements as saying "this expression must be convertible to bool" there's no obvious reason that when the expression appears in a larger context, that conversion necessarily happens.
I may be missing something obvious, but I would say it is the other way around.
All you have to do is think of the concept and its pseudo-signatures as conceptually defining a wrapper interface over the concrete model of the concept, through which the constrained function has all interactions with the model, and the implicit conversions fall out as a consequence of regular language rules. If you follow this mental model for the concepts mechanism, many things (some outside the scope of this discussion) fall into place logically... or at least they do for me.
Can you help me understand one thing about pseudo-signatures? If I have the following concept:
concept MyIter<typename It> { It& operator++(It&); bool It::is_valid(); }
Does this say preincrement returns exactly reference to It or only something convertible to It?
It says that
* the preincrement of a /model/ of MyIter must return something convertible to It&
* when a function constrained to use the MyIter concept preincrements an instance of MyIter, it sees exactly It& returned, regardless of what is actually returned by the model's preincrement operation.
If it is the latter, it would mean that the concept model where pre-increment only returns something convertible to It& satisfies the concept, but makes the following usage invalid:
template <MyIter It> void test_next(It it) { return (++it).is_valid(); }
Or am I wrong?
By the 2nd bullet, test_next sees (++it) as having type It&, regardless of what is actually returned by the model's preincrement operator. That is, a conversion is forced if the return type is not exactly It&. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost
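In a library emulation, the forced conversion Dave describes can be sketched in C++11 roughly like this (hypothetical names; a stand-in for what a generated concept map would do, with test_next's return type corrected to bool):

    template<typename It>
    struct MyIterMap {   // stand-in for the concept map of MyIter
        static It& preincrement(It& it) {
            return ++it; // whatever the model's ++ returns, the declared
        }                // return type forces conversion to exactly It&
        static bool is_valid(It& it) {
            return it.is_valid(); // likewise forced to exactly bool
        }
    };

    template<typename It> // "template <MyIter It>" in pseudo-signature syntax
    bool test_next(It it) {
        return MyIterMap<It>::is_valid(MyIterMap<It>::preincrement(it));
    }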

2012/10/14 Dave Abrahams <dave@boostpro.com>
on Sun Oct 14 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
Can you help me understand one thing about pseudo-signatures? If I have the following concept:
concept MyIter<typename It> { It& operator++(It&); bool It::is_valid(); }
Does this say preincrement returns exactly reference to It or only something convertible to It?
It says that
* the preincrement of a /model/ of MyIter must return something convertible to It&
* when a function constrained to use the MyIter concept preincrements an instance of MyIter, it sees exactly It& returned, regardless of what is actually returned by the model's preincrement operation.
Thanks for this explanation. Now I feel I finally understand pseudo-signatures. Regards, &rzej

On Thu, Oct 11, 2012 at 5:25 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Look for example at http://www.sgi.com/tech/stl/find_if.html. I can easily satisfy all the stated requirements and produce an example that won't compile. I can also easily fail to satisfy the stated requirements and produce an example that *will* compile. That's because there's a great deal of potential mischief hiding in expressions such as
f(*p)
Can you produce those examples? You've made me curious.
Andrew, what were the reasons in N3351 to move away from pseudo-signatures and toward usage-patterns? I read N3351, but if I had to summarize the rationale for the usage-pattern approach instead of C++0x concepts' pseudo-signatures, I'd say that N3351's argument is that its concept design for STL algorithms is "simpler" using usage-patterns than using C++0x pseudo-signatures. However, "simpler" is a subjective metric... I'm sure I'm missing something; maybe reading N3351 one more time will clarify my thinking. Thanks a lot! --Lorenzo P.S. IMO, this discussion now belongs in the Boost ML given that there are two potential Boost libs that are considering implementing concepts. N3351 and other concept proposal authors should join the discussion on the Boost ML if at all possible so they will be able to see their ideas and proposals implemented in the language (even if just in the form of a lib).

on Thu Oct 11 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
On Thu, Oct 11, 2012 at 5:25 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Look for example at http://www.sgi.com/tech/stl/find_if.html. I can easily satisfy all the stated requirements and produce an example that won't compile. I can also easily fail to satisfy the stated requirements and produce an example that *will* compile. That's because there's a great deal of potential mischief hiding in expressions such as
f(*p)
Can you produce those examples? You've made me curious.
Andrew, what were the reasons in N3351 to move away from pseudo-signatures and toward usage-patterns?
I read N3351, but if I had to summarize the rationale for the usage-pattern approach instead of C++0x concepts' pseudo-signatures, I'd say that N3351's argument is that its concept design for STL algorithms is "simpler" using usage-patterns than using C++0x pseudo-signatures. However, "simpler" is a subjective metric... I'm sure I'm missing something; maybe reading N3351 one more time will clarify my thinking.
One other thing about this rationale: it doesn't appear to consider what's simple for someone trying to write a generic algorithm. It's easy to maintain that rationale when you don't have an implementation of algorithm type-checking to force rigor on you. When you start working with an actual implementation, and it refuses to compile "obviously right" things like f(*p) because in fact, the requirements as stated do not guarantee that it *should* compile, then your idea of what's simple may change. A usable system has to be simple both for programmers modeling concepts and for programmers using those concepts in generic code. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

On Thu, Oct 11, 2012 at 5:15 PM, Andrew Sutton <asutton.list@gmail.com> wrote:
Andrew, what were the reasons in N3351 to move away from pseudo-signatures and toward usage-patterns?
Not really. That decision was made before I started working on it.
OK, thanks :) Who shall I ask? Stroustrup? I'm in a very neutral position myself, I just want to understand pros and cons for both methods. --Lorenzo

2012/10/11 Dave Abrahams <dave@boostpro.com>
on Wed Oct 10 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
On Tue, Oct 9, 2012 at 7:16 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Oct 09 2012, Lorenzo Caminiti <lorcaminiti-AT-gmail.com> wrote:
I made a first attempt to sketch how N3351-like concepts will look into Boost.Contract (just for the find algorithm and its concepts for now): [...] Please tell me what you think now
Since you asked what I think,
"pseudo-signatures > usage patterns"
Hello all,
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
Who wants to start? Matt, Dave, Andrzej, ... I can compile a qbk table with what we discuss.
Matt C. already mentioned the ease of producing a new model of a given concept. I will mention that usage patterns tend to leave unstated assumptions about intermediate associated types that make it very hard to be sure you're saying what you really mean, and tend to lead to semantically-vague requirements. Look for example at http://www.sgi.com/tech/stl/find_if.html. I can easily satisfy all the stated requirements and produce an example that won't compile. I can also easily fail to satisfy the stated requirements and produce an example that *will* compile. That's because there's a great deal of potential mischief hiding in expressions such as
f(*p)
[For example, what is the relationship between the type of *p and the iterator's value type? What is the relationship between that type and the argument type of the function object? Even if the reference type of p is convertible to the argument type of f, the call can still be ambiguous... etc...]
This is a perfect example of what you get when you deal in usage patterns, and the C++98 (and even the '03) standard is full of these problems. One of the most important features of the pseudo-signature approach is that it injects implicit conversions to (explicitly-stated) associated types, so these issues don't arise.
Note that N3351 addressed this problem of specifying expression types. You can express this as:

requires(Iter it) {
    Iter& == {++it};             // is_same<decltype(++it), Iter&>::value
    ValueType<Iter>& == {*++it}; // is_same<decltype(*++it), ValueType<Iter>&>::value
}

Regards, &rzej
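Something close to that same-type requirement can even be checked in C++11 today; a sketch, using iterator_traits as a stand-in for N3351's ValueType:

    #include <iterator>
    #include <type_traits>

    template<typename Iter>
    void check_iter(Iter it) {
        using Value = typename std::iterator_traits<Iter>::value_type;
        static_assert(std::is_same<decltype(++it), Iter&>::value,
                      "++it must have exactly the type Iter&");
        static_assert(std::is_same<decltype(*++it), Value&>::value,
                      "*++it must have exactly the type ValueType<Iter>&");
    }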

"pseudo-signatures > usage patterns"
Hello all,
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
I think such a comparison looks different when you want to provide the best solution for STL and different when you want to provide the solution for *every* generic library. STL is particular in two ways: first, it has been in use for years, and any solution needs to fit seamlessly into the existing code (even if the code doesn't adhere to the best practices of generic programming); second, STL uses operators heavily. I expect no other generic library to make such extensive use of operators. I have never tried to implement support for any type of concepts, so I am only looking at the problem from the end user's perspective; arguments like "pseudo-signatures make it easier to generate archetypes" do not appeal to me that much (which admittedly may be an ignorant approach). Informally, I can say that I expect of an iterator a post- and pre-increment, comparison, and dereference. With pseudo-signatures I express it as:

Iter& operator++(Iter);
Iter operator++(Iter, int); // note the dummy int
bool operator==(Iter const&, Iter const&); // bool or convertible to bool?
ValueType<Iter>& operator*(Iter const&);

But how do I say "convertible to bool"? With usage-patterns, I type:

Iter& == { ++it };
Iter == { it++ };
bool = { it == jt }; // convertible
ValueType<Iter>& == { *it };

To me, the latter notation appears more appealing: it is shorter, and it gives me a concise way of specifying "has exactly this type" or "is implicitly convertible to" or "is explicitly convertible to". But this elegance may be due only to the fact that I am using operators. For other concepts that do not want operators it might have been uglier:

Socket == { get_socket(env, params) };

vs.

Socket get_socket(Env, Params);

Next, when I add two numbers, with pseudo-signatures I type:

T operator+(T const&, T const&);

vs:

T == { a + b };

But does the following function:

X operator+(X& a, X& b); // mutable references

satisfy the pseudo-signature requirement or not? I know it does satisfy the usage pattern, and not every programmer is (has the luxury to be) const-correct. Next, for OutputIterator, I want the operation (*it = v) to be valid. But I never intend to use the assignment alone or the indirection alone. With usage-patterns, I can write:

*it = v; // cannot use the result of the assignment

With pseudo-signatures, I need to specify at least two declarations, and I need to specify what operator* returns, even though it should be an implementation detail:

typename DerefResult;
DerefResult operator*(Iter&);
void operator=(DerefResult, ValueType<Iter>);

But again, maybe this operation should be output(it, v), and it is STL's limitation that it uses a combination of operators for a sort-of 'atomic' operation. Regards, &rzej
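For what it's worth, all three flavors Andrzej distinguishes can be spelled out with C++11 type traits; a hedged sketch (the name check is made up):

    #include <type_traits>

    template<typename Iter>
    void check(Iter it, Iter jt) {
        // "has exactly this type":
        static_assert(std::is_same<decltype(++it), Iter&>::value, "");
        // "is implicitly convertible to":
        static_assert(std::is_convertible<decltype(it == jt), bool>::value, "");
        // "is explicitly convertible to" (direct-initialization suffices):
        static_assert(std::is_constructible<bool, decltype(it == jt)>::value, "");
    }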

Next, for OutputIterator, I want operation (*it = v) to be valid. But I never intend to use the assignment alone or the indirection alone. With usage-patterns, I can write:
*it = v; // cannot use the result of the assignment
With pseudo signatures, I need to specify at least two declarations, and I need to specify what the operator* returns, even though it should be an implementation detail:
typename DerefResult; DerefResult operator*(Iter&); void operator=(DerefResult, ValueType<Iter>);
But again, maybe this operation should be output(it, v), and it is STL's limitation that it uses a combination of operators for a sort-of 'atomic' operation.
Neither pro nor con, and I'm not sure if this will add anything to the debate, but Marcin and I had a long discussion, as we were writing the TR, about requirements involving compound expressions. I think we ended up with the idea that you can always split a compound expression into sub-expressions using auto and &&:

auto&& r = *i;
r = x;

I believe that this would be essentially equivalent to what would be declared using pseudo-signatures. That is, an unspecified, deduced type name whose only requirement is to support assignment.
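Written out as a checkable C++11 function template (a sketch; the name output_check is made up), the split looks like:

    template<typename Iter, typename Value>
    void output_check(Iter it, Value v) {
        auto&& r = *it; // the deduced, otherwise-anonymous intermediate type
        r = v;          // its only requirement: assignable from Value
    }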

On Thu, Oct 11, 2012 at 7:09 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Next, for OutputIterator, I want operation (*it = v) to be valid. But I never intend to use the assignment alone or the indirection alone. With usage-patterns, I can write:
*it = v; // cannot use the result of the assignment
With pseudo signatures, I need to specify at least two declarations, and I need to specify what the operator* returns, even though it should be an implementation detail:
typename DerefResult; DerefResult operator*(Iter&); void operator=(DerefResult, ValueType<Iter>);
But again, maybe this operation should be output(it, v), and it is STL's limitation that it uses a combination of operators for a sort-of 'atomic' operation.
Neither pro nor con, and I'm not sure if this will add anything to the debate, but Marcin and I had a long discussion, as we were writing the TR about requirements involving compound expressions. I think we ended up with the idea that you can always split a compound expression into sub-expressions using auto and &&.
auto&& r = *i; r = x;
I believe that this would be essentially equivalent to what would be declared using pseudo-signatures. That is, an unspecified, deduced type name whose only requirement is to support assignment.
This rough equivalence has been known and understood since the initial concepts proposals were discussed. There are some fiddly details with parameter passing: is it "as if" we're passing by lvalue reference? const lvalue reference? rvalue reference? Pseudo-signatures make these details explicit, but usage patterns have always been underspecified in this regard. Back to the intermediate type of *i... with pseudo-signatures, you need to name this intermediate type, while usage patterns allow the type to remain anonymous. I think there is significant value in having to name this intermediate type: for one, the compiler can use this type in error messages that occur while type-checking constrained templates, rather than having to spew out something like 'decltype(*i)' or 'the type of the expression *i'. Moreover, giving the type a name helps both the concept author and the concept user understand the role of this intermediate type. I find it instructive that, even in the original, usage-pattern-based description of the STL, the intermediate types are still called out and given names (the result of *i is the reference type). - Doug
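In C++11 terms, naming the intermediate type amounts to something like the following (hypothetical trait; a pseudo-signature's associated type plays this role in the language feature):

    #include <utility>

    template<typename Iter>
    struct output_iterator_traits {
        // Name the result of *i once, instead of spelling
        // decltype(*i) at every use site and in every diagnostic:
        using reference = decltype(*std::declval<Iter&>());
    };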

on Thu Oct 11 2012, Andrzej Krzemienski <akrzemi1-AT-gmail.com> wrote:
"pseudo-signatures > usage patterns"
Hello all,
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
I think such comparison looks different when you want to provide the best solution for STL and different when you want to provide the solution for *every* generic library. STL is particular in two ways: first, it has been in use for years, and any solution needs to fit seamlessly into the existing code (even if the code doesn't adhere to the best practices of generic programming); second, STL uses operators heavily. I expect no other generic library to make such extensive use of operators.
That's a very surprising statement. Many applications of genericity seem to fall into mathematical domains.
I have never tried to implement support for any type of concepts, so I am only looking at the problem from end user's perspective; arguments like "pseudo-signatures make it easier to generate archetypes" do not appeal to me that much (which admittedly may be an ignorant approach).
Informally, I can say that I expect of an iterator a post- and pre-increment, comparison, and dereference. With pseudo signatures I express it as:
Iter& operator++(Iter);
Iter operator++(Iter, int); // note the dummy int
bool operator==(Iter const&, Iter const&); // bool or convertible to bool?
ValueType<Iter>& operator*(Iter const&);
But how do I say "convertible to bool"?
You just said it :-)
With usage-patterns, I type:
Iter& == { ++it };
Iter == { it++ };
bool = { it == jt }; // convertible
ValueType<Iter>& == { *it };
To me, the latter notation appears more appealing: it is shorter, it gives me a concise way of specifying "has exactly this type" or "is implicitly convertible to" or "is explicitly convertible to".
But you don't want to say those things. When you say "is implicitly convertible" and you don't have the pseudo-signature loose/tight mechanism behind it, the obvious, clean way to express your algorithms doesn't compile, and you need to uglify them by inserting explicit conversions. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
I think it would be more productive to start writing concepts in whatever system seems appropriate. Pick a library; write the concepts and their requirements in a way that feels natural to you. Don't forget the semantics. Figure out what you need to say, and try to find the most effective and least verbose way to say it. Try writing it another way. Maybe you'll end up with ideas different from either use-patterns or pseudo-signatures. It's more important to know the range of things that need to be said than how to say them.

On Thu, Oct 11, 2012 at 6:01 PM, Andrew Sutton <asutton.list@gmail.com> wrote:
Can we write down pros and cons for concepts implemented via pseudo-signatures (C++0x-like and Boost.Generic) vs. usage patterns (N3351 and Boost.Contract)?
I think it would be more productive to start writing concepts in whatever system seems appropriate.
Hasn't this already been done for the STL (algorithms) by both N2914 and N3351? If we can't settle on a couple of approaches to experiment with and list their advantages/disadvantages, how can we hope to ever get concepts standardized? Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic.
2. Or, I help Matt implement N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
Pick a library; write the concepts and their requirements in a way that feels natural to you. Don't forget the semantics. Figure out what you need to say, and try to find the most effective and least verbose way to say it. Try writing it another way.
Maybe you'll end up with ideas different from either use-patterns or pseudo-signatures.
It's more important to know the range of things that need to be said than how to say them.
Thanks, --Lorenzo

On Thu, Oct 11, 2012 at 9:51 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic.
2. Or, I help Matt implement N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
I'm all for either of these ideas. Also, I've finally started documenting the concepts in Boost.Generic (they are all of the concepts of N2914, though not all are working). It's in the sandbox and uploaded at http://generic.nfshost.com/ , but don't try to build the docs from the sandbox because they rely on some code changes that I have yet to commit. Fixing explicit concept maps that deal with refinement is proving to be a little tricky. I'll prioritize making a simple tutorial so that people have some idea as to how to work with the concepts without looking at the boostcon slides or the library's tests. At the very least, if you click through the concepts that I have documented (all of concepts, support_concepts, and container_concepts), you can see the syntax... albeit littered with little comments, workarounds, and TODOs since the code just references the source.
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
It would be very interesting to do something like implement BGL concepts and constrained algorithms with both N2914 and N3351 approaches through Boost.Generic and Boost.Contract, then encourage people to experiment with the results. -- -Matt Calabrese

Hasn't this already been done for the STL (algorithms) by both N2914 and N3351? If we can't settle on a couple of approaches to experiment with and list their advantages/disadvantages, how can we hope to ever get concepts standardized?
Yes, I know of published descriptions of concepts for the standard library (n2914) and a subset of the standard library (n3351). The standard library is expansive, but I'd like to see serious attempts at deriving concepts for libraries outside the standard. Using C++11, of course.
Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic.
2. Or, I help Matt implement N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
Experimenting is great. This is why I have Origin (https://code.google.com/p/origin/). I've been experimenting with concepts-as-a-library in various forms since 2009, and it only gets you so far. It's very helpful if you want to develop a first pass at concepts for a library, and sometimes it pays off if you need to reason about some language feature interactions. But what aspect of standardization are you hoping to influence through your experiments?

On Fri, Oct 12, 2012 at 6:25 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic.
2. Or, I help Matt implement N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
Experimenting is great. This is why I have Origin (https://code.google.com/p/origin/). I've been experimenting with concepts-as-a-library in various forms since 2009, and it only gets you so far. It's very helpful if you want to develop a first pass at concepts for a library, and sometimes it pays off if you need to reason about some language feature interactions.
This is an extremely important point: emulating the concepts language feature with a library has its limits. Most of the hard problems with concepts, including the hard problem of making the concepts that we write actually model what we want, involve the type checking of template definitions. That type checking can be simulated with archetypes, but it's very hard to write archetypes that are as picky as what a compiler would come up with. That means that the concepts we write can't actually be validated against implementations, so it's hard to have any confidence in those concepts.
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win. - Doug

On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com> wrote:
On Fri, Oct 12, 2012 at 6:25 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic.
2. Or, I help Matt implement N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
Experimenting is great. This is why I have Origin (https://code.google.com/p/origin/). I've been experimenting with concepts-as-a-library in various forms since 2009, and it only gets you so far. It's very helpful if you want to develop a first pass at concepts for a library, and sometimes it pays off if you need to reason about some language feature interactions.
This is an extremely important point: emulating the concepts language feature with a library has its limits. Most of the hard problems with concepts, including the hard problems of making the concepts that we write actually model what we want, involve the type checking of template definitions. That type checking can be simulated with archetypes, but it's very hard to write archetypes that are as picky as what a compiler would come up with. That means that the concepts we write can't actually be validated against implementations, so it's hard to have any confidence in those concepts.
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
This is intriguing. Based on my current experience with ConceptClang (and I could be missing something), I would actually argue that concept model archetypes (CMA) are the most essential component of any implementation of concepts -- N2914 or N3351 or else -- in that they are the essential glue for all components:
1. They link the requirements specified in concept definitions with their uses in the definitions of generic components.
2. They tell constraints satisfaction which concept model to look for or generate. Note that (a) concept model checking reuses constraints satisfaction, and (b) entity reference rebuilding, at instantiation time, is not always necessary, since the need varies based on the design and implementation model.
3. When entity reference rebuilding is supported -- as in ConceptClang's current implementation of N2914 and some flavors of the N3351 implementation -- concrete concept models become particularly essential as well.
In some sense -- and I'm still working on this idea -- an implementation of CMAs can indicate the extent to which explicit concept models are needed to bind to customized implementations. That being said, independently of the concepts design, any parsing and checking of concept definitions should make the implementation of CMAs as easy as possible. There are other issues that I am currently finding with the semantics of checking entity references in restricted scope in N2914, but that is something that I should perhaps leave for another discussion. In fact, I would like to run this by people who would be interested at the next committee meeting (next week). The bottom line is that, if I am right, then implementing the checking properly is a complex task -- either due to the implementation structure of Clang or just the nature of the C++ grammar. (I'm not sure which one yet.) That complexity is, unfortunately, one of the main reasons for the hold-up on deploying an updated version of ConceptClang. Thanks, -- Larisse.
- Doug

On Sat, Oct 13, 2012 at 9:53 AM, Larisse Voufo <lvoufo@indiana.edu> wrote:
On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com>wrote:
On Fri, Oct 12, 2012 at 6:25 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic.
2. Or, I help Matt implement N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
Experimenting is great. This is why I have Origin (https://code.google.com/p/origin/). I've been experimenting with concepts-as-a-library in various forms since 2009, and it only gets you so far. It's very helpful if you want to develop a first pass at concepts for a library, and sometimes it pays off if you need to reason about some language feature interactions.
This is an extremely important point: emulating the concepts language feature with a library has its limits. Most of the hard problems with concepts, including the hard problems of making the concepts that we write actually model what we want, involve the type checking of template definitions. That type checking can be simulated with archetypes, but it's very hard to write archetypes that are as picky as what a compiler would come up with. That means that the concepts we write can't actually be validated against implementations, so it's hard to have any confidence in those concepts.
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
This is intriguing. Based on my current experience with ConceptClang (and I could be missing something), I would actually argue that concept model archetypes (CMA) are the most essential component of any implementation of concepts -- N2914 or N3351 or else -- in that they are the essential glue to all components:
1. They link the requirements specified in concept definitions with their uses in the definitions of generic components. 2. They tell constraints satisfaction which concept model to look for or generate. Note that: 1. concept model checking reuses constraints satisfaction, and 2. entity reference rebuilding, at instantiation time, is not always necessary---since the need varies based on the design and implementation model. 3. When entity reference rebuilding is supported -- as in ConceptClang's current implementation of N2914 and some flavors of the N3351 implementation, then concrete concept models become particularly essential as well.
In some sense -- and I'm still working on this idea -- an implementation of CMAs can indicate the extent to which explicit concept models are needed to bind to customized implementations.
That being said, independently of the concepts design, any parsing and checking of concept definitions should make the implementation of CMAs as easy as possible.
There are other issues that I am currently finding with the semantics of checking entity references in restricted scope in N2914, but that is something that I should perhaps leave for another discussion. In fact, I would like to run this by people who would be interested at the next committee meeting (next week). The bottom line is that, if I am right, then implementing the checking properly is a complex task -- either due to the implementation structure of Clang or just the nature of the C++ grammar. (I'm not sure which one yet.)
That complexity is, unfortunately, one of the main reasons for the hold up on deploying an updated version of ConceptClang.
Just to clarify my point above, I find the idea of automatically generating archetype classes intriguing in the sense that they offer an alternative implementation for CMAs, in contrast to what ConceptClang is currently doing. However, I still wonder if the class template / class template specialization approach to representing concept definitions / concept maps is as powerful as the approach of treating concepts as first-class entities of the language. On the other hand, the class template approach removes the need to rebuild entity references at instantiation time. An alternative that I see that still uses archetypes is for archetype classes to be generated as in the Caramel system. I find this particularly useful in checking constrained template definitions because it allows doing that non-intrusively (or at least less intrusively than the current implementation). That is, instead of checking every dependent entity reference in the body of a template (with all the complexity that comes with it, as I'm finding), appropriate archetype classes can simply be instantiated and used on the template upon parsing the template. However, I am not exactly sure how this approach couples with the checking of the remaining components (concept defns, models, template uses, etc...). I suspect that it will only be useful if concept maps are not supported and constraints are only intended to serve as predicates rather than provide a scope for name resolution (hence, no need to rebuild entity references). Otherwise, it will be unnecessary with any effort to complement its use. I hope this makes sense somehow...
Thanks, -- Larisse.
- Doug

on Sat Oct 13 2012, Larisse Voufo <lvoufo-AT-indiana.edu> wrote:
On Sat, Oct 13, 2012 at 9:53 AM, Larisse Voufo <lvoufo@indiana.edu> wrote:
On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com>wrote:
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
This is intriguing. Based on my current experience with ConceptClang (and I could be missing something), I would actually argue that
^^^^^^^^
This word seems to imply that you think you are contradicting Doug, but to me it sounds like what you say here reinforces his statement. The only way I can make sense of this is to assume you mean something different by "concept model archetypes" than Doug and I mean when we say "archetypes." Unfortunately, although you're wielding it as though it were a term of art, you haven't defined it, so it's hard to be sure...
concept model archetypes (CMA) are the most essential component of any implementation of concepts -- N2914 or N3351 or else -- in that they are the essential glue to all components:
1. They link the requirements specified in concept definitions with their uses in the definitions of generic components. 2. They tell constraints satisfaction which concept model to look for or generate. Note that: 1. concept model checking reuses constraints satisfaction, and 2. entity reference rebuilding, at instantiation time, is not always necessary---since the need varies based on the design and implementation model. 3. When entity reference rebuilding is supported -- as in ConceptClang's current implementation of N2914 and some flavors of the N3351 implementation, then concrete concept models become particularly essential as well.
Other terms of art used without definition here: "constraints satisfaction" (one can guess, but you seem to be referring to something very specific, so one could be wrong) and "entity reference rebuilding"
In some sense -- and I'm still working on this idea -- an implementation of CMAs can indicate the extent to which explicit concept models are needed to bind to customized implementations.
"explicit concept models?" "bind to?" "customized implementations?"
That being said, independently of the concepts design, any parsing and checking of concept definitions should make the implementation of CMAs as easy as possible.
There are other issues that I am currently finding with the semantics of checking entity references in restricted scope in N2914, but that is something that I should perhaps leave for another discussion. In fact, I would like to run this by people who would be interested at the next committee meeting (next week). The bottom line is that, if I am right, then implementing the checking properly is a complex task -- either due to the implementation structure of Clang or just the nature of the C++ grammar. (I'm not sure which one yet.)
That complexity is, unfortunately, one of the main reasons for the hold up on deploying an updated version of ConceptClang.
Just to clarify my point above, I find the idea of automatically generating archetype classes intriguing in the sense that they offer an alternative implementation for CMAs, in contrast to what ConceptClang is currently doing.
I don't know what ConceptClang is currently doing, but if (as you seem to imply) you're not doing checking by generating concrete archetypes and instantiating constrained templates with them, then
a. You are setting yourself up for a great deal more work than necessary, because you have to implement a complex algorithm that closely parallels what C++ already does.
b. The semantics of your checking are much more likely to "not make sense" to C++ users today, because the rules will be subtly different.
However, I still wonder if the class template / class template specialization approach to representing concept definitions / concept maps is as powerful as the approach of treating concepts as first class entities of the language.
They're not mutually exclusive. The "first class entities in the language" might intentionally be defined (as they were in N2914) to have the same semantics as existing language features, and thus be implementable in terms of the same code.
On the other hand, the class template approach removes the need to rebuild entity references at instantiation time.
There's that term of art again.
An alternative that I see that still uses archetypes is for archetype classes to be generated as in the Caramel system. I find this particularly useful for checking constrained template definitions because it allows one to do that non-intrusively (or at least less intrusively than the current implementation). That is, instead of checking every dependent entity reference in the body of a template (and all the complexity that comes with it, as I'm finding), appropriate archetype classes can simply be instantiated and applied to the template as it is parsed.
I'm truly surprised to hear (if I'm understanding correctly) that you didn't go down that road in the first place. I think it was well-understood to be the obvious approach.
However, I am not exactly sure how this approach couples with the checking of the remaining components (concept defns, models, template uses, etc.). I suspect that it will only be useful if concept maps are not supported and constraints are only intended to serve as predicates, not to provide a scope for name resolution (hence, no need to rebuild entity references). Otherwise, any effort to complement its use would render it unnecessary.
You mostly lost me with the last paragraph.

--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost

Thanks for the feedback. Hopefully I'm addressing your comments well below. On Sat, Oct 13, 2012 at 5:10 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Sat Oct 13 2012, Larisse Voufo <lvoufo-AT-indiana.edu> wrote:
On Sat, Oct 13, 2012 at 9:53 AM, Larisse Voufo <lvoufo@indiana.edu> wrote:
On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com
wrote:
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
This is intriguing. Based on my current experience with ConceptClang (and I could be missing something), I would actually argue that
^^^^^^^^
This word seems to imply that you think you are contradicting Doug, but to me it sounds like what you say here reinforces his statement.
I don't think I'm contradicting Doug in terms of archetypes and template definitions being important. I am just not sure that his proposed implementation is sufficient, hence the "something".
The only way I can make sense of this is to assume you mean something different by "concept model archetypes" than Doug and I mean when we say "archetypes."
I think that when you guys say "archetypes", you mean both "type archetypes" and "concept model archetypes", and I sometimes find the term ambiguous. I also think you think of them in terms of how they are implemented in ConceptGCC and N2914, that is, as embedded in the checking of template definitions. In other words, there seems to be an implicit assumption that declarations in concept model archetypes corresponding to the template's constraints ought to be injected into the associated template parameter scope. Therefore, the notion of concept model archetype is implicit and doesn't have an explicit representation in the compiler. Further, constraints satisfaction and the binding of entity references, at instantiation time, with respect to model implementations come for free with the treatment of concept models as class template specializations.

In the WGP ConceptClang paper, we made the distinction that:

1. We didn't need type archetypes in ConceptClang since Clang already does a good enough job treating template type parameters as types.
2. We gave concept model archetypes explicit representations as simply concept models satisfied by substituting declarations over from their mapped concepts. This way it is clear to see the two main roles that constraints are supposed to serve: Acting as predicates and providing scope for name resolution.
3. In the body of template definitions, names no longer bind to declarations inside class template specializations. Rather, to declarations in the concept model archetypes, which are basically placeholders for when we have concrete maps at the point of template use.
4. At the point of template use, constraints satisfaction explicitly performs concept model lookup and needs to rebuild entity references that were bound to declarations in archetypes so that they now bind to declarations in the concrete concept models.

I think the paper goes further on key distinctions, but the basic idea is that we have decoupled the notion of concept model archetypes with their implementation, which provides more genericity and allows us to really understand how the design decisions we make affect the different components of concepts: concept definitions, concept models (templates, archetypes, or concrete), constrained template definitions -- w/ constraints specification, and constrained template uses -- with constraints satisfaction, followed by a rebuilding of the references at instantiation time.

Given this distinction, I hope it is now clear how the implementation that Doug suggests is simply a special case of a larger infrastructure. Whether it is the right approach or not, I think some parameters still need to be investigated further. If I am not mistaken, I read a paper at some point that addressed how C++ concepts were not completely expressible in terms of class templates and their specializations. But I have to double-check on that.
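A minimal sketch, with hypothetical names, of the class-template reading being discussed here -- my illustration, not ConceptClang's actual representation. The concept is lowered to a class template, a concrete concept model to an explicit specialization, and the template body's uses bind through the model, so ordinary instantiation performs the "rebuilding" for free:

    #include <cstring>

    // The concept, lowered to a class template with no primary definition:
    template <class T> struct EqualityComparable;

    // A concrete concept model, the moral equivalent of a concept map:
    template <> struct EqualityComparable<const char*> {
        static bool eq(const char* a, const char* b)
        { return std::strcmp(a, b) == 0; }
    };

    // The constrained template binds its use of equality through the model;
    // instantiating it selects the specialization, i.e., the concrete model:
    template <class T>
    bool contains(T* first, T* last, const T& value) {
        for (; first != last; ++first)
            if (EqualityComparable<T>::eq(*first, value)) return true;
        return false;
    }

    int main() {
        const char* words[] = { "alpha", "beta" };
        return contains(words, words + 2, words[1]) ? 0 : 1;
    }

On this reading, an archetype model would be a placeholder specialization that the body is checked against before any concrete model exists, which is where the two roles above (predicate and name-resolution scope) come apart.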
Unfortunately, although you're wielding it as though it were a term of art, you haven't defined it, so it's hard to be sure...
I'm sorry for not clarifying myself well earlier. It is definitely in the ConceptClang paper though. Thanks for pointing out my differing terminology. I am updating it as I get more accustomed to C++ terminology.
concept model archetypes (CMA) are the most essential component of any implementation of concepts -- N2914, N3351, or otherwise -- in that they are the essential glue between all the components:
1. They link the requirements specified in concept definitions with their uses in the definitions of generic components.
2. They tell constraints satisfaction which concept model to look for or generate. Note that (a) concept model checking reuses constraints satisfaction, and (b) entity reference rebuilding, at instantiation time, is not always necessary, since the need varies with the design and implementation model.
3. When entity reference rebuilding is supported -- as in ConceptClang's current implementation of N2914 and some flavors of the N3351 implementation -- concrete concept models become particularly essential as well.
Other terms of art used without definition here: "constraints satisfaction" (one can guess, but you seem to be referring to something very specific, so one could be wrong) and "entity reference rebuilding"
I don't believe "constraints satisfaction" is a term of my making. I borrowed it from previous literature on concepts. I did make up "rebuilding", but not "entity reference" (I don't think). Either way, I am not sure what the right C++-compatible terminology would be. Any idea? I tried to explain the concept above as the rebinding of names from concept model archetypes to concrete concept models. Please let me know if I can be any clearer.
In some sense -- and I'm still working on this idea -- an implementation of CMAs can indicate the extent to which explicit concept models are needed to bind to customized implementations.
"explicit concept models?" -- i.e. explicit modeling mechanism for concepts, as in "explicit concepts". "bind to?" -- e.g. a function call binds arguments to a function declaration. "customized implementations?" -- i.e. implementations in concrete concept models.
That being said, independently of the concepts design, any parsing and checking of concept definitions should make the implementation of CMAs as easy as possible.
There are other issues that I am currently finding with the semantics of checking entity references in restricted scope, in N2914, but that is something that I should perhaps leave for another discussion. In fact, I would like to run this by people who would be interested at the next committee meeting (next week). The bottom line is that, if I am right, then implementing the checking properly is a complex task -- either due to the implementation structure of Clang or just the nature of C++ grammar. (I'm not sure which one yet.)
That complexity is, unfortunately, one of the main reasons for the holdup in deploying an updated version of ConceptClang.
Just to clarify my point above, I find the idea of automatically generating archetype classes intriguing in the sense that such classes offer an alternative implementation of CMAs, in contrast to what ConceptClang is currently doing.
I don't know what ConceptClang is currently doing, but if (as you seem to imply) you're not doing checking by generating concrete archetypes and instantiating constrained templates with them, then
a. You are setting yourself up for a great deal more work than necessary, because you have to implement a complex algorithm that closely parallels what C++ already does.
Actually, I'd argue that the algorithm helps find some incompleteness in previous work. That's the point of ConceptClang to begin with.
b. The semantics of your checking are much more likely to "not make sense" to C++ users today, because the rules will be subtly different.
The semantics are just the same, but implemented more abstractly; i.e., I think of the components of concepts separately from previously proposed implementations.
However, I still wonder if the class template / class template specialization approach to representing concept definitions / concept maps is as powerful as the approach of treating concepts as first class entities of the language.
They're not mutually exclusive. The "first class entities in the language" might intentionally be defined (as they were in N2914) to have the same semantics as existing language features, and thus be implementable in terms of the same code.
That's what I'm not too sure about, just yet... And like I said earlier, I could be missing something.
On the other hand, the class template approach removes the need to rebuild entity references at instantiation time.
There's that term of art again.
Sorry. :)
An alternative that I see that still uses archetypes is for archetype classes to be generated as in the Caramel system. I find this particularly useful for checking constrained template definitions because it allows one to do that non-intrusively (or at least less intrusively than the current implementation). That is, instead of checking every dependent entity reference in the body of a template (and all the complexity that comes with it, as I'm finding), appropriate archetype classes can simply be instantiated and applied to the template as it is parsed.
I'm truly surprised to hear (if I'm understanding correctly) that you didn't go down that road in the first place. I think it was well-understood to be the obvious approach.
This is a different alternative which is not like Doug's suggestion above, but rather like BCCL's archetypes.
However, I am not exactly sure how this approach couples with the checking of the remaining components (concept defns, models, template uses, etc.). I suspect that it will only be useful if concept maps are not supported and constraints are only intended to serve as predicates, not to provide a scope for name resolution (hence, no need to rebuild entity references). Otherwise, any effort to complement its use would render it unnecessary.
You mostly lost me with the last paragraph.
Let's take the example of constrained function templates. Simply instantiating archetypes, constructing a new call expression with them, and checking that call verifies only concept coverage. (I'm borrowing the terminology from the BCCL paper). When are the calls in the template definitions bound to the concept model archetypes?
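For concreteness, a minimal sketch of that coverage-only check, in the style of BCCL archetypes (hypothetical names, not from any real library):

    // Archetype for a LessThanComparable-like concept: operator< and
    // deliberately nothing else.
    class lt_archetype {
        lt_archetype();  // no public construction: the template must not create one
    public:
        friend bool operator<(const lt_archetype&, const lt_archetype&)
        { return false; }  // never executed; exists only for type checking
    };

    template <class T>
    const T& min_(const T& a, const T& b) { return b < a ? b : a; }

    // The "new call expression with archetype arguments": compiling this
    // checks that the concept covers everything min_ does, and nothing more.
    void coverage_check(const lt_archetype& a, const lt_archetype& b) {
        (void)min_(a, b);
    }

Such a check says nothing about when, or whether, the calls inside min_ are bound to declarations in a concept model archetype, which seems to be exactly the question being asked.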

On Sat, Oct 13, 2012 at 6:07 PM, Larisse Voufo <lvoufo@indiana.edu> wrote:
Thanks for the feedback. Hopefully I'm addressing your comments well below.
On Sat, Oct 13, 2012 at 5:10 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Sat Oct 13 2012, Larisse Voufo <lvoufo-AT-indiana.edu> wrote:
On Sat, Oct 13, 2012 at 9:53 AM, Larisse Voufo <lvoufo@indiana.edu> wrote:
On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com
wrote:
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
This is intriguing. Based on my current experience with ConceptClang (and I could be missing something), I would actually argue that
^^^^^^^^
This word seems to imply that you think you are contradicting Doug, but to me it sounds like what you say here reinforces his statement.
I don't think I'm contradicting Doug in terms of archetypes and template definitions being important. I am just not sure that his proposed implementation is sufficient, hence the "something".
The only way I can make sense of this is to assume you mean something different by "concept model archetypes" than Doug and I mean when we say "archetypes."
I think that when you guys say "archetypes", you mean both "type archetypes" and "concept model archetypes", and I sometimes find the term ambiguous. I also think you think of them in terms of how they are implemented in ConceptGCC and N2914, that is, as embedded in the checking of template definitions.
In other words, there seems to be an implicit assumption that declarations in concept model archetypes corresponding to the template's constraints ought to be injected into the associated template parameter scope. Therefore, the notion of concept model archetype is implicit and doesn't have an explicit representation in the compiler.
Further, constraints satisfaction and the binding of entity references, at instantiation time, with respect to model implementations come for free with the treatment of concept models as class template specializations.
In the WGP ConceptClang paper, we made the distinction that:
1. We didn't need type archetypes in ConceptClang since Clang already does a good enough job treating template type parameters as types.
Addendum: Since the paper, in which the prototype focused on simple function calls, I have found a specific need for type archetypes, but only to express and satisfy associated members (incl. member functions, constructors, etc.), as well as to distinguish their uses from those of members not associated with a concept. (A sketch illustrating this point follows below.)
2. We gave concept model archetypes explicit representations as simply concept models satisfied by substituting declarations over from their mapped concepts. This way it is clear to see the two main roles that constraints are supposed to serve: Acting as predicates and providing scope for name resolution.
3. In the body of template definitions, names no longer bind to declarations inside class template specializations. Rather, to declarations in the concept model archetypes, which are basically placeholders for when we have concrete maps at the point of template use.
4. At the point of template use, constraints satisfaction explicitly performs concept model lookup and needs to rebuild entity references that were bound to declarations in archetypes so that they now bind to declarations in the concrete concept models.
I think the paper goes further on key distinctions, but the basic idea is that we have decoupled the notion of concept model archetypes with their implementation,
Correction: "we have decoupled the notion of concept model archetypes *from* their implementation".
which provides more genericity and allows us to really understand how the design decisions we make affect the different components of concepts: concept definitions, concept models (templates, archetypes, or concrete), constrained template definitions -- w/ constraints specification, and constrained template uses -- with constraints satisfaction, followed by a rebuilding of the references at instantiation time.
Given this distinction, I hope it is now clear how the implementation that Doug suggests is simply a special case of a larger infrastructure. Whether it is the right approach or not, I think some parameters still need to be investigated further. If I am not mistaken, I read a paper at some point that addressed how C++ concepts were not completely expressible in terms of class templates and their specializations. But I have to double-check on that.
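A minimal sketch of the Addendum's point about associated members (hypothetical names; my illustration, not ConceptClang's machinery). A member-function requirement can only be exercised against a type that actually has that member, so some stand-in type seems unavoidable:

    // Type archetype for a concept whose only requirement is an
    // associated member function push_back(int).
    struct back_insertable_archetype {
        void push_back(const int&) {}  // the one associated member
        // deliberately nothing else: no clear(), no size(), no iterators
    };

    template <class C>
    void fill3(C& c) {
        c.push_back(0);
        c.push_back(1);
        c.push_back(2);
        // c.clear();  // would be rejected: not an associated member
    }

    // Checking the definition against exactly the associated members:
    template void fill3<back_insertable_archetype>(back_insertable_archetype&);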
Unfortunately, although you're wielding it as though it were a term of art, you haven't defined it, so it's hard to be sure...
I'm sorry for not clarifying myself well earlier. It is definitely in the ConceptClang paper though.
Thanks for pointing out my differing terminology. I am updating it as I get more accustomed to C++ terminology.
concept model archetypes (CMA) are the most essential component of any implementation of concepts -- N2914, N3351, or otherwise -- in that they are the essential glue between all the components:
1. They link the requirements specified in concept definitions with their uses in the definitions of generic components.
2. They tell constraints satisfaction which concept model to look for or generate. Note that (a) concept model checking reuses constraints satisfaction, and (b) entity reference rebuilding, at instantiation time, is not always necessary, since the need varies with the design and implementation model.
3. When entity reference rebuilding is supported -- as in ConceptClang's current implementation of N2914 and some flavors of the N3351 implementation -- concrete concept models become particularly essential as well.
Other terms of art used without definition here: "constraints satisfaction" (one can guess, but you seem to be referring to something very specific, so one could be wrong) and "entity reference rebuilding"
I don't believe "constraints satisfaction" is a term of my making. I borrowed it from previous literature on concepts. I did make up "rebuilding", but not "entity reference" (I don't think). Either way, I am not sure what the right C++-compatible terminology would be. Any idea? I tried to explain the concept above as the rebinding of names from concept model archetypes to concrete concept models. Please let me know if I can be any clearer.
In some sense -- and I'm still working on this idea -- an implementation of CMAs can indicate the extent to which explicit concept models are needed to bind to customized implementations.
"explicit concept models?" -- i.e. explicit modeling mechanism for concepts, as in "explicit concepts". "bind to?" -- e.g. a function call binds arguments to a function declaration. "customized implementations?" -- i.e. implementations in concrete concept models.
That being said, independently of the concepts design, any parsing and checking of concept definitions should make the implementation of CMAs as easy as possible.
There are other issues that I am currently finding with the semantics of checking entity references in restricted scope, in N2914, but that is something that I should perhaps leave for another discussion. In fact, I would like to run this by people who would be interested at the next committee meeting (next week). The bottom line is that, if I am right, then implementing the checking properly is a complex task -- either due to the implementation structure of Clang or just the nature of C++ grammar. (I'm not sure which one yet.)
That complexity is, unfortunately, one of the main reasons for the holdup in deploying an updated version of ConceptClang.
Just to clarify my point above, I find the idea of automatically generating archetype classes intriguing in the sense that such classes offer an alternative implementation of CMAs, in contrast to what ConceptClang is currently doing.
I don't know what ConceptClang is currently doing, but if (as you seem to imply) you're not doing checking by generating concrete archetypes and instantiating constrained templates with them, then
a. You are setting yourself up for a great deal more work than necessary, because you have to implement a complex algorithm that closely parallels what C++ already does.
Actually, I'd argue that the algorithm helps find some incompleteness in previous work. That's the point of ConceptClang to begin with.
b. The semantics of your checking are much more likely to "not make sense" to C++ users today, because the rules will be subtly different.
The semantics are just the same, but implemented more abstractly; i.e., I think of the components of concepts separately from previously proposed implementations.
However, I still wonder if the class template / class template specialization approach to representing concept definitions / concept maps is as powerful as the approach of treating concepts as first class entities of the language.
They're not mutually exclusive. The "first class entities in the language" might intentionally be defined (as they were in N2914) to have the same semantics as existing language features, and thus be implementable in terms of the same code.
That's what I'm not too sure about, just yet... And like I said earlier, I could be missing something.
On the other hand, the class template approach removes the need to rebuild entity references at instantiation time.
There's that term of art again.
Sorry. :)
An alternative that I see that still uses archetypes is for archetype classes to be generated as in the Caramel system. I find this particularly useful for checking constrained template definitions because it allows one to do that non-intrusively (or at least less intrusively than the current implementation). That is, instead of checking every dependent entity reference in the body of a template (and all the complexity that comes with it, as I'm finding), appropriate archetype classes can simply be instantiated and applied to the template as it is parsed.
I'm truly surprised to hear (if I'm understanding correctly) that you didn't go down that road in the first place. I think it was well-understood to be the obvious approach.
This is a different alternative which is not like Doug's suggestion above, but rather like BCCL's archetypes.
However, I am not exactly sure how this approach couples with the checking of the remaining components (concept defns, models, template uses, etc.). I suspect that it will only be useful if concept maps are not supported and constraints are only intended to serve as predicates, not to provide a scope for name resolution (hence, no need to rebuild entity references). Otherwise, any effort to complement its use would render it unnecessary.
You mostly lost me with the last paragraph.
Let's take the example of constrained function templates. Simply instantiating archetypes, constructing a new call expression with them, and checking that call verifies only concept coverage. (I'm borrowing the terminology from the BCCL paper). When are the calls in the template definitions bound to the concept model archetypes?

on Sat Oct 13 2012, Larisse Voufo <lvoufo-AT-indiana.edu> wrote:
Thanks for the feedback. Hopefully I'm addressing your comments well below.
On Sat, Oct 13, 2012 at 5:10 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Sat Oct 13 2012, Larisse Voufo <lvoufo-AT-indiana.edu> wrote:
On Sat, Oct 13, 2012 at 9:53 AM, Larisse Voufo <lvoufo@indiana.edu> wrote:
On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com
wrote:
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
This is intriguing. Based on my current experience with ConceptClang (and I could be missing something), I would actually argue that
^^^^^^^^
This word seems to imply that you think you are contradicting Doug, but to me it sounds like what you say here reinforces his statement.
I don't think I'm contradicting Doug in terms of archetypes and template definitions being important. I am just not sure that his proposed implementation is sufficient, hence the "something".
The only way I can make sense of this is to assume you mean something different by "concept model archetypes" than Doug and I mean when we say "archetypes."
I think that when you guys say "archetypes", you mean both "type archetypes" and "concept model archetypes",
I don't know what either of those things are, so saying that doesn't mean much to me.
and I sometimes find the term ambiguous. I also think you think of them in terms of how they are implemented in ConceptGCC
No; I've never looked at the implementation. And, by the way, baseless assumptions that perspective has been distorted by too much familiarity with a particular implementation are a sore point for some of us. I guess you picked them up from part of the culture around this project, so I don't blame you, but I suggest that they are unhelpful when trying to understand my point of view.
and N2914, that is, as embedded in the checking of template definitions.
I don't know what you mean by that. When I say "archetypes" I mean what's described in http://www.boost.org/doc/libs/1_51_0/libs/concept_check/concept_covering.htm. In the context of a language feature, I expect them to be generated automatically by the compiler based on concept declarations and constraints. I do expect them to be used in checking template definitions.
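To illustrate, a hand-written sketch of what such a generator might emit (hypothetical names): for a toy concept requiring copy construction, equality comparison, and an associated function next, the generated archetype and the definition check could look like this:

    class toy_archetype {
        toy_archetype();                        // no public default construction
    public:
        toy_archetype(const toy_archetype&) {}  // copyable; body is a dummy
        friend bool operator==(const toy_archetype&, const toy_archetype&)
        { return true; }                        // dummy body, never meaningful
        friend toy_archetype next(const toy_archetype& a) { return a; }
    };

    template <class T>
    bool advances_to(T a, const T& b) { return next(a) == b; }

    // Instantiating the definition with the archetype is the whole check:
    // if advances_to used anything outside the concept (say, operator!=),
    // this line would fail to compile.
    template bool advances_to<toy_archetype>(toy_archetype, const toy_archetype&);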
In other words, there seems to be an implicit assumption that
declarations in concept model archetypes
                ^^^^^^^^^^^^^^^^^^^^^^^^
Here's that undefined term again.
corresponding to the template's constraints ought to be injected into the associated template parameter scope.
Until you define your terms, I'm not going to be able to respond usefully to the above. An example would be very helpful here also.
Therefore, the notion of concept model archetype is implicit and doesn't have an explicit representation in the compiler.
Just based on the grammar of what you said, without fully grasping the meaning, I don't see how that sentence could be a logical conclusion of what comes before.
Further, constraints satisfaction and the
         ^^^^^^^^^^^^^^^^^^^^^^^^
binding of entity references, at instantiation time, with respect to
^^^^^^^    ^^^^^^^^^^^^^^^^^
model implementations come for free with the treatment of concept models as class template specializations.
In the WGP ConceptClang paper,
By which you mean http://www.generic-programming.org/software/ConceptClang/papers/wgp06v-voufo... ? I note that most of these terms you're throwing around don't appear in that paper.
we made the distinction that:
1. We didn't need type archetypes in ConceptClang since Clang
                  ^^^^^^^^^^^^^^^
already does a good enough job treating template type parameters as types.
2. We gave concept model archetypes explicit representations as simply
           ^^^^^^^^^^^^^^^^^^^^^^^^
concept models satisfied by substituting declarations over from their mapped concepts. This way it is clear to see the two main roles that constraints are supposed to serve: Acting as predicates and providing scope for name resolution.
3. In the body of template definitions, names no longer bind to declarations inside class template specializations. Rather, to
declarations in the concept model archetypes, which are basically
                    ^^^^^^^^^^^^^^^^^^^^^^^^
placeholders for when we have concrete maps at the point of template use.
4. At the point of template use, constraints satisfaction explicitly
performs concept model lookup and needs to rebuild
         ^^^^^^^^^^^^^^^^^^^^
entity references that were bound to declarations in archetypes so
^^^^^^^^^^^^^^^^^                                    ^^^^^^^^^^
that they now bind to declarations in the concrete concept models.
I think the paper goes further on key distinctions, but the basic
idea is that we have decoupled the notion of concept model archetypes with
                                             ^^^^^^^^^^^^^^^^^^^^^^^^
their implementation, which provides more genericity and allows us to really understand how the design decisions we make affect the different components of concepts: concept definitions, concept models (templates, archetypes, or concrete), constrained template definitions -- w/ constraints specification, and constrained template uses -- with constraints satisfaction, followed by a rebuilding of the references at instantiation time.
Given this distinction, I hope it is now clear how the implementation that Doug suggests is simply a special case of a larger infrastructure.
Unfortunately, nothing is now clear, because you haven't clarified your terms and distinctions.
Whether it is the right approach or not, I think some parameters still need to be investigated further. If I am not mistaken, I read a paper at some point that addressed how C++ concepts were not completely expressible in terms of class templates and their specializations. But I have to double-check on that.
They are indeed not. Early on, they were nearly so IIUC, but rvalue-erasing and the creation of unnecessary copies put an end to that. Of course there are always also new lookup rules involved in a concepts implementation, but I don't think you are referring to that dimension of things.
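One possible reading of the rvalue/copy problem, as a sketch with hypothetical names (my illustration; it may not be exactly the historical issue meant here): routing a requirement through a model's function erases the argument's value category unless the model can forward perfectly, which pre-C++11 signatures could not do.

    #include <iostream>
    #include <utility>

    struct Widget {
        Widget() {}
        Widget(const Widget&) { std::cout << "copy\n"; }
        Widget(Widget&&) noexcept { std::cout << "move\n"; }
    };

    // C++03-style model: the mapped function must take by value (or
    // const&), so rvalueness is erased at the map boundary.
    template <class T> struct Sinkable03 {
        static void sink(T x) { (void)x; }  // copies lvalues
    };

    // A C++11 model can preserve the value category by forwarding:
    template <class T> struct Sinkable11 {
        template <class U>
        static void sink(U&& x) { T t(std::forward<U>(x)); (void)t; }
    };

    int main() {
        Widget w;
        Sinkable03<Widget>::sink(w);             // prints "copy"
        Sinkable11<Widget>::sink(std::move(w));  // prints "move"
    }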
Unfortunately, although you're wielding it as though it were a term of art, you haven't defined it, so it's hard to be sure...
I'm sorry for not clarifying myself well earlier. It is definitely in the ConceptClang paper though.
Umm, which one? There are three listed at http://www.generic-programming.org/software/ConceptClang/. As I mentioned, the first one doesn't define many of the terms you use.
Thanks for pointing out my differing terminology. I am updating it as I get more accustomed to C++ terminology.
concept model archetypes (CMA) are the most essential component of any implementation of concepts -- N2914, N3351, or otherwise -- in that they are the essential glue between all the components:
1. They link the requirements specified in concept definitions with their uses in the definitions of generic components.
2. They tell constraints satisfaction which concept model to look for or generate. Note that (a) concept model checking reuses constraints satisfaction, and (b) entity reference rebuilding, at instantiation time, is not always necessary, since the need varies with the design and implementation model.
3. When entity reference rebuilding is supported -- as in ConceptClang's current implementation of N2914 and some flavors of the N3351 implementation -- concrete concept models become particularly essential as well.
Other terms of art used without definition here: "constraints satisfaction" (one can guess, but you seem to be referring to something very specific, so one could be wrong) and "entity reference rebuilding"
I don't believe "constraints satisfaction" is a term of my making. I borrowed it from previous literature on concepts. I did make up "rebuilding", but not "entity reference" (I don't think). Either way, I am not sure what the right C++-compatible terminology would be. Any idea? I tried to explain the concept above as the rebinding of names from concept model archetypes to concrete concept models. Please let me know if I can be any clearer.
You can be clearer. I don't yet know what you mean by "concept model archetypes," so it's hard to make head or tail of any of this. From the context, I am beginning to suspect that my "archetype" == your "concept model archetype," but I am not yet sure.
In some sense -- and I'm still working on this idea -- an implementation of CMAs can indicate the extent to which explicit concept models are needed to bind to customized implementations.
"explicit concept models?" -- i.e. explicit modeling mechanism for concepts, as in "explicit concepts".
This is unclear. I don't think this term can mean a mechanism at all. By "explicit concept model" I think you mean "one or more types that have been explicitly declared by the user to model a concept." However, you're using it in the sentence as though you mean the declaration itself.
"bind to?" -- e.g. a function call binds arguments to a function declaration.
"bind to" will probably make plenty of sense once you clarify the other terms.
"customized implementations?" -- i.e. implementations in concrete concept models.
Sounds like you just mean the extent to which explicit model declarations are needed and you don't need "to bind to customized implementations" which probably really means "to bind concepts to concrete models" because, after all, an explicit model declaration binds a given concrete concept model to the concept; I mean, that's *all* it does!
That being said, independently of the concepts design, any parsing and checking of concept definitions should make the implementation of CMAs as easy as possible.
There are other issues that I am currently finding with the semantics of checking entity references in restricted scope, in N2914, but that is something that I should perhaps leave for another discussion. In fact, I would like to run this by people who would be interested at the next committee meeting (next week). The bottom line is that, if I am right, then implementing the checking properly is a complex task -- either due to the implementation structure of Clang or just the nature of C++ grammar. (I'm not sure which one yet.)
That complexity is, unfortunately, one of the main reasons for the holdup in deploying an updated version of ConceptClang.
Just to clarify my point above, I find the idea of automatically generating archetype classes intriguing in the sense that such classes offer an alternative implementation of CMAs, in contrast to what ConceptClang is currently doing.
I don't know what ConceptClang is currently doing, but if (as you seem to imply) you're not doing checking by generating concrete archetypes and instantiating constrained templates with them, then
a. You are setting yourself up for a great deal more work than necessary, because you have to implement a complex algorithm that closely parallels what C++ already does.
Actually, I'd argue that the algorithm helps find some incompleteness in previous work. That's the point of ConceptClang to begin with.
Seriously, to find problems with previous work?
b. The semantics of your checking are much more likely to "not make sense" to C++ users today, because the rules will be subtly different.
The semantics are just the same, but implemented more abstractly; i.e., I think of the components of concepts separately from previously proposed implementations.
As noted above, please drop that line of assumption about others' thinking.
However, I still wonder if the class template / class template specialization approach to representing concept definitions / concept maps is as powerful as the approach of treating concepts as first class entities of the language.
They're not mutually exclusive. The "first class entities in the language" might intentionally be defined (as they were in N2914) to have the same semantics as existing language features, and thus be implementable in terms of the same code.
That's what I'm not too sure about, just yet... And like I said earlier, I could be missing something.
Could be.
On the other hand, the class template approach removes the need to rebuild entity references at instantiation time.
There's that term of art again.
Sorry. :)
An alternative that I see that still uses archetypes is for archetype classes to be generated as in the Caramel system. I find this particularly useful for checking constrained template definitions because it allows one to do that non-intrusively (or at least less intrusively than the current implementation). That is, instead of checking every dependent entity reference in the body of a template (and all the complexity that comes with it, as I'm finding), appropriate archetype classes can simply be instantiated and applied to the template as it is parsed.
I'm truly surprised to hear (if I'm understanding correctly) that you didn't go down that road in the first place. I think it was well-understood to be the obvious approach.
This is a different alternative which is not like Doug's suggestion above, but rather like BCCL's archetypes.
I seriously doubt Doug means something different from BCCL's archetypes in any important respect. Of course, I could be wrong, but when Doug and I talk and one of us says "archetype," there's never any confusion about what's meant.
However, I am not exactly sure how this approach couples with the checking of the remaining components (concept defns, models, template uses, etc.). I suspect that it will only be useful if concept maps are not supported and constraints are only intended to serve as predicates, not to provide a scope for name resolution (hence, no need to rebuild entity references). Otherwise, any effort to complement its use would render it unnecessary.
You mostly lost me with the last paragraph.
Let's take the example of constrained function templates. Simply instantiating archetypes, constructing a new call expression with them, and checking that call verifies only concept coverage. (I'm borrowing the terminology from the BCCL paper).
Yes. What else do you want to check?
When are the calls in the template definitions bound to the concept model archetypes?
Example please. I can guess at what you mean about "binding a call to a concept model archetype", but it would be better if you'd spell it out.

--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost

On Fri, Oct 12, 2012 at 3:05 PM, Doug Gregor <doug.gregor@gmail.com> wrote:
On Fri, Oct 12, 2012 at 6:25 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
Right now I see two ways forward:
1. I implement N3351 in Boost.Contract and Matt implements N2914 in Boost.Generic. 2. Or, I help Matt implementing N2914 in Boost.Generic (and Boost.Contract's requires clause will use concepts defined using Boost.Generic).
Then we all use the lib(s) to experiment with concepts before (re)proposing concepts (and hopefully contracts) for standardization in C++1x.
Experimenting is great. This is why I have Origin (https://code.google.com/p/origin/). I've been experimenting with concepts-as-a-library in various forms since 2009, and it only gets you so far. It's very helpful if you want to develop a first pass at concepts for a library, and sometimes it pays off if you need to reason about some language feature interactions.
This is an extremely important point: emulating the concepts language feature with a library has its limits. Most of the hard problems with concepts---including the hard problem of making the concepts that we write actually model what we want---involve the type checking of template definitions. That type checking can be simulated with archetypes, but it's very hard to write archetypes that are as picky as what a compiler would come up with. That means that the concepts we write can't actually be validated against implementations, so it's hard to have any confidence in those concepts.
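A small sketch of that pickiness gap (hypothetical names, my illustration). A hand-written archetype tends to return the value type itself; a compiler-grade one would return a distinct type that is merely convertible, catching templates that assume the exact type:

    struct value_type_ {
        bool operator==(const value_type_&) const { return true; }
    };

    struct lax_archetype {            // hand-written style: too permissive
        value_type_ operator*() const { return value_type_(); }
    };

    struct picky_archetype {          // closer to a compiler-generated one:
        struct proxy {                // *it is only convertible to the value type
            operator value_type_() const { return value_type_(); }
        };
        proxy operator*() const { return proxy(); }
    };

    // This definition quietly assumes *it is exactly value_type_:
    template <class It>
    bool equal_twice(It it) { return *it == *it; }

    template bool equal_twice<lax_archetype>(lax_archetype);   // compiles
    // template bool equal_twice<picky_archetype>(picky_archetype);
    //   ^ would fail: proxy has no operator==, exposing the hidden assumption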
From the standardization perspective, we'll make zero progress until someone gets working on a real implementation. Just having a concept parser + archetype generator (which then instantiates template definitions based on those archetypes) would be a huge win.
Incidentally, it would be interesting to see if one can implement a truly complete archetype generator. The following paper, on the Caramel system, hints at a few issues with this alternative: http://www.osl.iu.edu/publications/prints/2001/TMPW01:_Willcock.pdf I'd be interested in learning about experiences using this system (or extensions of it) on various generic libraries. Any ideas? Thanks, -- Larisse.
participants (10):
- Andrew Sutton
- Andrzej Krzemienski
- Dave Abrahams
- Doug Gregor
- Jeffrey Lee Hellrung, Jr.
- Jeremiah Willcock
- Larisse Voufo
- Lorenzo Caminiti
- Matt Calabrese
- Vicente J. Botet Escriba