[contract] toward N3351 concepts

Hello all,

I'm reading N3351: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3351.pdf

It's a bottom-up approach to defining concepts, starting from what's needed by the STL algorithms. So far so good--I like it probably more than the C++0x concept proposals--N3351 is simpler :) However, the N3351 STL design is not fully compatible with the current STL specification--maybe N3351 is better, more mathematically sound?

In N3351, concepts are boolean constants:

``We can define concepts as simple (constant) Boolean expressions. This is useful when building constraints predicated on non-syntactic and non-semantic properties of types.``

So I can implement concepts in C++11 as boolean meta-functions, using expression SFINAE to check the requirements and enable_if to disable generic algorithms when the instantiated types don't model the concepts (i.e., when the concept's boolean meta-function is false). For example, that's the case for the bool{a == b} and bool{a != b} expressions of the EqualityComparable concept below.

However, N3351 says that axioms should not be checked by the compiler...

``Axioms express the semantics of a concept's required syntax; they are assumed to be true and must not be checked by the compiler beyond conformance to the C++ syntax. Any additional checking is beyond the scope of the compiler's translation requirements. The compiler must not generate code that evaluates axioms as preconditions, either. This could lead to program errors if the evaluated assertions have side effects. For example, asserting that distance(first, last) > 1 when the type of those iterators is istream_iterator will consume the first element of the range, causing the assertion to pass, but the algorithm to have undefined behavior.''

So what am I supposed to do with axioms? Do nothing? Are axioms just "comments" that document the semantics?
For example:

``
concept EqualityComparable<typename T> =
    requires (T a, T b, T c) {
        bool { a == b };
        bool { a != b };
        axiom { a == b <=> eq(a, b); }      // true equality
        axiom {
            a == a;                         // reflexive
            a == b => b == a;               // symmetric
            a == b && b == c => a == c;     // transitive
        }
        axiom { a != b <=> !(a == b); }     // complement
    };

So, EqualityComparable<T> is true if T
1. has == and != with result types that can be converted to bool
2. == compares for true equality
3. == is reflexive, symmetric, and transitive
4. != is the complement of ==
However, the compiler can only check the first of these conditions. The rest must be verified through other means (i.e. manually).
''

Thanks,
--Lorenzo

On Wed, Sep 26, 2012 at 9:00 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
So what am I supposed to do with axioms? Do nothing? Axioms are just "comments" that document the semantic?
Hi, I am far from being a specialist, but so far my understanding was that axioms help the compiler generate useful code (maybe to unlock optimizations too?). For example, if A + B <=> B + A, then you can be sure that A += B <=> B += A. If that's written as an axiom, the compiler can generate the += operator automatically where you use the concept and provide a type that only has the + operator. That being said, I can't say where I got this understanding--certainly in one of the recent videos with Stroustrup and other people explaining concepts. Joel Lamotte

According to this paper: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2887.pdf, which is exclusively devoted to axioms, the ideal default implementation (i.e. compiler) should just check if they are syntactically correct (if they compile) and do nothing else. This would also be, IMO, a good thing to do for your framework. On the other hand the very existence of axioms as language feature is controversial, so not supporting them at all would probably also be a reasonable choice. For potential application of axioms, see also this blog post: http://akrzemi1.wordpress.com/2012/01/11/concept-axioms-what-for/ Regards, &rzej 2012/9/26 Lorenzo Caminiti <lorcaminiti@gmail.com>
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Wed, Sep 26, 2012 at 11:10 PM, Andrzej Krzemienski <akrzemi1@gmail.com>wrote:
On the other hand the very existence of axioms as language feature is controversial, so not supporting them at all would probably also be a reasonable choice.
Was checking expressions the only reason they were introduced in the first place? Joel Lamotte

Was checking expressions the only reason they were introduced in the first place?
There were more reasons:

1. Axioms have been with concepts since the inception of concepts. Wherever you see concept StrictWeakOrder defined, it is accompanied by axioms in this or another form. See for instance: http://www.sgi.com/tech/stl/StrictWeakOrdering.html (section "Invariants"). In this sense axioms try to standardize existing practice (and are somewhat similar to contract-programming invariants).

2. They (would) provide a standardised description of certain properties of the code that could be used by tools for code analysis or generation. For instance, tools could auto-generate unit tests. Or certain optimizations could be performed based on the knowledge contained in axioms.

Regards, &rzej

On Wed, Sep 26, 2012 at 2:38 PM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
2. They (would) provide a standardised description of certain properties of the code that could be used by tools for code analysis or generation. For instance tools could auto-generate unit tests.
This is true for pre/post/inv as well.
Or certain optimizations could be performed based on the knowledge contained in axioms.
*In theory*, this is also true for pre/post/inv. P.S. I just wanted to point out the similar properties of contracts, but I'm not sure if the comments I made in this email are relevant to the axiom discussion. --Lorenzo

On Wed, Sep 26, 2012 at 6:38 PM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
Was checking expressions the only reason they were introduced in the first place?
There were more reasons:
[snip]
2. They (would) provide a standardised description of certain properties of the code that could be used by tools for code analysis or generation. For instance tools could auto-generate unit tests. Or certain optimizations could be performed based on the knowledge contained in axioms.
A while back, I was trying to integrate Isabelle/HOL with C and C++ through Emacs. It would be really cool if that were possible, and with a standardized syntax for axioms it would probably be easier to interoperate between tools. Though for this task, axioms as-is may not be enough for Isabelle, and comments would still be required.
Regards, &rzej
-- Felipe Magno de Almeida

On Wed, Sep 26, 2012 at 12:00 PM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
However, N3351 says that axioms should not be checked by the compiler...
``Axioms express the semantics of a concept’s required syntax; they are assumed to be true and must not be checked by the compiler beyond conformance to the C++ syntax. Any additional checking is beyond the scope of the compiler’s translation requirements. The compiler must not generate code that evaluates axioms as preconditions, either. This could lead to program errors if the evaluated assertions have side effects. For example, asserting that distance(first, last) > 1 when the type of those iterators is istream_iterator will consume the first element of the range, causing the assertion to pass, but the algorithm to have undefined behavior.''
For now, I find this argument for not generating preconditions from axioms weak. How about allowing only constant expressions within axioms, because axioms (like contracts, and specifically preconditions) are not supposed to alter the program's state? Then I could automatically generate preconditions from axioms and check them at run-time--why would this be a bad idea? You would then need to implement a const-correct version of distance() in order to program the axiom, but that's a good thing IMO... (it reflects the fact that axioms should not alter the program's state).

In any case, I wanted to start a bit of a discussion about this topic on Boost. At the moment I'm still learning N3351, so it's too early for any design decision, but I think two things can potentially be done with axioms:
1. Check that they syntactically compile.
2. Automatically generate preconditions from them that are checked at run-time.
(Any more??)

I'm keeping an open mind for now and considering both options. Thanks. --Lorenzo

For now, I find this argument for not generating preconditions from axioms weak. How about allowing only constant expressions within axioms, because axioms (like contracts, and specifically preconditions) are not supposed to alter the program's state? Then I could automatically generate preconditions from axioms and check them at run-time--why would this be a bad idea? You would then need to implement a const-correct version of distance() in order to program the axiom, but that's a good thing IMO... (it reflects the fact that axioms should not alter the program's state).
I believe that N3351 is not a good place to learn axioms from. I would rather recommend N2887: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2887.pdf N3351 just did not focus on axioms. Axioms do not necessarily state a boolean predicate. Consider these examples:

axiom ( Dictionary d, Record r ) { is_sorted(d) => linear_search(d, r) <=> binary_search(d, r); }

axiom Erasure( Container c, Size n ) { n == 0 => c.resize(n) <=> c.clear(); }

axiom Erasure2( Container c ) { c.resize(0) <=> c.clear(); }

Especially the last one: it states that two expressions, each with side effects, are equivalent.

On Wed, Sep 26, 2012 at 2:56 PM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
For now, I find this argument for not generating preconditions from axioms weak. How about allowing only constant expressions within axioms, because axioms (like contracts, and specifically preconditions) are not supposed to alter the program's state? Then I could automatically generate preconditions from axioms and check them at run-time--why would this be a bad idea? You would then need to implement a const-correct version of distance() in order to program the axiom, but that's a good thing IMO... (it reflects the fact that axioms should not alter the program's state).
I believe that N3351 is not a good place to learn axioms from. I would rather recommend N2887: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2887.pdf N3351 just did not focus on axioms. Axioms do not necessarily state a boolean predicate. Consider these examples:
axiom ( Dictionary d, Record r ) { is_sorted(d) => linear_search(d, r) <=> binary_search(d, r); }
axiom Erasure( Container c, Size n ) { n == 0 => c.resize(n) <=> c.clear(); }
axiom Erasure2( Container c ) { c.resize(0) <=> c.clear(); }
Especially the last one: it states that two expressions, each with side effects, are equivalent.
I see. Just thinking out loud: If c is a dummy variable, like a local variable within the preconditions, then it should be OK if it gets modified by the preconditions... shouldn't it? --Lorenzo

I believe that N3351 is not a good place to learn axioms from. I would rather recommend N2887: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2887.pdf N3351 just did not focus on axioms. Axioms do not necessarily state a boolean predicate. Consider these examples:
axiom ( Dictionary d, Record r ) { is_sorted(d) => linear_search(d, r) <=> binary_search(d, r); }
axiom Erasure( Container c, Size n ) { n == 0 => c.resize(n) <=> c.clear(); }
axiom Erasure2( Container c ) { c.resize(0) <=> c.clear(); }
Especially the last one: it states that two expressions, each with side effects, are equivalent.
I see. Just thinking out loud: If c is a dummy variable, like a local variable within the preconditions, then it should be OK if it gets modified by the preconditions... shouldn't it?
I see (I think) what you are up to, but I see a couple of further issues with going this way:

(1) Note that in the third example both expressions return nothing (void). This is valid and desired for axioms. What would you be comparing to check if the axiom holds?

(2) Axioms need not hold for all values of the given type, but only for those values that a given algorithm (constrained by a concept with axioms) actually uses, and only in the expressions that are really stated in the executed algorithm. For instance, axiom EqualityComparable::Reflexivity states that some objects (rather than every object) must compare equal to themselves. If you consider type float, objects with value NaN (not-a-number) compare unequal. Yet it is still OK to use float as a model of concept EqualityComparable if the algorithm (function) that makes use of the concept never operates on NaNs.

For even more information on axioms, see this link: http://kojot.sggw.waw.pl/~akrzemi1/cpp/axioms/axioms_index.html

For even more information on axioms see this link: http://kojot.sggw.waw.pl/~akrzemi1/cpp/axioms/axioms_index.html

I'm glad to see this discussion.
I believe that N3351 is not a good place to learn axioms from.
It's a reasonable place to start as long as you don't expect too much from them. The axioms presented in that paper were primarily intended to document the semantics of a type; we had not planned (at that time) any particular ideas about how they should or could be used by the compiler.
(2). Axioms need not hold for all values of the given type, but only for those values that a given algorithm (constrained by a concept with axioms), and only in expressions that are really stated in the executed algorithm.
Axioms, in the context of a concept, are intended to hold for all values of the type. A precondition is a property of specific values or objects and is not an axiom.
For instance, axiom EqualityComparable::Reflexivity states that some (rather than every) objects must compare equal to themselves. If you consider type float, objects with value NaN (not-a-number) compare unequal.
No; all well-formed values of a type can be compared for equality (if the type is EqualityComparable). NaN is not a well-formed value. It is literally *not a number*. Andrew

2012/9/27 Andrew Sutton <asutton.list@gmail.com>
I'm glad to see this discussion.
I believe that N3351 is not a good place to learn axioms from.
It's a reasonable place to start as long as you don't expect too much from them. The axioms presented in that paper were primarily intended to document the semantics of a type; we had not planned (at that time) any particular ideas about how they should or could be used by the compiler.
The point I tried to stress is that the equivalences and implications in axioms were not intended (at least in N2887) to be operations on boolean values, but on expression trees. This difference is negligible when describing immutable functions that return values (common in the STL), like:

op(a, op(b, c)) <=> op(op(a, b), c)

But it becomes important when dealing with functions with side effects and functions returning nothing:

++i <=> i += 1
c.resize(0) <=> c.clear();

N3351 describes axioms as operations on boolean values, but it was my impression that the authors still intended behavior similar to that of N2887, and just chose not to describe it at that time. Or am I wrong? Regards, &rzej

N3351 describes axioms as operations on boolean values, but it was my impression that the authors still intended the behavior similar to that of N2887, but chose not to describe it at that time. Or am I wrong?
I don't remember if we said that axioms could be evaluated (as predicates) or not. The axioms presented in N3351 are primarily intended to document the semantics; we made no serious effort to describe language features related to their interpretation. A better answer (arrived at since writing that paper) is that axioms are simply declarations of a truth about the meaning of an expression. There is work being done on language features related to this idea, but it's not mine, so I won't make any effort to describe it--I'd probably get it wrong. Andrew

On Thu, Sep 27, 2012 at 7:29 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
I'm glad to see this discussion.
I'm really glad to see N3351 author(s) part of it :) BTW, do you know if a clang-based compiler supporting N3351 is actually being implemented?

`` At the same time, we plan to build a compiler (based on Clang) for the emerging language. This prototype will support the full range of features described in this report. We hope to have an initial version ready by early February, 2012. ''

Just to give a peek into a possible future of what Boost.Contract _could_ do:

// Header: boost/contract/std/concept/equality_comparable.hpp
namespace contract { namespace std {

CONTRACT_CONCEPT(
    concept (EqualityComparable) ( typename T ) (
        requires( (T) a, (T) b, (T) c ) (
            bool{a == b},
            bool{a != b},
            axiom( iff(a == b) eq(a, b) ),
            axiom(
                a == a,
                if(a == b) b == a,
                if(a == b and b == c) a == c
            ),
            axiom( iff(a != b) not(a == b) )
        )
    )
)

CONTRACT_CONCEPT(
    concept (EqualityComparable) ( typename T1, typename T2 ) (
        EqualityComparable<T1>,
        EqualityComparable<T2>,
        (Common<T1, T2>),
        (EqualityComparable<CommonType<T1, T2> >),
        requires( (T1) a, (T2) b ) (
            bool{a == b},
            bool{b == a},
            bool{a != b},
            bool{b != a},
            axiom(
                typedef (CommonType<T1, T2>) C,
                iff(a == b) C{a} == C{b},
                iff(a != b) C{a} != C{b},
                iff(b == a) C{b} == C{a},
                iff(b != a) C{b} != C{a}
            )
        )
    )
)

} } // namespace

// Header: boost/contract/std/algorithm/find.hpp
namespace contract { namespace std {

// Not fully compatible.
CONTRACT_FUNCTION(
    template( typename I, typename T )
        requires( InputIterator<I>, (EqualityComparable<T, ValueType<I> >) )
    (I) (find) ( (I) first, (I) last, (T const&) value )
) {
    return std::find(first, last, value);
}

} } // namespace

BTW, why do we declare concepts like:

concept EqualityComparable< typename T > = ... // (1)

Instead of:

template< typename T > concept EqualityComparable = ... // (2)

I personally prefer (2) because the template parameters are declared with a syntax that is more consistent with the syntax used to declare them in class/function templates. For similar reasons, I'd prefer { ... } instead of = ... for the concept definition part.

Thanks.
--Lorenzo

On Thu, Sep 27, 2012 at 10:21 AM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
BTW, why do we declare concepts like:
concept EqualityComparable< typename T > = ... // (1)
Instead of:
template< typename T > concept EqualityComparable = ... // (2)
I personally prefer (2) because the template parameters are declared with a syntax that is more consistent with the syntax used to declare them in class/function templates. For similar reasons, I'd prefer { ... } instead of = ... for the concept definition part.
Does anyone have an opinion/rationale about the syntax of (1) vs. the one of (2)? Thanks, --Lorenzo

BTW, why do we declare concepts like:
concept EqualityComparable< typename T > = ... // (1)
Instead of:
template< typename T > concept EqualityComparable = ... // (2)
I personally prefer (2) because the template parameters are declared with a syntax that is more consistent with the syntax used to declare them in class/function templates. For similar reasons, I'd prefer { ... } instead of = ... for the concept definition part.
Does anyone have an opinion/rationale about the syntax of (1) vs. the one of (2)?
At least at first glance, it looks like either syntax has similar capabilities and would work. The latter one, though, looks like any other template declaration and might incorrectly imply that I can change it into a non-template declaration:

concept EqualityComparable = ... // no template parameters

But I am not sure if this is a relevant concern. Regards, &rzej

On 02.10.2012 09:26, Andrzej Krzemienski wrote:
BTW, why do we declare concepts like:
concept EqualityComparable< typename T > = ... // (1)
Instead of:
template< typename T > concept EqualityComparable = ... // (2)
A concept isn't a template. You don't instantiate a concept template to get specialized concepts. There is just the concept, which has type parameters. This would be more obvious if we wrote it thus:

concept <typename T> EqualityComparable ...

Sebastian

On Tue, Oct 2, 2012 at 12:35 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
On 02.10.2012 09:26, Andrzej Krzemienski wrote:
BTW, why do we declare concepts like:
concept EqualityComparable< typename T > = ... // (1)
Instead of:
template< typename T > concept EqualityComparable = ... // (2)
A concept isn't a template.
Of course.
You don't instantiate a concept template to get specialized concepts.
However, syntactically you use it similarly to a template:
MyConcept<T1, T2, T3>
my_template<T1, T2, T3>
There is just the concept, which has type parameters.
This would be more obvious if we wrote it thus:
concept <typename T> EqualityComparable = ... // (3)
Actually, this would make the most sense to me, why wouldn't we use this syntax (3)? Thanks, --Lorenzo

However, syntactically you use it similarly to a template:
MyConcept<T1, T2, T3>
my_template<T1, T2, T3>
Syntactically yes, but they are allowed to mean very different things.
concept <typename T> EqualityComparable = ... // (3)
Actually, this would make the most sense to me, why wouldn't we use this syntax (3)?
Alex wanted the syntax to reflect, to some extent, the style that he used to define concepts in Elements of Programming. A concept is a type predicate (function) defined by a conjunction of syntactic and semantic requirements. Compare:

// EoP (with pseudo-C++ syntax)
Relation(Op) = Predicate(Op) && HomogeneousFunction(Op) && Arity(Op) == 2

// N3351
concept Relation<typename Op, typename T> = Predicate<Op, T, T>;

In the writing, we wanted to emphasize the idea that a concept was "just" a predicate on template arguments. That made it easier to focus on the algorithm's requirements rather than to figure out the extensive set of language rules that would make it work.

On Tue, Oct 2, 2012 at 8:07 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
However, syntactically you use it similarly to a template:
MyConcept<T1, T2, T3>
my_template<T1, T2, T3>
Syntactically yes, but they are allowed to mean very different things.
concept <typename T> EqualityComparable = ... // (3)
Actually, this would make the most sense to me, why wouldn't we use this syntax (3)?
Alex wanted the syntax to reflect, to some extent, the style that he used to define concepts in Elements of Programming. A concept is a type predicate (function) defined by a conjunction of syntactic and semantic requirements. Compare:
// EoP (with pseudo-C++ syntax) Relation(Op) = Predicate(Op) && HomogeneousFunction(Op) && Arity(Op) == 2
// N3351 concept Relation<typename Op, typename T> = Predicate<Op, T, T>;
In the writing, we wanted to emphasize the idea that a concept was "just" a predicate on template arguments. That made it easier to focus on the algorithm's requirements rather than to figure out the extensive set of language rules that would make it work.
But the fact that the first argument of a concept is automatically replaced with the type when concepts are used in template parameter declarations:

template< typename T, Relation<T> Op > void func ( ... )

already requires knowing something special about the language that makes concepts work. In any case, why is

concept EqualityComparable = ... // (1)
concept <typename T> EqualityComparable = ... // (3)
concept <typename T> EqualityComparable = ... // (3)

--Lorenzo

Sorry, I sent my previous email before completing it by mistake--I'm re-sending it. On Tue, Oct 2, 2012 at 9:27 AM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
On Tue, Oct 2, 2012 at 8:07 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
However, syntactically you use it similarly to a template:
MyConcept<T1, T2, T3>
my_template<T1, T2, T3>
Syntactically yes, but they are allowed to mean very different things.
concept <typename T> EqualityComparable = ... // (3)
Actually, this would make the most sense to me, why wouldn't we use this syntax (3)?
Alex wanted the syntax to reflect, to some extent, the style that he used to define concepts in Elements of Programming.
In any case, why is (2) better than (3)? What was the rationale for such a syntax in Elements of Programming? (In the end I can adopt whatever syntax, but I will need to justify the choice in a "rationale" section.)

template< typename T > concept EqualityComparable = ... // (1) no good because concepts are not templates
concept EqualityComparable< typename T > = ... // (2)
concept <typename T> EqualityComparable = ... // (3)
A concept is a
type predicate (function) defined by a conjunction of syntactic and semantic requirements. Compare:
// EoP (with pseudo-C++ syntax) Relation(Op) = Predicate(Op) && HomogeneousFunction(Op) && Arity(Op) == 2
// N3351 concept Relation<typename Op, typename T> = Predicate<Op, T, T>;
In the writing, we wanted to emphasize the idea that a concept was "just" a predicate on template arguments. That made it easier to focus on the algorithm's requirements rather than to figure out the extensive set of language rules that would make it work.
But already the fact that the first argument of a concept is automatically replaced with the type when concepts are used in template parameter declarations:
template< typename T, Relation<T> Op > void func ( ... )
Already requires to know something special about the language that makes concept work.
Thanks, --Lorenzo

In any case, why is (2) better than (3)? What was the rationale for such a syntax in Elements of Programming? (At the end I can adopt whatever syntax but I will need to justify the choice in a "rationale" section.)
template< typename T > concept EqualityComparable = ... // (1) no good because concepts are not templates
concept EqualityComparable< typename T > = ... // (2)
concept <typename T> EqualityComparable = ... // (3)
I really can't say where the concept syntax in EoP came from, but the notation is traditionally mathematical. If you buy into the idea that a concept is a function on template arguments, then I think that (2) fits with existing C++ syntax better than (3). After all, we don't write the type of a function ahead of its declarator; that is, we don't declare min() as "T (T, T) min", where "T (T, T)" is the function type of min. I tend to prefer (2). It seems more consistent with existing syntax. Andrew

On Tue, Oct 2, 2012 at 10:49 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
In any case, why is (2) better than (3)? What was the rationale for such a syntax in Elements of Programming? (At the end I can adopt whatever syntax but I will need to justify the choice in a "rationale" section.)
template< typename T > concept EqualityComparable = ... // (1) no good because concepts are not templates
concept EqualityComparable< typename T > = ... // (2)
concept <typename T> EqualityComparable = ... // (3)
I really can't say where the concept syntax in EoP came from, but the notation is traditionally mathematical.
If you buy into the idea that a concept is a function on template arguments, then I think that 2 fits with existing C++ syntax better than 3. After all, we don't write the type of functions ahead of their declarator. That is we don't declare min() as "T (T, T) min", where "T (T, T)" is the function type of min.
I think this is a reasonable rationale for (2) over (3) and (1). Unless someone can point out a reason against it, I will adopt (2) with the given rationale.
I tend to prefer 2. It seems more consistent with existing syntax.
Thanks, --Lorenzo

In any case, why is (2) better than (3)? What was the rationale for such a syntax in Elements of Programming? (At the end I can adopt whatever syntax but I will need to justify the choice in a "rationale" section.)
template< typename T > concept EqualityComparable = ... // (1) no good because concepts are not templates
concept EqualityComparable< typename T > = ... // (2)
concept <typename T> EqualityComparable = ... // (3)
I really can't say where the concept syntax in EoP came from, but the notation is traditionally mathematical.
If you buy into the idea that a concept is a function on template arguments, then I think that 2 fits with existing C++ syntax better than 3. After all, we don't write the type of functions ahead of their declarator. That is we don't declare min() as "T (T, T) min", where "T (T, T)" is the function type of min.
I think this is a reasonable rationale for (2) over (3) and (1). Unless someone can point out a reason against it, I will adopt (2) with the given rationale.
I tend to prefer 2. It seems more consistent with existing syntax.
I find following the notation of EoP a sufficient rationale in itself. You are choosing a syntax that, by now, many people in the industry are familiar with. Regards, &rzej

On Thu, Sep 27, 2012 at 1:21 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
On Thu, Sep 27, 2012 at 7:29 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
I'm glad to see this discussion.
I'm really glad to see N3351 author(s) part of it :) BTW, do you know if a clang-based compiler supporting N3351 is actually being implemented?
ConceptClang is attempting to implement both the N2914 and N3351 concepts: http://www.crest.iu.edu/projects/conceptcpp/ The website also has a link to the C++Now talk I recently gave on the nature of concepts and the current state of the ConceptClang implementation. Please note that, as of now, the implementation is a work in progress. I'm working tirelessly to make it stable soon, while I extract theoretical findings towards completing my PhD thesis (also soon). Thanks, -- Larisse.
`` At the same time, we plan to build a compiler (based on Clang) for the emerging language. This prototype will support the full range features described in this report. We hope to have an initial version ready by early February, 2012. ''
Just to give a peek into a possible future of what Boost.Contract _could_ do:
// Header: boost/contract/std/concept/equality_comparable.hpp
namespace contract { namespace std {

CONTRACT_CONCEPT(
    concept (EqualityComparable) ( typename T ) (
        requires( (T) a, (T) b, (T) c ) (
            bool{a == b},
            bool{a != b},
            axiom( iff(a == b) eq(a, b) ),
            axiom(
                a == a,
                if(a == b) b == a,
                if(a == b and b == c) a == c
            ),
            axiom( iff(a != b) not(a == b) )
        )
    )
)

CONTRACT_CONCEPT(
    concept (EqualityComparable) ( typename T1, typename T2 ) (
        EqualityComparable<T1>,
        EqualityComparable<T2>,
        (Common<T1, T2>),
        (EqualityComparable<CommonType<T1, T2> >),
        requires( (T1) a, (T2) b ) (
            bool{a == b},
            bool{b == a},
            bool{a != b},
            bool{b != a},
            axiom(
                typedef (CommonType<T1, T2>) C,
                iff(a == b) C{a} == C{b},
                iff(a != b) C{a} != C{b},
                iff(b == a) C{b} == C{a},
                iff(b != a) C{b} != C{a}
            )
        )
    )
)

} } // namespace
// Header: boost/contract/std/algorithm/find.hpp
namespace contract { namespace std {

// Not fully compatible.
CONTRACT_FUNCTION(
    template( typename I, typename T )
        requires( InputIterator<I>, (EqualityComparable<T, ValueType<I> >) )
    (I) (find) ( (I) first, (I) last, (T const&) value )
) {
    return std::find(first, last, value);
}

} } // namespace
BTW, why do we declare concepts like:
concept EqualityComparable< typename T > = ... // (1)
Instead of:
template< typename T > concept EqualityComparable = ... // (2)
I personally prefer (2) because the template parameters are declared with a syntax that is more consistent with the syntax used to declare them in class/function templates. For similar reasons, I'd prefer { ... } instead of = ... for the concept definition part.
Thanks. --Lorenzo
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

AMDG On 09/27/2012 07:29 AM, Andrew Sutton wrote:
For instance, axiom EqualityComparable::Reflexivity states that some (rather than every) objects must compare equal to themselves. If you consider type float, objects with value NaN (not-a-number) compare unequal.
No; all well-formed values of a type can be compared for equality (if the type is EqualityComparable). NaN is not a well-formed value. It is literally *not a number*.
Unless you mean to say that any program that uses a NaN in any way has undefined behavior, this notion of a "well-formed value" seems arbitrary and capricious. In Christ, Steven Watanabe

For instance, axiom EqualityComparable::Reflexivity states that some (rather than every) objects must compare equal to themselves. If you consider type float, objects with value NaN (not-a-number) compare unequal.
No; all well-formed values of a type can be compared for equality (if the type is EqualityComparable). NaN is not a well-formed value. It is literally *not a number*.
Unless you mean to say that any program that uses a NaN in any way has undefined behavior, this notion of a "well-formed value" seems arbitrary and capricious.
This entry in Bjarne Stroustrup's FAQ: http://www.stroustrup.com/C++11FAQ.html#axioms presents a reasonable "semantics" of axioms. If an axiom states an equivalence between two expressions, compilers and other tools are allowed to assume that one expression can be substituted for the other, and to perform optimizations or other code transformations or analyses based on this assumption, regardless of whether the assumption actually holds. When you instantiate template FUN with type X, you are responsible for guaranteeing that this assumption will hold.

Under this interpretation we do not care (while doing transformations) whether we get a NaN or not. In fact, the following axiom: x / y * y <=> x; could be used to avoid the generation of a NaN value for some values of x and y. Of course, such "allowance of substitution" would only make sense inside concept-constrained function templates (or member functions of class templates), because in "free code" (i.e., code with no templates) you may want to rely on the semantics of NaNs.

Also, with the previous concept design that used concept maps, a type could model some concept in more than one way. For instance, type int models concept Monoid in two ways: using addition and identity element 0, as well as using multiplication and element 1. I am not sure how making use of axioms would work in such situations unless we are in a constrained template. Regards, &rzej

Unless you mean to say that any program that uses a NaN in any way has undefined behavior, this notion of a "well-formed value" seems arbitrary and capricious.
The notion of "well-formed value" comes from Elements of Programming. NaN is given as an example of something that is not well-formed. It's defined in the first couple of pages, IIRC. Any state of an object that does not represent a value (e.g., NaN does not represent a real number) is not a well-formed value. I can assure you that neither Alex nor Paul would consider the definition to be arbitrary or capricious.

A program that checks for NaN as the result of an invalid computation does not have undefined behavior, because it is explicitly checking for that state. A program that uses NaN as if it were a real number should have undefined behavior. It is easy to construct examples of programs where NaN breaks the postconditions of STL algorithms:

double s[] { NaN };
assert(find(s, s + 1, NaN) == s); // will fail

double s[] { ..., NaN, ... };
sort(s, s + N);
assert(is_sorted(s, s + N)); // probably fails

Comparisons on NaN do not conform to the usual semantics of equality and ordering, and so these programs have undefined behavior. This is because the algorithms are generic and are only concerned with the comparison of values, not with checking those computations for invalid results. There's nothing wrong with having a type, or set of types, whose representation includes an ill-formed value. It can be very useful for checking error states. However, such states may not be included in the set of values represented by that type. Andrew

AMDG On 10/01/2012 03:43 AM, Andrew Sutton wrote:
Unless you mean to say that any program that uses a NaN in any way has undefined behavior, this notion of a "well-formed value" seems arbitrary and capricious.
The notion of "well-formed value" comes from Elements of Programming. NaN is given as an example of something that is not well-formed. It's defined in the first couple of pages, IIRC. Any state of an object that does not represent a value (e.g., NaN does not represent a real number) is not a well-formed value. I can assure you that neither Alex nor Paul would consider the definition to be arbitrary or capricious.
Well, I beg to differ.
A program that checks for NaN as the result of an invalid computation does not have undefined behavior because it is explicitly checking for that state. A program that uses NaN as if it were a real number should have undefined behavior.
That's no different from saying that a program that uses 2 + 3i as if it were a real number has undefined behavior. If you make an assumption about the value of an object, then it's likely to be undefined behavior if that assumption doesn't hold.
It is easy to construct examples of programs where NaN breaks the postconditions of STL algorithms:
double s[] { NaN }; assert(find(s, s + 1, NaN) == s); // will fail
double s[] { ..., NaN, ... }; sort(s, s + N); assert(is_sorted(s, s + N)); // probably fails
Comparisons on NaN do not conform to the usual semantics of equality and ordering, and so the programs have undefined behavior.
Some algorithms obviously won't work, but others like copy that don't depend on comparison will work fine. You can even use find or is_sorted if you pass a custom predicate that handles NaNs in a way consistent with what the algorithm expects. In short, the fact that NaNs cause undefined behavior is a property of the concept map of comparison for double, not a property of double in itself.
This is because the algorithms are generic and are only concerned with the comparison of values, not the checking of those computations for invalid results.
There's nothing wrong with having a type or set of types with a representation that includes an ill-formed value. It can be very useful for checking error states. However, such states may not be included in the set of values represented by that type.
As far as I'm concerned, any value that can exist at all is a member of the set of values represented by the type. The fact that some values are forbidden in certain contexts is irrelevant. A NaN:
- Can have various basic operations applied to it with well-defined results.
- Can appear in program input and output.
- Can be used safely in some generic algorithms.
Therefore, I maintain that a NaN is a legitimate value. Saying that it is ill-formed depends solely on the intent of the programmer in specific cases and is not a universal truth. In Christ, Steven Watanabe

That's no different from saying that a program that uses 2 + 3i as if it were a real number has undefined behavior.
I'm not sure that this is the same thing. What you have here is a potential type error. The previous question was whether NaN is a well-formed value. I should amend that statement: NaN is not a well-formed value when interpreted as a real number. It is a perfectly reasonable state for a floating point object. I had previously omitted that last part ("when interpreted as a real number"), and I shouldn't have. It's actually very important to understanding when the use of NaN can lead to undefined behavior and when it cannot. And why.
If you make an assumption about the value of an object, then it's likely to be undefined behavior if that assumption doesn't hold.
That is certainly true.
Some algorithms obviously won't work, but others like copy that don't depend on comparison will work fine. You can even use find or is_sorted if you pass a custom predicate that handles NaN's in a way consistent with what the algorithm expects. In short, the fact that NaN's cause undefined behavior is a property of the concept map of comparison for double, not a property of double in itself.
Algorithms where NaN is interpreted as a real number, whether through equality, ordering, or numeric operations, result in undefined behavior in the sense that the postconditions of the algorithm are broken. The previous examples of find() and sort() fall into that category. Copy does not require NaN to be interpreted as a real number, so a program copy()ing sequences of doubles containing NaNs will not result in undefined behavior.

Custom predicates change the interpretation of the floating point state. The find_if() algorithm does not impose any interpretation on the iterator's value type; the predicate does. Passing isnan() as a predicate does not interpret the value type as a real number, just as a bit pattern. A lambda doing this: [](double n) { return n < 0.0; } does. It may result in undefined behavior.

Similarly, a custom relation can impose alternative rules for NaN, making it act like optional<double>. But this is not an interpretation of NaN as a real number. It is an interpretation that extends the real numbers with an optional null, missing, or maybe state.

This has nothing to do with concept maps. That was a language feature. This is far more fundamental.
As far as I'm concerned, any value that can exist at all is a member of the set of values represented by the type.
The meaning of "value" is not arbitrary. The C++ community seems to have converged on the idea that the set of states we associate with a type's "value" is the set of those elements which can be distinguished by equality comparison. That set of states tends to conform to some abstraction: naturals, integers, reals, iterators (or positions), sequences, and sets. The notion of "value" shouldn't simply be taken as the set of valid states encoded by a type.
The fact that some values are forbidden in certain contexts is irrelevant.
I am not claiming that programs written using NaN are necessarily wrong, but I do disagree that this is irrelevant. Understanding how a type's states are interpreted in different contexts is important to understanding the behavior of our programs, and whether (or more likely when) we invoke undefined behavior by breaking assumptions. Andrew

on Mon Oct 01 2012, Steven Watanabe <watanabesj-AT-gmail.com> wrote:
AMDG
On 10/01/2012 03:43 AM, Andrew Sutton wrote:
Unless you mean to say that any program that uses a NaN in any way has undefined behavior, this notion of a "well-formed value" seems arbitrary and capricious.
The notion of "well-formed value" comes from Elements of Programming. NaN is given as an example of something that is not well-formed. It's defined in the first couple of pages, IIRC. Any state of an object that does not represent a value (e.g., NaN does not represent a real number) is not a well-formed value. I can assure you that neither Alex nor Paul would consider the definition to be arbitrary or capricious.
Well, I beg to differ.
Yeah, I have problems with the concept of "well-formedness" as well, because it adds an exception to every statement. It's not that it's an invalid way to look at things, but it merely complicates reasoning about code with no offsetting benefit.
A program that checks for NaN as the result of an invalid computation does not have undefined behavior because it is explicitly checking for that state. A program that uses NaN as if it were a real number should have undefined behavior.
That's no different from saying that a program that uses 2 + 3i as if it were a real number has undefined behavior. If you make an assumption about the value of an object, then it's likely to be undefined behavior if that assumption doesn't hold.
+1
It is easy to construct examples of programs where NaN breaks the postconditions of STL algorithms:
double s[] { NaN }; assert(find(s, s + 1, NaN) == s); // will fail
double s[] { ..., NaN, ... }; sort(s, s + N); assert(is_sorted(s, s + N)); // probably fails
Comparisons on NaN do not conform to the usual semantics of equality and ordering, and so the programs have undefined behavior.
Zero breaks the postconditions of division. So?
A NaN:
- Can have various basic operations applied to it with well-defined results.
- Can appear in program input and output.
- Can be used safely in some generic algorithms.
Therefore, I maintain that a NaN is a legitimate value. Saying that it is ill-formed depends solely on the intent of the programmer in specific cases and is not a universal truth.
+1 -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

Zero breaks the postconditions of division. So?
Not always. Dividing by 0.0 is a valid operation for IEEE floating point types, resulting in inf. But that only applies to IEEE floating point types. Perform the same computation with a rational number type and you might get an assertion or an exception. Or you might not; it depends on your choice of implementation.

One of the goals of looking at semantics in this way is to help programmers document and better characterize the requirements of their algorithms. An algorithm that computes generically over Fields necessarily excludes NaN as an input and must guard against division by 0. An algorithm that computes over, say, ExtendedReals may allow division by 0, but also excludes NaN. A generic algorithm designed specifically for floating point types allows both division by 0 and NaN as inputs.

The point isn't to change the meaning of existing programs by excluding certain values or expressions. Axioms should present a consistent logical framework that precisely but generally describes the semantics of our existing programs. This is not easily done.

Hello, 06.10.2012 0:28, Andrew Sutton wrote:
Axioms should present a consistent logical framework that precisely but generally describes the semantics of our existing programs. This is not easily done.
Is it legal "to state" with axioms something that is not true in the general case? For instance, let's say we have some class representing floating-point arithmetic (IEEE 754). Is it legal "to state" with axioms that its addition operation is associative (which is not true)? If yes, then neither the compiler nor any other tool must do any checks on axioms beyond the syntactic ones. Best Regards, Evgeny

On 10/5/12, Evgeny Panasyuk <evgeny.panasyuk@gmail.com> wrote:
Hello,
Axioms should present a consistent logical framework that precisely but generally describes the semantics of our existing programs. This is not easily done.
Is it legal "to state" with axioms something that is not true in the general case?
That's an open question--for me. There may be a good answer to that question in the literature, or one addressed in other domains. It may be sufficient to simply assume that floating point operations satisfy the abstract mathematical requirements. We generally do anyway.

06.10.2012 5:06, Andrew Sutton wrote:
Is it legal "to state" with axioms something that is not true in the general case? That's an open question--for me. There may be a good answer to that question in the literature or addressed in other domains.
Just to clarify: the question was about C++ axioms. Is it illegal (from the formal point of view of C++ axioms) to have:

axiom { a.size() == a.length(); }

and:

int T::size() { return 0; }
int T::length() { return 1; }

? Does the compiler have the authority (from the formal point of view of C++ axioms) to stop compilation with an error in such a case (of course, only if it is able to diagnose it)?
It may be sufficient to simply assume that floating point operations satisfy the abstract mathematical requirements. We generally do anyways.
In most cases we may assume floating-point addition associativity; in some cases not. Currently we may control this with compiler-dependent flags, for instance the /fp:fast and /fp:precise flags of the MSVC compiler - with /fp:fast, MSVC 2012 does auto-vectorization (SSE) of floating-point accumulation. If the compiler has the right to reject wrong axioms when it is able to diagnose them, then we can't use C++ axioms to express in code our assumption of floating-point addition associativity.

n3351:
1. "axioms, which express semantics and must not be checked by the compiler."
2. "Axioms express the semantics of a concept’s required syntax; they are assumed to be true and must not be checked by the compiler beyond conformance to the C++ syntax."

n2887:
3. "Obviously, an implementation may try to check the validity of the assumption and give us a suitable error message if it is violated."
4. "As for the “assumptions” that we have had for decades, an implementation may (optionally as a quality of implementation issue) check some of our uses that can be considered constrained by axiom."

So, I am confused about the real intention. Best Regards, Evgeny

Does the compiler have the authority (from the formal point of view of C++ axioms) to stop compilation with an error in such a case (of course, only if it is able to diagnose it)?
I believe it should, but I do not know the mechanism that it should use to diagnose the error.
If the compiler has the right to reject wrong axioms when it is able to diagnose them, then we can't use C++ axioms to express in code our assumption of floating-point addition associativity.
Whatever mechanism is eventually designed to check axioms would have to accommodate floating point issues. This is an area of active investigation. What you had in n3351 and n2887 were initial attempts to say, in general, what was desired, but not exactly how it should work. In n3351, we didn't know what we wanted the compiler to do with axioms; describing language mechanics wasn't the primary focus of that work. In n2887, the authors had a better idea of what they wanted, but didn't (if I'm recalling correctly) specify the mechanics of how axioms could be checked.
So, I am confused about real intention.
The intent is to state requirements. Translating that to a working language specification is a work in progress. Andrew

06.10.2012 17:11, Andrew Sutton wrote:
Does the compiler have the authority (from the formal point of view of C++ axioms) to stop compilation with an error in such a case (of course, only if it is able to diagnose it)? I believe it should, but I do not know the mechanism that it should use to diagnose the error.
There are several unpleasant aspects of this path (giving "stop compilation" authority to the compiler when axioms do not hold):

1. Requiring a type to always fulfil the axioms means that we can't use types which are "partial" with respect to a given axiom, i.e. types which fulfil the axioms only on some subset of values, or in some subset of cases.

2. I can't even imagine how the compiler could do the verification in the general case. For instance, suppose we have a class that represents a set in a remote database, with an associative union operation defined on it. How should the compiler check the associativity of such an operation? Obviously, it is impossible to do such a check at compile time; the objects don't even have actual data members, only handles. And even if the compiler had some data, how would it prove any property of the operations? It can't even prove that these operations will halt! (the halting problem)

3. The hardness of checking leads to a situation where some compilers may check more cases than others. Even newer versions of the same compiler can be smarter than their predecessors. Does that mean that code which compiled OK on an older version of the compiler will be broken on a newer, "smarter" one?

Best Regards, Evgeny

On 7 October 2012 19:29, Evgeny Panasyuk <evgeny.panasyuk@gmail.com> wrote:
3. The hardness of checking leads to a situation where some compilers may check more cases than others. Even newer versions of the same compiler can be smarter than their predecessors. Does that mean that code which compiled OK on an older version of the compiler will be broken on a newer, "smarter" one?
I'd say yes. Is this really a different situation than a more aggressive optimizing compiler breaking old code that happened to be dependent on undefined behavior which just happened to work in the past? -- Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> (847) 691-1404

08.10.2012 21:46, Nevin Liber wrote:
3. The hardness of checking leads to a situation where some compilers may check more cases than others. Even newer versions of the same compiler can be smarter than their predecessors. Does that mean that code which compiled OK on an older version of the compiler will be broken on a newer, "smarter" one?
I'd say yes.
Is this really a different situation than a more aggressive optimizing compiler breaking old code that happened to be dependent on undefined behavior which just happened to work in the past?
Good example. But even in this example, in the case of aggressive optimization, the code is broken at runtime - the compiler does not reject it at compile time. If a user states something with axioms which is not true, a more aggressive compiler also has the right to break the code at runtime, i.e. by substituting something based on its knowledge of the axioms. The real question is about compile-time code rejection. Or, more generally, compilers aside: is it legal (from a formal point of view) to state with axioms something that is not true in all cases, even if we understand all the side effects? I'm inclined to think it should be legal - users take ALL responsibility regarding axioms in ANY CASE (for instance, because it is impossible to check them automatically in the general case), and compilers MUST not check/complain. Best Regards, Evgeny

Is it legal "to state" with axioms something that is not true in the general case?
That's an open question--for me. There may be a good answer to that question in the literature or addressed in other domains.
Axioms have not been standardized so far, nor has anyone prepared a formal, unambiguous specification of how they would be handled. People have different views on, and expectations of, axioms. N2887 has been the best attempt so far to specify what axioms are supposed to mean to the compiler and other tools, so some answer can be given based on N2887. Basically, what it says is that axioms only reflect the current practice of writing generic code, rather than inventing something new.

So what do you think of STL algorithms that operate on doubles in C++11?

void find_me( std::vector<double> const& vec, double n )
{
    std::find(vec.begin(), vec.end(), 10.2);
    std::find(vec.begin(), vec.end(), n);
}

Is it legal? Is it correct? Is it good practice? Can 'n' be NaN?

My answer is: STL algorithms will make the assumption that operations on 'T' are symmetrical, and that comparisons for 'T' meet the three relations (reflexivity, symmetry, transitivity). STL implementations will not in general verify these assumptions, but they may use them in as many places as they want, and you will never know where. Some implementations may do some run-time checks (in debug versions) where they know they could not spoil the program, but this is a QoI issue.

Should the programmer instantiate STL algorithms for 'T' == double? My answer is: they know the risks (of potentially not meeting the assumptions); they should assess whether the risk is low, or make sure they eliminate the usage of NaNs, and they should make the call at their own risk. If the program has a bug due to type double not having met the assumptions on 'T', it is the fault of the programmer who requested the instantiation of the algorithm with type double.

Axioms (according to N2887) do not change this situation. An assumption about type 'T' is never 'valid' or 'invalid'. It is just an assumption that one may use or not. When you request an instantiation of an algorithm constrained with an 'axiom-rich' concept with type X, you take responsibility for whether X meets the expectations expressed in the concept.
Just to clarify: the question was about C++ axioms. Is it illegal (from the formal point of view of C++ axioms) to have:

axiom { a.size() == a.length(); }

and:

int T::size() { return 0; }
int T::length() { return 1; }

? Does the compiler have the authority (from the formal point of view of C++ axioms) to stop compilation with an error in such a case (of course, only if it is able to diagnose it)?
It may be sufficient to simply assume that floating point operations satisfy the abstract mathematical requirements. We generally do anyway.
In most cases we may assume floating-point addition associativity; in some cases not. Currently we may control this with compiler-dependent flags, for instance the /fp:fast and /fp:precise flags of the MSVC compiler - with /fp:fast, MSVC 2012 does auto-vectorization (SSE) of floating-point accumulation. If the compiler has the right to reject wrong axioms when it is able to diagnose them, then we can't use C++ axioms to express in code our assumption of floating-point addition associativity.
This question is somewhat similar to: can the compiler refuse to compile the following code?

void Terminator() noexcept { throw std::runtime_error(""); }

We know that the compiler must not in general check this, but in this particular case it is obvious that someone has planted a bug. The answer is still: the compiler must not reject this code. However, it is allowed to report a warning (compilation warnings are outside the scope of the C++ standard), and it would be surprising if a good compiler didn't. And you are allowed to set a compiler option to treat warnings (at least of this type) as errors.

The good strategy for compilers regarding axioms would be to make use of axioms only if they have some way of verifying that type 'X' meets them (either by limited static code analysis or by run-time checks). This might require selecting only the subset of axioms that the compiler is able to check, and utilizing only that subset. Regards, &rzej

08.10.2012 14:06, Andrzej Krzemienski wrote:
Should the programmer instantiate STL algorithms for 'T' == double? My answer is: they know the risks (of potentially not meeting the assumptions), they should assess if the risk is low or make sure they eliminate the usage of NaNs, and they should make the call at their own risk. If the program has a bug due to type double not having met the assumptions on 'T' it is the fault of the programmer who requested the instantiation of the algorithm with type double.
Axioms (according to N2887) do not change this situation. An assumption about type 'T' is never 'valid' or 'invalid'. It is just an assumption that one may use or not. When you request an instantiation of an algorithm constrained with an 'axiom-rich' concept with type X, you take responsibility for whether X meets the expectations expressed in the concept.
So, all responsibility is on the user, and in that case the compiler MUST not reject code if it finds that a type does not fulfil the axioms. Is that right?
The good strategy for compilers regarding axioms would be to make use of axioms only if they have some way of verifying that type 'X' meets them (either by limited static code analysis or by run-time checks). This might require selecting only the subset of axioms that the compiler is able to check, and utilizing only that subset.
So, if a type does not fulfil the axioms, then the compiler must not complain AND it must not make any assumptions or optimizations based on those axioms? So, within this approach, if a user passes some IEEE 754 floating-point type to an algorithm which expects types with an associative addition operation, the compiler MUST not assume associativity and can't do optimizations, is that right? Even if the user wants to take all responsibility and accepts all the side effects? That does not sound right - if all responsibility for checking is on the user, the compiler should be free to exploit the knowledge in the axioms.

Currently, I'm inclined toward the following approach:
1. The compiler MUST not do any checks on axioms besides syntactic ones.
2. The user has ALL responsibility for verifying axioms.
3. The compiler has the right to exploit any knowledge it gets from axioms, regardless of whether the type fulfils them or not.

I.e., in the following example:

axiom { a.size() == a.length(); }

and:

int T::size() { return 0; }
int T::length() { return 1; }

the compiler has the right to substitute a.size() with a.length(); the user takes all responsibility for such cases.

What I would also like to see, or at least to discuss, is a requirement to explicitly state that a type fulfils the axioms, maybe with something similar to a non-intrusive traits approach. That would be more robust than having requirements only on the algorithm side which are silently accepted. It would also allow us to overload functions on axioms. For instance, some types may have exactly the same syntactic requirements but fulfil different axioms. In general the compiler can't check axioms, so it can't select the right overload. If axioms were not used for overloading, then we would have to "duplicate" them with traits. Best Regards, Evgeny

On Mon, Oct 8, 2012 at 12:02 PM, Evgeny Panasyuk <evgeny.panasyuk@gmail.com> wrote:
Currently, I'm inclined toward the following approach: 1. The compiler MUST not do any checks on axioms besides syntactic ones. 2. The user has ALL responsibility for verifying axioms. 3. The compiler has the right to exploit any knowledge it gets from axioms, regardless of whether the type fulfils them or not.
IMO, this is a rather weak motivation for supporting axioms, especially in the core language (just the syntax check of some code that could otherwise, and almost equivalently, be written using code comments), but it seems to be the only sensible thing to do with axioms. Despite the weak motivation, I'm thinking of supporting axioms in Boost.Contract so as to check them syntactically. Ultimately, if users don't want to use axioms, they can simply not use them (and maybe document the concept semantics via code comments). --Lorenzo

08.10.2012 23:40, Lorenzo Caminiti wrote:
IMO, this is a rather weak motivation to support axioms especially in the core language (just the syntax check of some code that could otherwise and almost equivalently be written using code comments) but it seems to be the only sensible thing to do with axioms.
I hope C++ axioms are not about any kind of checking; otherwise some other name should be chosen. Axioms are:
1) A formal and clear language to talk about semantics. A formal, small, and clear language which may become standard for C++ developers is much better than comments.
2) A language that is understandable by the compiler. Possible use cases:
a) Overload resolution based on semantic properties.
b) Optimizability: a very strong side of C++ may become even stronger. However, the rules regarding which kinds of optimizations the compiler is allowed to do with axioms should be well-defined. Personally, I'd trade lambdas for well-defined axiom/semantic-based optimizations allowed by ISO.
Best Regards, Evgeny

On Mon, Oct 8, 2012 at 1:32 PM, Evgeny Panasyuk <evgeny.panasyuk@gmail.com> wrote:
08.10.2012 23:40, Lorenzo Caminiti wrote:
IMO, this is a rather weak motivation to support axioms especially in the core language (just the syntax check of some code that could otherwise and almost equivalently be written using code comments) but it seems to be the only sensible thing to do with axioms.
I hope C++ Axioms are not about any kind of checking, otherwise some other name should be chosen. Axioms are: 1) Formal and clear language to talk about semantics.
I disagree, I think it'd be great if compilers could check the semantics of programs (I know it's impossible, but it'd be great). So given that axioms state the semantics of concepts, and that I'd like the compiler to check the semantics of programs, I'd like the compiler to check the axioms (but again, unfortunately, I understand that it cannot be done).
Formal, small and clear language, which may became standard for C++ developers is much better than comments. 2) That language is understandable by compiler. Possible use cases: a) Overload resolution based on semantic properties b) Optimizability - very strong side of C++ may become even stronger. However, rules regarding which kind of optimizations compiler allowed to do with axioms, should be well-defined. Personally, I'd trade lambdas, for well-defined axiom/semantic-based optimizations allowed by ISO.
--Lorenzo

09.10.2012 3:03, Lorenzo Caminiti wrote:
I hope C++ Axioms are not about any kind of checking, otherwise some other name should be chosen. Axioms are: 1) Formal and clear language to talk about semantics.
I disagree, I think it'd be great if compilers could check the semantics of programs (I know it's impossible, but it'd be great). So given that axioms state the semantics of concepts, and that I'd like the compiler to check the semantics of programs, I'd like the compiler to check the axioms (but again, unfortunately, I understand that it cannot be done).
Besides the fact that axioms can't be checked in a general way, just imagine that they could: what about types which fulfil the axioms only on some set of values, and only in some cases? How would the "checker" distinguish cases where it is OK to have types which fulfil the axioms only partly from cases where it is not? Let's get back to NaNs:

    concept TotallyOrdered<EqualityComparable T> =
        requires (T a, T b, T c) {
            ...
            axiom { ... a < b || b < a || a == b; }
            ...
        };

____
    #include <iostream>
    #include <limits>
    #include <cassert>
    using namespace std;

    int main()
    {
        double a = numeric_limits<double>::quiet_NaN();
        double b = numeric_limits<double>::quiet_NaN();
        cout << (a < b);
        cout << (b < a);
        cout << (a == b);
        assert( (a < b) || (b < a) || (a == b) );
        return 0;
    }

http://ideone.com/Q8l17
stdout: 000
stderr: prog: prog.cpp:13: int main(): Assertion `(a < b) || (b < a) || (a == b)' failed.
____

OK, suppose the compiler proves that the type double does not fulfil the TotallyOrdered concept's axiom. It will then reject the following algorithm when used on an array of doubles:

    template<ForwardIterator I, TotallyOrdered T, Relation<ValueType<I>, T> R>
    I lower_bound(I first, I last, const T& value, R comp);

Do you agree to forbid the usage of lower_bound on an array of doubles? Or what mechanism should be used to allow types which fulfil the axioms only partly? axiom_cast<double, TotallyOrdered>? Best Regards, Evgeny

09.10.2012 4:24, Andrew Sutton wrote:
Or what mechanism should be used to allow types which fulfil axioms only partly? axiom_cast<double,TotallyOrdered> ?
I think Andrzej's last comments may address your questions. They were very insightful and worth reading closely.
Thank you for note, I will check. Best Regards, Evgeny

09.10.2012 4:24, Andrew Sutton wrote:
Or what mechanism should be used to allow types which fulfil axioms only partly? axiom_cast<double,TotallyOrdered> ?
I think Andrzej's last comments may address your questions. They were very insightful and worth reading closely.
I read Andrzej's comments, and in fact I had already answered the last one. My question, which you quoted above, was addressed to those who think that automatic axiom verification is a good idea. As I said earlier, in my opinion the compiler MUST NOT do any checks on axioms besides syntactic ones, and the user has ALL the responsibility for verifying axioms. Best Regards, Evgeny

Or what mechanism should be used to allow types which fulfil axioms only partly? axiom_cast<double,TotallyOrdered> ?
I would prefer something like this - explicitly overriding axiom-checking in the few cases where this would need to be done - over losing the benefits of the compiler checking the axioms in cases where it can (which are probably also few, but they exist). Regards, Nate

On Mon, Oct 8, 2012 at 10:34 PM, Nathan Ridge <zeratul976@hotmail.com> wrote:
Or what mechanism should be used to allow types which fulfil axioms only partly? axiom_cast<double,TotallyOrdered> ?
I would prefer something like this - explicitly overriding axiom-checking in the few cases where this would need to be done - over losing the benefits of the compiler checking the axioms in cases where it can (which are probably also few, but they exist).
Maybe we could use the difference between bound and unbound variables to say what can be checked:

    template <typename Iterator, typename T>
    void find(Iterator first, Iterator last, T value)
    {
        axiom { x::T. x == x }
        [ ... ]
    }

would allow the compiler to check all the values the function find sees, i.e. all values in [first, last) and value. And:

    template <typename Iterator, typename T>
    void find(Iterator first, Iterator last, T value)
    {
        axiom { /\x in T. x == x }
        [ ... ]
    }

The compiler can check all values in T, so it could show that double violates this because NaNs don't compare equal to themselves. Just a thought.
Regards, -- Felipe Magno de Almeida

On Mon, Oct 8, 2012 at 4:40 PM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
On Mon, Oct 8, 2012 at 12:02 PM, Evgeny Panasyuk <evgeny.panasyuk@gmail.com> wrote:
Currently, I am inclined toward the following approach: 1. The compiler MUST NOT do any checks on axioms besides syntactic ones. 2. The user has ALL the responsibility for verifying axioms. 3. The compiler has the right to exploit any knowledge it gets from the axioms, regardless of whether the type fulfils them or not.
IMO, this is a rather weak motivation to support axioms especially in the core language (just the syntax check of some code that could otherwise and almost equivalently be written using code comments) but it seems to be the only sensible thing to do with axioms.
Despite the weak motivation, I'm thinking to support axioms in Boost.Contract so to check them syntactically. Ultimately, if users don't want to use axioms, they can just not use them (and maybe document the concept semantics via code comments).
You find compiler optimization from axioms a weak motivation (3)? Why?
--Lorenzo
Regards, -- Felipe Magno de Almeida

On Mon, Oct 8, 2012 at 2:21 PM, Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote:
On Mon, Oct 8, 2012 at 4:40 PM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
On Mon, Oct 8, 2012 at 12:02 PM, Evgeny Panasyuk <evgeny.panasyuk@gmail.com> wrote:
Currently, I am inclined toward the following approach: 1. The compiler MUST NOT do any checks on axioms besides syntactic ones. 2. The user has ALL the responsibility for verifying axioms. 3. The compiler has the right to exploit any knowledge it gets from the axioms, regardless of whether the type fulfils them or not.
IMO, this is a rather weak motivation to support axioms especially in the core language (just the syntax check of some code that could otherwise and almost equivalently be written using code comments) but it seems to be the only sensible thing to do with axioms.
Despite the weak motivation, I'm thinking to support axioms in Boost.Contract so to check them syntactically. Ultimately, if users don't want to use axioms, they can just not use them (and maybe document the concept semantics via code comments).
You find compiler optimization from axioms a weak motivation (3)? Why?
Because if the primary role of axioms is to state a concept's semantics, then I'd expect them to be used to enforce the concept's semantics (with some type of compile-time checking) rather than for some optimization that, BTW, is going to be compiler-specific. You standardize axioms, but how would you standardize the optimizations that compilers will generate based on them? If such optimizations cannot be standardized (I don't think they can) and if that is the main reason for axioms, why are axioms standardized?

On a separate note, within the context of supporting axioms in a concept library, obviously (3) doesn't apply at all (but this is a different point). In any case weak != 0, so IMO there's some value != 0 in axioms as "comments that document the concept's semantics and that are checked for syntax errors". Please note that all of this is just my personal opinion. Thanks, --Lorenzo

09.10.2012 3:12, Lorenzo Caminiti wrote:
Because if the primary role of axioms is to state concept's semantics then I'd expect them to be used to enforce concept's semantics (with some type of compile-time checking) instead than using them for some optimization that BTW is going to be compiler-specific.
I don't see big problems with some degree of optimization being done by one compiler but not by another. We already have copy elision, which works better on some compilers than on others.
You standardize axioms, but how would you standardize the optimizations that compilers will generate based on them? If such optimizations cannot be standardized (I don't think they can) and if that is the main reason for axioms, why are axioms standardized?
I agree, and as I wrote earlier, well-defined rules about which kinds of transformations are allowed based on axioms should be clearly stated. I don't think it is an impossible task, but it is obviously not easy.

P.S. I just had a wild idea: what about axioms for a function's arguments? I.e., not on types, but on values. Maybe some kind of axiomatic_assert. For instance, it could be used to prevent pointer aliasing (the "restrict" keyword in C99):

    void* memcpy( void* dest, const void* src, size_t count )
    {
        axiomatic_assert( do_not_overlap(dest,src,count) );
        // ...
    }

Best Regards, Evgeny

P.S. I just have wild idea - what about axioms for function's arguments? i.e. not on types, but on values. Maybe some kind of axiomatic_assert. For instance that can be used to prevent pointers aliasing ("restrict" keyword in C99): void* memcpy( void* dest, const void* src, size_t count ) { axiomatic_assert( do_not_overlap(dest,src,count) ); // ... }
Is this not the same thing as preconditions in contract programming and Lorenzo's Boost.Contract library? Regards, &rzej

On Wed, Oct 10, 2012 at 12:57 AM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
P.S. I just have wild idea - what about axioms for function's arguments? i.e. not on types, but on values. Maybe some kind of axiomatic_assert. For instance that can be used to prevent pointers aliasing ("restrict" keyword in C99): void* memcpy( void* dest, const void* src, size_t count ) { axiomatic_assert( do_not_overlap(dest,src,count) ); // ... }
Is this not the same thing as preconditions in contract programming and Lorenzo's Boost.Contract library?
I think it is (of course, Boost.Contract preconditions are better because they are in declarations, const-correct, etc.). On this topic, I still haven't given up the desire of generating preconditions from axioms and checking them at run-time. N3351 says not to do so because axioms are not const-correct, but given that axioms only alter dummy variables declared within the concepts and not in user code, I don't see why that is a problem (it'd be like allowing preconditions to declare and alter local variables; the state of the user program outside the contract code would still remain unchanged). The major issue I see so far with generating and checking preconditions from axioms is that axioms are programmed using "magic" operations like eq, <=>, is_valid, etc. that cannot be implemented :( Thanks, --Lorenzo

On this topic, I still didn't give up the desire of generating preconditions from axioms and checking them at run-time. N3351 says not to do so because axioms are not constant-correct but given that axioms only alter dummy variables declared within the concepts and not in user code, I don't see why that is a problem (it'd be like allowing preconditions to declare and alter local variables, the state of the user program outside the contract code will still remain unchanged).
Or you could treat axioms more like invariants: check them before and after every [I do not know what]. The difference from contracts is that when you specify a contract you also implicitly say at which point you expect it to be checked. With axioms you just state an "absolute truth" and it is not clear how often it should be checked. Also note this example of an axiom that someone here has already brought up:

    axiom Allocation(size_t n) { malloc(n) <=> realloc(0, n); }

If you just create two hidden values p1 and p2:

    auto p1 = malloc(n);
    auto p2 = realloc(0, n);

and you try to compare the values, they will not be equal, but the axiom still holds.
The major issue I see so far with generating and checking preconditions from axioms is that axioms are programmed using "magic" operations like eq, <=>, is_valid, etc. that cannot be implemented :(
I think that to you (the concept library author) only the token <=> is magic. Defining the function templates eq(), is_valid(), and equality_preserving() will be the problem of the users of your library. These functions are particular to the STL, not to generic programming in general. Regards, &rzej

On Wed, Oct 10, 2012 at 2:24 AM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
On this topic, I still didn't give up the desire of generating preconditions from axioms and checking them at run-time. N3351 says not to do so because axioms are not constant-correct but given that axioms only alter dummy variables declared within the concepts and not in user code, I don't see why that is a problem (it'd be like allowing preconditions to declare and alter local variables, the state of the user program outside the contract code will still remain unchanged).
Or you could treat axioms more like invariants: check them before and after every [I do not know what]. The difference from contracts is that when you specify a contract you also implicitly say at which point you expect them to be checked. With axioms you just state an "absolute truth" and it is not clear how often it should be checked.
Also note this example of axiom that someone here has already brought up:
axiom Allocation(size_t n) { malloc(n) <=> realloc(0, n); }
If you just create two hidden values p1 and p2:
auto p1 = malloc(n); auto p2 = realloc(0, n);
The problem here is <=>, because I can't implement it (for example, I can't implement it using p1 and p2 as in the code above). The problem is not the idea of generating pre, inv, etc. for axioms and checking them at run-time even if they are not const-correct. The problem is that we write axioms using operations that cannot be implemented, like <=>, so we can't actually check axioms. Is it a good thing to write code that cannot be implemented, compiled, and executed? IMO, it's not; that's what code comments are for. (Again, that's just my opinion, and I already said that I see some value, small but > 0, in the syntax checking that we can do by programming axioms.)
If you try to compare the values the will be not equal, but the axiom still holds.
The major issue I see so far with generating and checking
preconditions from axioms is that axioms are programmed using "magic" operations like eq, <=>, is_valid, etc that cannot be implemented :(
I think that to you (the concept library author) only token <=> is the magic. Defining function templates eq(),
If <=> is defined as the equivalence operator:

    operation1 <=> operation2 ::= "operation1 and operation2 are equivalent in that they have the same effect on the state of the program" [1]

then what you say is true for eq too. If instead <=> is the logical if-and-only-if (iff):

    predicate1 <=> predicate2 ::= "predicate1 is true if and only if predicate2 is true" [2]

then eq() has a special meaning, as it is used together with the <=> iff to state equivalence as defined by [1]. N3351 uses [2], where both <=> and eq() are magic, but I'd prefer [1], where only <=> is magic (and more powerful). At least that is my understanding. I will probably implement [1] instead of [2] in my lib. BTW, under [2] a good alphanumeric name for <=> is "iff". However, what's a good alphanumeric name for <=> under [1]? (My lib can't use symbols.)

    operation1 <=> operation2
    (operation1) sameas (operation2)  // I like this
    (operation1) same_as (operation2)
    (operation1) equivals (operation2)
is_valid() and equality_preserving() will be the problem of the users of your library. These functions are particular to STL; not to generic programming.
True for all magic operations other than <=>; eq and the rest are user-defined magic. However, in N3351 equality_preserving and not_equality_preserving have this magic property that they override each other from one concept definition to the next... I'm not sure how users will ever be able to define such operations so as to implement the overriding property without language or library support (but then again, these operations are not implementable to begin with, so we don't have to worry about it). Thanks. --Lorenzo

10.10.2012 12:50, Lorenzo Caminiti wrote:
On Wed, Oct 10, 2012 at 12:57 AM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
P.S. I just have wild idea - what about axioms for function's arguments? i.e. not on types, but on values. Maybe some kind of axiomatic_assert. For instance that can be used to prevent pointers aliasing ("restrict" keyword in C99): void* memcpy( void* dest, const void* src, size_t count ) { axiomatic_assert( do_not_overlap(dest,src,count) ); // ... }
Is this not the same thing as preconditions in contract programming and Lorenzo's Boost.Contract library?
I think it is
I suppose that axiomatic_asserts MUST never be checked. For instance, because such conditions may not be true in all cases, like in a concept's axioms (however, there are several different opinions in this regard).
(of course, Boost.Contract preconditions are better because they are in declarations, const-correct, etc.).
I do not restrict axiomatic_assert to a function's body; some similar feature may appear in declarations too. The main idea is to have a general mechanism for giving away some portion of the semantics in favour of performance, i.e. to sign a contract with the compiler's optimizer. A concept's axioms apply only to types, not to specific variables, while axiomatic_assert can be used to state semantic relations on a specific set of variables.
On this topic, I still didn't give up the desire of generating preconditions from axioms and checking them at run-time.
In general, do you think that C++ axioms MUST always be preserved? Best Regards, Evgeny

10.10.2012 11:57, Andrzej Krzemienski wrote:
P.S. I just have wild idea - what about axioms for function's arguments? i.e. not on types, but on values. Maybe some kind of axiomatic_assert. For instance that can be used to prevent pointers aliasing ("restrict" keyword in C99): void* memcpy( void* dest, const void* src, size_t count ) { axiomatic_assert( do_not_overlap(dest,src,count) ); // ... }
Is this not the same thing as preconditions in contract programming and Lorenzo's Boost.Contract library?
1. I have just looked at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1962.html referenced in Boost.Contract. I assumed that it was not about performance, but only about runtime checking, so I am a little bit surprised that it considers performance as well. Anyway, I suppose that axiomatic_assert MUST never check its assertions; it is just a pure language for talking to the optimizer. And as with a concept's axioms, I think the user can legally (from the formal point of view) make an axiomatic_assert on properties which are not true in the general case, so this is the main difference. There are cases where checking preconditions would make it impossible to achieve the function's postconditions (I think already mentioned here), for instance: a) multi-pass traversal of a sequence represented by an InputIterator; b) lower_bound: if it checked that its input range is sorted, the postcondition regarding the logarithmic number of comparisons would be broken. Maybe this is OK in unit tests, but I don't think it is OK even in debug builds.

2. I don't think that Lorenzo's Boost.Contract library is able to enable any optimizations, at least while we don't have some kind of support from the compiler, whereas the main purpose of the axiomatic_assert I am thinking of is optimizability, like preventing pointer aliasing and so on. Best Regards, Evgeny

2012/10/10 Evgeny Panasyuk <evgeny.panasyuk@gmail.com>
10.10.2012 11:57, Andrzej Krzemienski wrote:
P.S. I just have wild idea - what about axioms for function's arguments?
i.e. not on types, but on values. Maybe some kind of axiomatic_assert. For instance that can be used to prevent pointers aliasing ("restrict" keyword in C99): void* memcpy( void* dest, const void* src, size_t count ) { axiomatic_assert( do_not_overlap(dest,src,count) ); // ... }
Is this not the same thing as preconditions in contract programming and Lorenzo's Boost.Contract library?
1. I have just looked at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1962.html referenced in Boost.Contract. I assumed that it was not about performance, but only about runtime checking. So, I am a little bit surprised that it considers performance also.
Anyway, I supposed that axiomatic_assert MUST never check it's assertions - it is just pure language to talk with optimizer. And as for concept's axioms, I think user legally (from the formal point of view) can make axiomatic_assert on properties which are not true in general case - so this is main difference.
There are cases, when checking of pre-conditions would make impossible to achieve function's post-conditions (I think already mentioned here), for instance: a) multi-passing sequence represented by Input Iterator. b) lower_bound: if check that it's input range is sorted, then post-condition regarding number of logarithmic comparisons will be broken. Maybe this is OK in unit-tests, but I don't think it is OK even in Debug builds.
2. I don't think that Lorenzo's Boost.Contract library able to enable any optimizations, at least while we don't have some kind of support from compiler. While main purpose of axiomatic_assert(which I think of) is optimizability, like preventing pointers aliasing and so on.
Ok, I think I now know what you are saying: axiomatic_assert states what compilers are allowed to assume, but not allowed to check. This does differ from the typical usage of preconditions, but I still believe that preconditions in the contract-programming sense should be able to offer this capability already. I am talking about contract programming as a language feature, because library-based solutions offer little help to the optimizer.

Lorenzo's library offers the capability of declaring preconditions that must never be executed. This addresses your examples above (the InputIterator-based range and the runtime complexity of lower_bound). See here: http://contractpp.svn.sourceforge.net/viewvc/contractpp/releases/contractpp_...

In short, you can express an assertion and attach a condition saying when (if ever) it should be checked. In ideal syntax, it would be written as:

    template <class It>
    requires InputIterator<It>
    size_t distance(It b, It e)
        precondition(is_range(b, e), MultiPassIterator<It>)
    { }

The second argument of precondition (MultiPassIterator<It>) says under what (static) condition the compiler is allowed to perform the check. If you want to disable the check forever (but still want to express that this is an assumption that compilers can make):

    precondition(is_range(b, e), false)

To wrap up, I believe that contract programming's primary feature is the ability to express assumptions that you take for granted. Being able to check them at run time is just a bonus. Regards, &rzej

On Thu, Oct 11, 2012 at 12:46 AM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
2012/10/10 Evgeny Panasyuk <evgeny.panasyuk@gmail.com>
10.10.2012 11:57, Andrzej Krzemienski wrote:
P.S. I just have wild idea - what about axioms for function's arguments?
i.e. not on types, but on values. Maybe some kind of axiomatic_assert. For instance that can be used to prevent pointers aliasing ("restrict" keyword in C99): void* memcpy( void* dest, const void* src, size_t count ) { axiomatic_assert( do_not_overlap(dest,src,count) ); // ... }
Is this not the same thing as preconditions in contract programming and Lorenzo's Boost.Contract library?
1. I have just looked at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1962.html referenced in Boost.Contract. I assumed that it was not about performance, but only about runtime checking. So, I am a little bit surprised that it considers performance also.
Anyway, I supposed that axiomatic_assert MUST never check it's assertions - it is just pure language to talk with optimizer. And as for concept's axioms, I think user legally (from the formal point of view) can make axiomatic_assert on properties which are not true in general case - so this is main difference.
There are cases, when checking of pre-conditions would make impossible to achieve function's post-conditions (I think already mentioned here), for instance: a) multi-passing sequence represented by Input Iterator. b) lower_bound: if check that it's input range is sorted, then post-condition regarding number of logarithmic comparisons will be broken. Maybe this is OK in unit-tests, but I don't think it is OK even in Debug builds.
2. I don't think that Lorenzo's Boost.Contract library able to enable any optimizations, at least while we don't have some kind of support from compiler. While main purpose of axiomatic_assert(which I think of) is optimizability, like preventing pointers aliasing and so on.
Ok, I think I now know what you are saying: axiomatic_assert states what compilers are allowed to assume, but what they are not allowed to check. This does differ from the typical usage of preconditions; but I still believe that preconditions in contract-programming sense should be able to offer this capability already. I am talking about contract-programming as language feature, because library-based solutions offer little help to the optimizer.
Lorenzo's library offers the capability of declaring preconditions that must never be executed. This addresses your examples above (with InputIterator-based range and runtime complexity of lower_bound). See here: http://contractpp.svn.sourceforge.net/viewvc/contractpp/releases/contractpp_...
In short, you can express an assertion and attach a condition when (if ever) it should be checked. In ideal syntax, it would be written as:
    template <class It>
    requires InputIterator<It>
    size_t distance(It b, It e)
        precondition(is_range(b, e), MultiPassIterator<It>)
    { }
A small note (not really relevant for this discussion) is that I recently found out that the ideal syntax to express assertion requirements is to use "static if" as proposed by N3329:

    template< typename I >
    requires InputIterator<I>
    size_t distance ( I b, I e )
        precondition {
            static if(MultiPassIterator<I>) {
                is_range(b, e);
            }
        }
    {
        ... // implementation
    }

Boost.Contract will adopt such a syntax in its 1st release: http://sourceforge.net/apps/trac/contractpp/ticket/101
The second argument in precondition (MultiPassIterator<It>) says under what (static) condition the compiler is allowed to perform the check. If you want to disable the check forever (but still want to express that this is an assumption that compilers can make),
precondition(is_range(b, e), false)
precondition { static if(false) is_range(b, e); }
To wrap up, I believe that contract-programming's primary feature is to be able to express assumptions that you may hold for granted. Being able to check these at run-time is just a bonus.
--Lorenzo

11.10.2012 11:46, Andrzej Krzemienski wrote:
Lorenzo's library offers the capability of declaring preconditions that must never be executed. This addresses your examples above (with InputIterator-based range and runtime complexity of lower_bound). See here: http://contractpp.svn.sourceforge.net/viewvc/contractpp/releases/contractpp_...
Ok, I see: there is overlap with axiomatic_assert. And thank you for the detailed answer!
To wrap up, I believe that contract-programming's primary feature is to be able to express assumptions that you may hold for granted. Being able to check these at run-time is just a bonus.
So it looks like there are several viewpoints on it. As far as I understand (sorry if I am wrong), Lorenzo wants checking not just as a bonus, but as one of the primary goals: "IMO, this is a rather weak motivation to support axioms especially in the core language (just the syntax check of some code that could otherwise and almost equivalently be written using code comments) but it seems to be the only sensible thing to do with axioms." Best Regards, Evgeny

On Mon, Oct 8, 2012 at 7:12 PM, Lorenzo Caminiti <lorcaminiti@gmail.com>wrote:
Because if the primary role of axioms is to state concept's semantics then I'd expect them to be used to enforce concept's semantics (with some type of compile-time checking) instead than using them for some optimization that BTW is going to be compiler-specific. You standardize axioms but how would you standardize the optimization that a compilers will generate based on them? If such optimization cannot be standardized (I don't think it can) and if that is the main reason for axioms, why are axioms standardized?
You don't need to standardize that certain specific optimizations must be made, but you can standardize what types of optimizations are allowed given the definition of certain axioms (transformation may be a better word here than optimization). To point to something somewhat similar already in C++, consider NRVO: the standard explicitly allows high-level optimizations that realistically could change the behavior of a program depending on the definition of a type's copy or move, yet these optimizations are not actually required to be performed, and in practice only some compilers perform them some of the time. Still, overall I think the specification of RVO in the standard has worked fairly well over the years, and I imagine the same would be true of code transformations performed due to the presence of certain axioms.

Also, while you often can't verify axioms (they are, after all, axioms), under certain circumstances a smart compiler may be intelligent enough to notice something like the provided axioms being contradictory, etc. In that case, even if the code is expected to compile, a given compiler could at least emit a warning (sort of like how some compilers warn when they can detect that something would result in undefined behavior, but not all do, and the code will generally still compile). You don't need to state in the standard exactly what types of code transformations must be performed and when, only which types of transformations are allowed in the presence of certain axiom definitions.

In any case weak != 0 so IMO there's some value != 0 in axioms as "comments that document the concept's semantics and that are checked for syntax errors".
I agree. -- -Matt Calabrese

On 8 October 2012 18:12, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
You standardize axioms but how would you standardize the optimization that a compilers will generate based on them?
If you violate them, you have undefined behavior, and the compiler can do whatever it wants. -- Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> (847) 691-1404

2012/10/8 Evgeny Panasyuk <evgeny.panasyuk@gmail.com>
08.10.2012 14:06, Andrzej Krzemienski wrote:
Should the programmer instantiate STL algorithms for 'T' == double? My answer is: they know the risks (of potentially not meeting the assumptions), they should assess if the risk is low or make sure they eliminate the usage of NaNs, and they should make the call at their own risk. If the program has a bug due to type double not having met the assumptions on 'T' it is the fault of the programmer who requested the instantiation of the algorithm with type double.
Axioms (according to N2887) do not change this situation. An assumption of type 'T' is never 'valid' or 'invalid'. It is just an assumption that one may use or not. When you request an instantiation of an algorithm constrained with an 'axiom-rich' concept with type X, you take the responsibility for X meeting or not the expectations expressed in the concept.
So, all the responsibility is on the user, and in that case the compiler MUST NOT reject code if it finds that a type does not fulfil the axioms. Is that right?
IMO - yes. However, note that this is only my interpretation of N2887, which says that axioms only state what compilers (and generic algorithm authors) can assume - nothing more. Another interpretation of axioms (which I am not in favor of) would be to say that axioms impose some constraints on type 'T' which must be met even if the compiler cannot check them. I also believe that the use of the term 'user' may be ambiguous. To be more precise I would say: by instantiating a generic algorithm constrained by concept C with concrete type X, you agree to all code transformations that may occur as a result of making use of C's axioms, regardless of whether X is prepared for these transformations or not.
The good strategy for compilers, regarding axioms, would be to make use of axioms only if they have some way of verifying if type 'X' meets them, (either by limited static code analysis or run-time checks). This might require selecting only a subset of all axioms that the compiler is able to check and utilize only this subset.
So, if a type does not fulfil the axioms, then the compiler must not complain AND it must not make any assumptions or optimizations based on those axioms? So, within this approach, if a user passes some IEEE 754 floating-point type to an algorithm that expects types with an associative addition operation, the compiler MUST NOT assume associativity and can't do the optimization, is that right? Even if the user wants to take all the responsibility and accepts all the side effects?
That does not sound right - if all the responsibility for checking is on the user, the compiler should be free to exploit the knowledge in the axioms.
Currently, I am inclined toward the following approach: 1. The compiler MUST NOT do any checks on axioms besides syntactic ones. 2. The user has ALL the responsibility for verifying axioms. 3. The compiler has the right to exploit any knowledge it gets from axioms, regardless of whether the type fulfils them or not. I.e., in the following example:
Again, under the interpretation in N2887, I would say that you are right about points (1) and (3), but not necessarily about (2). That is, you are right that no-one else besides the guy that instantiates the constrained template has any responsibility, but such a user does not have to make sure that the axioms hold either: in fact, he may deliberately request the instantiation because he knows the axioms do not hold and he wants to make use of this! One example provided by Matt is copy elision. Just suppose for a second that CopyElision is an axiom. Even if you know that type X has a destructor or copy constructor with side effects, you may still want to make use of the axiom, because you accept the pair-wise cancellation of the side effects of the two operations. (Note that if the compiler were checking whether type X satisfies axiom CopyElision, it would refuse to compile or would prevent a useful optimization.) Another example of an axiom for code transformations would be the following: axiom AvoidCancellation( T x, T y ) { y * x / x <=> y; } You know that it does not hold for any real arithmetic type, and that's precisely why you define it. With this axiom the compiler knows what transformations to perform to avoid producing NaNs.
axiom { a.size() == a.length(); } and: int T::size() { return 0; } int T::length() { return 1; } The compiler has the right to substitute a.size() with a.length(); the user takes all responsibility for such cases.
What I would also like to see, or at least to discuss, is a requirement to explicitly state that a type fulfils the axioms. Maybe something similar to a non-intrusive traits approach. That would be more robust than requirements only on the algorithm side, which are silently accepted. It would also allow us to overload functions on axioms. For instance, some types may have exactly the same syntactic requirements but fulfil different axioms. In general the compiler can't check axioms, so it can't select the right overload. If axioms were not used for overloading, then we would have to "duplicate" them with traits.
The previous concept proposals offered concept maps for this purpose. For instance, N2906 suggested the following solution: by default concepts are implicit (auto-concepts), but concept refinements that differ from their bases only by axioms are defined as explicit, and you need to use a concept map to state that your type models the refined concept. N3351 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3351.pdf> suggests the use of tags for this purpose: in short, you turn an axiom set into a somewhat artificial small syntactic requirement. One could think of a separate language feature for this purpose, but if such axiom-only concept refinements are rare it may not be worth the effort. Regards, &rzej

Hello, 09.10.2012 12:04, Andrzej Krzemienski wrote:
Currently, I inclined to following approach: 1. Compiler MUST not do any checks on axioms besides syntactic. 2. User has ALL responsibility on verifying axioms. 3. Compiler has rights to exploit any knowledge which it gets from axioms, regardless of fact if type fulfil them or not. I.e. in following example: Again, under the interpretation in N2887, I would say that you are right about points (1) and (3), but not necessarily by (2). That is, you are right that no-one else besides the guy that instantiates the constrained template has any responsibility, but such a user does not have to make sure that the axioms hold either: in fact he may deliberately request the instantiation because he knows the axioms do not hold and he wants to make use of this!
Oops, yes, I definitely agree. I had in mind what you say. I should have written instead something like: 2. The user has ALL the responsibility for verifying axioms, or for accepting all side effects of transformations applied based on those axioms.
What I would like also to see, or at least to discuss, is requirement to explicitly state that type fulfils axioms. Maybe something similar to non-intrusive traits approach. That would be more robust than requirements only at algorithms side which are silently accepted. That would also allow us to overload functions on axioms too. For instance some types may have exactly same syntactic requirements, but fulfil different axioms. In general compiler can't check axioms, so it can't select right overload. If axioms would be not used for overload, then we have to "duplicate" them with traits. The previous concept proposals offered concept maps for this purpose. For instance n2906 suggested the following solution: by default concepts are implicit (auto-concepts), but concept refinements that differ from their bases only by axioms are defined as explicit, and you need to use a concept map to state that your type models the refined concept. N3351<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3351.pdf>suggests the use of tags for this purpose: in short, you turn an axiom set into a bit artificial small syntactic requirement. One could think of a separate language feature for this purpose, but if such axiom-only concept refinements are rare it may not be worth the effort.
Yes, I saw this "Derived" trick: concept ForwardIterator<InputIterator I> = Incrementable<I> && Derived<IteratorCategory<I>, forward_iterator_tag>; and concept InputIterator<WeakInputIterator I> = EqualityComparable<I> && Derived<IteratorCategory<I>, input_iterator_tag>; As I said: "If axioms were not used for overloading, then we would have to 'duplicate' them with traits." And here traits are used: IteratorCategory. Such an approach forces us to duplicate: 1. Write the axioms. 2. Write auxiliary type traits, which in fact implicitly represent the axioms, just to get overloading. With some kind of language support these two could be merged: we would not specialize some auxiliary trait, but explicitly state that the type fulfils the axiom. But I agree with you: if axiom-based overloading is very rarely required, it is not worth the standardization effort (though it should still be kept in mind). Best Regards, Evgeny

on Fri Oct 05 2012, Andrew Sutton <asutton.list-AT-gmail.com> wrote:
Zero breaks the postconditions of division. So?
Not always. Dividing by 0.0 for floating point types is a valid operation resulting in inf.
Yes, I'm talking about ints.
One of the goals of looking at semantics in this way is to help programmers document and better characterize the requirements of their algorithms. An algorithm that computes generically over Fields necessarily excludes NaN as an input and must guard against division by 0. An algorithm that computes over, say, ExtendedReals may allow division by 0, but also excludes NaN. A generic algorithm designed specifically for floating point types allows division by 0 and NaN as inputs.
Right.
The point isn't to change the meaning of existing programs by excluding certain values or expressions. Axioms should present a consistent logical framework that precisely but generally describes the semantics of our existing programs. This is not easily done.
I agree; it is not. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

On Thu, Sep 27, 2012 at 3:35 AM, Andrzej Krzemienski <akrzemi1@gmail.com> wrote:
I believe that N3351 is not a good place to learn axioms from. I would rather recommend N2887: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2887.pdf N3351 just did not focus on axioms. Axioms do not necessarily state a boolean predicate. Consider these examples:
axiom ( Dictionary d, Record r ) { is_sorted(d) => linear_search(d, r) <=> binary_search(d, r); }
axiom Erasure( Container c, Size n ) { n == 0 => c.resize(n) <=> c.clear(); }
axiom Erasure2( Container c ) { c.resize(0) <=> c.clear(); }
Especially the last one: it states that two expressions, each with side effects, are equivalent.
I see. Just thinking out loud: If c is a dummy variable, like a local variable within the preconditions, then it should be OK if it gets modified by the preconditions... shouldn't it?
I see (I think) what you are up to, but I see a couple of further issues with going this way:
(1). Note that in the third example both expressions return nothing (void). This is valid and desired for axioms. What would you be comparing to check if the axiom holds?
Again, I'm learning these things as we go... but aren't => and <=> the "implies" and "if and only if" logical operators? If so, their operands must be boolean predicates, so I don't understand what this axiom means: c.resize(0) <=> c.clear() Oh, I see, <=> is some sort of magic "equivalence" operator, as it can be defined only meta-programmatically in the programmer's mind. This axiom means that "the effect of c.resize(0) must be /equivalent/ to c.clear()" in a way that the program will never be able to assert. I can't even check that syntactically, can I? What does it mean to check the axiom above syntactically, given that I'd have to completely strip away the <=> operator because it cannot be represented programmatically? Then such an axiom is just a code comment, and maybe it should be programmed as such IMO.
(2). Axioms need not hold for all values of the given type, but only for those values that a given algorithm (constrained by a concept with axioms) actually uses, and only in the expressions that are really stated in the executed algorithm.
For instance, axiom EqualityComparable::Reflexivity states that some (rather than all) objects must compare equal to themselves. If you consider type float, objects with the value NaN (not-a-number) compare unequal to themselves. Yet it is still OK to use float as a model of concept EqualityComparable if the algorithm (function) that makes use of the concept never operates on NaNs.
Yes, I'm reading about this... N3351 says that preconditions can limit the values to well-formed values (that makes sense), so maybe one could combine preconditions with concepts and check the axioms only for the values that pass the preconditions... again, just thinking out loud while still reading N3351.
For even more information on axioms see this link: http://kojot.sggw.waw.pl/~akrzemi1/cpp/axioms/axioms_index.html
Thanks! --Lorenzo

Again, I'm learning these things as we go... but aren't => and <=> the "implies" and "if and only if" logical operators? If so, their operands must be boolean predicates, so I don't understand what this axiom means:
c.resize(0) <=> c.clear()
Oh, I see, <=> is some sort of magic "equivalence" operator, as it can be defined only meta-programmatically in the programmer's mind. This axiom means that "the effect of c.resize(0) must be /equivalent/ to c.clear()" in a way that the program will never be able to assert. I can't even check that syntactically, can I? What does it mean to check the axiom above syntactically, given that I'd have to completely strip away the <=> operator because it cannot be represented programmatically? Then such an axiom is just a code comment, and maybe it should be programmed as such IMO.
Perhaps we can interpret this formally as "all the variables that appear in both expressions [in this case, c] compare equal after the expressions are evaluated"? Regards, Nate

On Thu, Sep 27, 2012 at 12:03 PM, Nathan Ridge <zeratul976@hotmail.com> wrote:
Again, I'm learning these things as we go... but aren't => and <=> the "implies" and "if and only if" logical operators? If so, their operands must be boolean predicates, so I don't understand what this axiom means:
c.resize(0) <=> c.clear()
Perhaps we can interpret this formally as "all the variables that appear in both expressions [in this case, c] compare equal after the expressions are evaluated"?
Yeah, I was thinking more about it, and maybe we can program it more or less like this: c1.resize(0); c2.clear(); assert( eq(c1, c2) ); // the actual check For every c1 and c2 of type C (this "for every" part is going to be hard to implement), where eq returns true iff c1 and c2 are equal (eq could be implemented as operator== at least to start with). --Lorenzo

On Thu, 27 Sep 2012, Lorenzo Caminiti wrote:
On Thu, Sep 27, 2012 at 12:03 PM, Nathan Ridge <zeratul976@hotmail.com> wrote:
Again, I'm learning these things as we go... but aren't => and <=> the "implies" and "if and only if" logical operators? If so, their operands must be boolean predicates, so I don't understand what this axiom means:
c.resize(0) <=> c.clear()
Perhaps we can interpret this formally as "all the variables that appear in both expressions [in this case, c] compare equal after the expressions are evaluated"?
Yeah, I was thinking more about it and maybe we can program it more or less like this:
c1.resize(0); c2.clear(); assert( eq(c1, c2) ); // the actual check
For every c1 and c2 of type C (this "for every" part is going to be hard to implement), where eq returns true iff c1 and c2 are equal (eq could be implemented as operator== at least to start with).
One could write axioms for types that don't have an operator==, or where comparison doesn't make sense: p=malloc(n) <=> p=realloc(0,n) v.random_shuffle().random_shuffle() <=> v.random_shuffle() (x+y)+z <=> x+(y+z) for double (somewhat like passing -fassociative-math to gcc) -- Marc Glisse

On Thu, Sep 27, 2012 at 12:29 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
On Thu, 27 Sep 2012, Lorenzo Caminiti wrote:
On Thu, Sep 27, 2012 at 12:03 PM, Nathan Ridge <zeratul976@hotmail.com> wrote:
Again, I'm learning these things as we go... but aren't => and <=> the "implies" and "if and only if" logical operators? If so, their operands must be boolean predicates, so I don't understand what this axiom means:
c.resize(0) <=> c.clear()
Perhaps we can interpret this formally as "all the variables that appear in both expressions [in this case, c] compare equal after the expressions are evaluated"?
Yeah, I was thinking more about it and maybe we can program it more or less like this:
c1.resize(0); c2.clear(); assert( eq(c1, c2) ); // the actual check
For every c1 and c2 of type C (this "for every" part is going to be hard to implement), where eq returns true iff c1 and c2 are equal (eq could be implemented as operator== at least to start with).
One could write axioms for types that don't have an operator==, or where comparison doesn't make sense: p=malloc(n) <=> p=realloc(0,n)
Wouldn't this mean the following? For any p and q of some type T: p = malloc(n); q = realloc(0, n); assert( eq(p, q) ); Where eq() is the N3351 equality operator that checks whether two objects are equal. eq could be implemented using ==, but it's more of a logical equality, so even if a type doesn't have ==, it is still logically sensible to ask whether two objects of that type are the same. In reality, however, eq will have to be implemented and checkable somehow, so if a type doesn't have a way to programmatically check for equality (using == or some other boolean function eq(), equal(), is_equal(), etc.) then either you don't program the axiom, or you add an assertion requirement to guard against the lack of == (like Boost.Contract does): p = malloc(n); q = realloc(0, n); assert( p == q ), requires has_equal<T>::value; // with an assertion requirement that T has == For a given T, if has_equal<T>::value is true, assert( p == q ) will be compiled and checked at run-time; otherwise it will not even be compiled.
v.random_shuffle().random_shuffle() <=> v.random_shuffle() (x+y)+z <=> x+(y+z) for double (somewhat like passing -fassociative-math to gcc)
--Lorenzo

One could write axioms for types that don't have an operator==, or where comparison doesn't make sense: p=malloc(n) <=> p=realloc(0,n)
Wouldn't this mean the following? For any p and q of some type T:
p = malloc(n); q = realloc(0, n); assert( eq(p, q) );
Where eq() is the N3351 equality operator that checks whether two objects are equal. eq could be implemented using ==, but it's more of a logical equality, so even if a type doesn't have ==, it is still logically sensible to ask whether two objects of that type are the same. In reality, however, eq will have to be implemented and checkable somehow, so if a type doesn't have a way to programmatically check for equality (using == or some other boolean function eq(), equal(), is_equal(), etc.) then either you don't program the axiom, or you add an assertion requirement to guard against the lack of == (like Boost.Contract does):
*cringe* Don't think about eq(). That was a compromise between myself and Marcin Zalewski. We couldn't determine if <=> should be interpreted as logical equivalence or behavioral equivalence (as in the previous axioms work). We ended up assigning <=> to logical equivalence and using eq() as a kind of oracle that decided equality: either according to the rules of == or memberwise equality comparison. In retrospect, I think this was the wrong choice. Using <=> to describe the behavioral equivalence of two expressions provides a much more robust way of stating the relative meaning of operations. Andrew

On Fri, Sep 28, 2012 at 5:39 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
*cringe* Don't think about eq(). That was a compromise between myself and Marcin Zalewski. We couldn't determine if <=> should be interpreted as logical equivalence or behavioral equivalence (as in the previous axioms work). We ended up assigning <=> to logical equivalence and using eq() as a kind of oracle that decided equality: either according to the rules of == or memberwise equality comparison.
Just to put it into perspective, equality comes from Elements of Programming. Page 7 shows that there is a "specification" equality and a "C++" equality. If one reads the first chapter of EoP, the notion of equality is explained well in context of what the types actually represent. In a sense, there is an implicit assumption of "denotational" semantics that assigns to every value an abstract *mathematical* entity. Then, equality is simply mathematical equality on these entities. The == operator *does not* have to always be implemented. The book discusses this as well. Sometimes it is hard, and sometimes it is impossible to implement ==. Sometimes it is enough to just compare some kind of a hash. In some other cases, we may settle for representational equality, where we just compare bits. This gives us a partial implementation of a true equality. There is a paper related to this subject, by John Lakos: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2479.pdf This paper has a similar notion of equality.
In retrospect, I think this was the wrong choice. Using <=> to describe the behavioral equivalence of two expressions provides a much more robust way of stating the relative meaning of operations.
To put this into perspective too, I think that "behavioral" equivalence here would be an equivalence in terms of "operational" semantics. I don't think that either approach is wrong in any way. Each approach fits some applications better than others. If behavioral equivalence is taken to mean equality defined in terms of operational semantics, then defining it works out a bit differently. For example, we must deal with cases such as 1/2 and 2/4 being equal, which comes for free in denotational semantics but requires some fiddling in operational semantics. We just need to be careful about how we define "behavioral equivalence", but that is not to say that it is impossible. So, to summarize, the current approach is a direct descendant of EoP, where we have simply renamed the "specification" = from page 7 to eq(). Reading EoP carefully can help in understanding how it works. For operational definitions of equality, one can look back at the existing work. BTW, just an additional example to keep in mind, which we have used frequently: two vectors with equal elements but different capacities (allocated memory) are trivially equal in the denotational approach (such as used in EoP), since they end up representing identical mathematical sequences. With operational semantics one has to sweat a bit more, and perhaps say something like "the two vectors will behave the same with respect to some operations". This has to be a recursively applied definition, since these operations may themselves return something that differs only, say, in memory allocation. Thus, for an operational definition, we always need some "context" to decide when things are equal. The denotational definition needs a context too, but it takes care of it right away, once and for all.
Andrew
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Wed, Sep 26, 2012 at 12:00 PM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
However, N3351 says that axioms should not be checked by the compiler...
``Axioms express the semantics of a concept’s required syntax; they are assumed to be true and must not be checked by the compiler beyond conformance to the C++ syntax. Any additional checking is beyond the scope of the compiler’s translation requirements. The compiler must not generate code that evaluates axioms as preconditions, either. This could lead to program errors if the evaluated assertions have side effects. For example, asserting that distance(first, last) > 1 when the type of those iterators is istream_iterator will consume the first element of the range, causing the assertion to pass, but the algorithm to have undefined behavior.''
So what am I supposed to do with axioms? Do nothing? Axioms are just "comments" that document the semantic?
All: I'm still struggling with my original question... what am I supposed to programmatically do with axioms? Take a look at these axioms: is_valid is not implementable... what would I check here? concept BidirectionalIterator<ForwardIterator I> = ... requires decrement (I i, I j) { ... axiom { is_valid(--i) <=> is_valid(i--); is_valid(i--) => (i == j => i-- == j); is_valid(i--) => (i == j => (i--, i) == --j); } Same thing for the magic eq() that is not implementable (other than trivially as operator==). And if we get rid of eq and define <=> as equivalence instead of iff, then <=> is not implementable. Even just checking the axiom's syntax... what does that mean if I cannot implement eq, is_valid, <=>, etc.? I don't understand... It seems to me that we are writing axioms using a bunch of non-implementable operations... IMO, if axioms cannot be checked programmatically, either syntactically at compile-time and/or with run-time assertions, then they should just be noted as code comments... Why would I program (axiom) code that is neither compiled nor executed? Thanks, --Lorenzo

This sounds like two questions: "what are axioms for", and "what should Boost.Contract do about them". Regarding the first, your concerns are about the same as those the members of the Committee had when concepts were being standardised (see here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2848.html). Some observed an even more concerning thing: compilers would be allowed to replace one expression with another, but there would be no way to check whether the two expressions are equivalent. I think there is some merit in putting axioms into concepts. First, they are for you, the programmer, not for the compiler. When you see that function fun<T> requires that T is a StrictWeakOrdering, you want to know what the concept expects of your type; you want to read it, and the axiom tells you that. You could type in the comment "it must be transitive", but this is too informal. Second, rather than a normal comment, they are a syntax-checked comment: if you rename your variables or type names in the concept definition, the compiler will remind you that you also need to change the axiom. It is somewhat similar to asserts: even if you disable them both in production and debug code, they are still useful, because you can read what they state, and what they state is checked for grammatical correctness (see http://akrzemi1.wordpress.com/2011/04/06/assertions/). Third, compilers could make use of the declarations in axioms (even if they cannot validate the "correctness" of an axiom). This is more or less how copy elision works. It works under the unverifiable assumption that the copy constructor does nothing but create a copy of the original object and the destructor does nothing but destroy an object. This assumption does not always hold: for instance, you may be doing some logging in the constructor (and be surprised that the logging vanishes), but compilers are nonetheless allowed to do such a substitution (which is in fact an elimination of a constructor and destructor call).
If we had axioms we could specify copy elision as an axiom, say:

    axiom CopyElision(T v) {
        { T(v); } <=> {}
    }

and need not define any special rules for copy elision in the standard. (Well, I know it would not work, but you get the idea.)

Regarding the question "what should Boost.Contract do about axioms", my opinion is that you should only check them against C++ syntax and type safety. That is, if you see this axiom:

    concept BidirectionalIterator<ForwardIterator I> =
        requires decrement (I i, I j) {
            axiom {
                is_valid(--i) <=> is_valid(i--);
                is_valid(i--) => (i == j => i-- == j);
                is_valid(i--) => (i == j => (i--, i) == --j);
            }
            ...

you should check that the expressions is_valid(--i), is_valid(i--), i == j, i-- == j, and (i--, i) == --j are valid (compilable), and that's it. That is, whoever is defining the concept BidirectionalIterator must make sure that the function is_valid is in scope and does something -- not you.

Regards,
&rzej

This sounds like two questions: "what axioms are for", and "what should Boost.Contract do about them".
I think that sums up the problem quite well.
I think there is some merit in putting axioms into concepts. ... snip
Third, compilers could make use of the declarations in axioms (even if they cannot validate the "correctness" of an axiom). This is more or less how copy elision works. It works under the unverifiable assumption that the copy constructor does nothing but create a copy of the original object, and the destructor does nothing but destroy an object. This assumption does not always hold: for instance, you may be doing some logging in the constructor (and be surprised that the logging vanishes), but compilers are nonetheless allowed to do such a substitution (which is in fact an elimination of the constructor and destructor calls). If we had axioms we could specify copy elision as an axiom, say:
    axiom CopyElision(T v) {
        { T(v); } <=> {}
    }
And need not define any special rules for copy elision in the standard. (Well, I know it would not work, but you get the idea.)
I think that trying to define the semantics of the language from within the language puts us on shaky ground, because we can't say under what circumstances a copy could be elided. But again, we're in an early state of design w.r.t. a proposal for how axioms might actually be used, and I'm not particularly knowledgeable of those details. I can only say with confidence that it's being worked on.
Regarding question "what should Boost.Contract do about axioms", my opinion is that you should only check them against C++ syntax and type safety. That is, if you see this axiom:
    concept BidirectionalIterator<ForwardIterator I> =
        requires decrement (I i, I j) {
            axiom {
                is_valid(--i) <=> is_valid(i--);
                is_valid(i--) => (i == j => i-- == j);
                is_valid(i--) => (i == j => (i--, i) == --j);
            }
            ...
You should check that the expressions is_valid(--i), is_valid(i--), i == j, i-- == j, and (i--, i) == --j are valid (compilable), and that's it. That is, whoever is defining the concept BidirectionalIterator must make sure that the function is_valid is in scope and does something -- not you.
Yes, and you'll never be able to actually do anything programmatically with is_valid. You might also consider using axioms as test cases for unit testing. This is something that was done by a group in Bergen using a metacompiler. http://www.jot.fm/issues/issue_2011_01/article10.pdf The paper should be free. If not, I can email a version (off-list, of course) to anybody that is interested. And I just returned from presenting a paper on testing generic libraries where I discussed a manual approach to the same problem. The paper hasn't been published yet, but I can make a copy available if you are interested. Andrew

On Mon, Oct 1, 2012 at 4:01 AM, Andrew Sutton <asutton.list@gmail.com> wrote:
This sounds like two questions: "what axioms are for", and "what should Boost.Contract do about them".
This is fair. However, we could also say: (1) what axioms are for, (2) what C++1x should do about them, and (3) Boost.Contract should handle axioms as closely as possible to how C++1x handles them.
I think that sums up the problem quite well.
I think there is some merit in putting axioms into concepts. ... snip
Third, compilers could make use of the declarations in axioms (even if they cannot validate the "correctness" of an axiom). This is more or less how copy elision works. It works under the unverifiable assumption that the copy constructor does nothing but create a copy of the original object, and the destructor does nothing but destroy an object. This assumption does not always hold: for instance, you may be doing some logging in the constructor (and be surprised that the logging vanishes), but compilers are nonetheless allowed to do such a substitution (which is in fact an elimination of the constructor and destructor calls). If we had axioms we could specify copy elision as an axiom, say:
    axiom CopyElision(T v) {
        { T(v); } <=> {}
    }
And need not define any special rules for copy elision in the standard. (Well, I know it would not work, but you get the idea.)
I think that trying to define the semantics of the language from within the language puts us on shaky ground, because we can't say under what circumstances a copy could be elided.
But again, we're in an early state of design w.r.t. a proposal for how axioms might actually be used, and I'm not particularly knowledgeable of those details. I can only say with confidence that it's being worked on.
Regarding question "what should Boost.Contract do about axioms", my opinion is that you should only check them against C++ syntax and type safety. That is, if you see this axiom:
    concept BidirectionalIterator<ForwardIterator I> =
        requires decrement (I i, I j) {
            axiom {
                is_valid(--i) <=> is_valid(i--);
                is_valid(i--) => (i == j => i-- == j);
                is_valid(i--) => (i == j => (i--, i) == --j);
            }
            ...
You should check that the expressions is_valid(--i), is_valid(i--), i == j, i-- == j, and (i--, i) == --j are valid (compilable), and that's it. That is, whoever is defining the concept BidirectionalIterator must make sure that the function is_valid is in scope and does something -- not you.
Yes, and you'll never be able to actually do anything programmatically with is_valid.
Same for eq(), <=>, and =>: I can't do anything programmatically with them either, can I?

Just to clarify, when I say to use code comments, you could use programming-style comments. So the difference is between:

    concept BidirectionalIterator<ForwardIterator I> = // (a)
        requires decrement (I i, I j) {
            axiom {
                is_valid(--i) <=> is_valid(i--);
                is_valid(i--) => (i == j => i-- == j);
                is_valid(i--) => (i == j => (i--, i) == --j);
            }
            ...

And:

    concept BidirectionalIterator<ForwardIterator I> = // (b)
        requires decrement (I i, I j) {
            // axiom {
            //     is_valid(--i) <=> is_valid(i--);
            //     is_valid(i--) => (i == j => i-- == j);
            //     is_valid(i--) => (i == j => (i--, i) == --j);
            // }
            ...

As Andrzej pointed out, (a) is better than (b) because the compiler will error if the axioms get out of sync with the rest of the concept (e.g., when the type or variable names I, i, j change). Any other reason to favor (a) over (b)? (Leaving aside possible future compiler optimizations that might never come, and that would not be possible with a library like Boost.Contract in any case.)

IMO, with (b) it is clearer that axioms don't actually check anything, while (a) can be misleading, causing programmers to think that the expressed semantics of the concept will actually be checked and enforced by the program. I guess it's only fair to assume that programmers read the docs and understand that axioms _document_ but don't actually check or enforce the semantics.

In the end, I'm thinking that there's value in the syntax checking provided by (a) over (b) (even if I wish (a) could check and enforce more of the semantics, maybe generating preconditions to check at run time for the concept... I'm not sure...).
You might also consider using axioms as test cases for unit testing.
BTW, this is true for pre/post conditions as well.
This is something that was done by a group in Bergen using a metacompiler.
http://www.jot.fm/issues/issue_2011_01/article10.pdf
The paper should be free. If not, I can email a version (off-list, of course) to anybody that is interested.
And I just returned from presenting a paper on testing generic libraries where I discussed a manual approach to the same problem. The paper hasn't been published yet, but I can make a copy available if you are interested.
Thanks, --Lorenzo

Yes, and you'll never be able to actually do anything programmatically with is_valid.
Same for eq(), <=>, and =>: I can't do anything programmatically with them either, can I?
I suppose you cannot.
Just to clarify, when I say to use code comments, you could use programming-style comments. So the difference is between:
    concept BidirectionalIterator<ForwardIterator I> = // (a)
        requires decrement (I i, I j) {
            axiom {
                is_valid(--i) <=> is_valid(i--);
                is_valid(i--) => (i == j => i-- == j);
                is_valid(i--) => (i == j => (i--, i) == --j);
            }
            ...
And:
    concept BidirectionalIterator<ForwardIterator I> = // (b)
        requires decrement (I i, I j) {
            // axiom {
            //     is_valid(--i) <=> is_valid(i--);
            //     is_valid(i--) => (i == j => i-- == j);
            //     is_valid(i--) => (i == j => (i--, i) == --j);
            // }
            ...
As Andrzej pointed out, (a) is better than (b) because the compiler will error if the axioms get out of sync with the rest of the concept (e.g., when the type or variable names I, i, j change). Any other reason to favor (a) over (b)? (Leaving aside possible future compiler optimizations that might never come, and that would not be possible with a library like Boost.Contract in any case.)
The other reasons I can think of can just work as comments: e.g., differentiating two concepts that differ only by semantics but not syntax, like InputIterator and ForwardIterator, or StrictWeakOrder<T> and Predicate<T, T>.

IMO, with (b) it is clearer that axioms don't actually check anything, while (a) can be misleading, causing programmers to think that the expressed semantics of the concept will actually be checked and enforced by the program. I guess it's only fair to assume that programmers read the docs and understand that axioms _document_ but don't actually check or enforce the semantics.
Not supporting axioms altogether is, I guess, also a reasonable approach. As you say, many people are confused about axioms' semantics. And having axioms in concepts as a language feature (rather than as informal comments) is controversial in general, mostly because there is almost nothing a compiler could do about them.
At the end, I'm thinking that there's value in the syntax checking provided by (a) over (b) (even if I wish (a) could check and enforce more of the semantics maybe generating preconditions to check at run-time for the concept... I'm not sure...).
But then again, you are implementing a contract programming library; and the primary goal of specifying contracts -- I believe -- is not that you check them at run time, but that everyone can read what they state: they are a comment, although one enforced by run-time checks, if one can say so. It looks like it may be more a question of an arbitrary call. My preference would be to be able to specify axioms with concepts, although I find it difficult to justify why. Perhaps when they are part of the language (or type system) I feel more obliged to satisfy them. :) Regards, &rzej

Le 26/09/12 21:00, Lorenzo Caminiti a écrit :
Hello all,
However, N3351 says that axioms should not be checked by the compiler...
``Axioms express the semantics of a concept’s required syntax; they are assumed to be true and must not be checked by the compiler beyond conformance to the C++ syntax. Any additional checking is beyond the scope of the compiler’s translation requirements. The compiler must not generate code that evaluates axioms as preconditions, either. This could lead to program errors if the evaluated assertions have side effects. For example, asserting that distance(first, last) > 1 when the type of those iterators is istream_iterator will consume the first element of the range, causing the assertion to pass, but the algorithm to have undefined behavior.''
So what am I supposed to do with axioms? Do nothing? Axioms are just "comments" that document the semantic?
For example:
``
concept EqualityComparable<typename T> =
    requires (T a, T b, T c) {
        bool { a == b };
        bool { a != b };
        axiom { a == b <=> eq(a, b); }    // true equality
        axiom {
            a == a;                       // reflexive
            a == b => b == a;             // symmetric
            a == b && b == c => a == c;   // transitive
        }
        axiom { a != b <=> !(a == b); }   // complement
    };
So, EqualityComparable<T> is true if T
1. has == and != with result types that can be converted to bool
2. == compares for true equality
3. == is reflexive, symmetric, and transitive
4. != is the complement of ==
However, the compiler can only check the first of these conditions. The rest must be verified through other means (i.e. manually). ''
Hi Lorenzo,

Not only should the compiler not check the axioms, but it will in general be unable to do so. So a type modeling the non-axiom part is seen as a model of the concept even if it doesn't satisfy the axioms.

An alternative for the concept definition could be to force the modeler to state explicitly that the class satisfies some kind of semantic trait associated with the axiom. The preceding example can be rewritten as follows:

    concept EqualityComparable<typename T> =
        TrueEquality<T> &&
        EqualReflexive<T> &&
        EqualSymmetric<T> &&
        EqualTransitive<T> &&
        requires (T a, T b, T c) {
            bool { a == b };
            bool { a != b };
        };

where

    concept TrueEquality<typename T> =
        is_true_equal<T>::value &&
        requires (T a, T b) {
            axiom { a == b <=> eq(a, b); } // true equality
        };

    concept EqualReflexive<typename T> =
        is_equal_reflexive<T>::value &&
        requires (T a) {
            a == a; // reflexive
        };

    concept EqualSymmetric<typename T> =
        is_equal_symmetric<T>::value &&
        requires (T a, T b) {
            a == b => b == a; // symmetric
        };

    concept EqualTransitive<typename T> =
        is_equal_transitive<T>::value &&
        requires (T a, T b, T c) {
            a == b && b == c => a == c; // transitive
        };

The traits is_true_equal, is_equal_reflexive, is_equal_symmetric, and is_equal_transitive could be defined as a true-type for the built-in types and a false-type otherwise. A user defining a class X that is a model of EqualityComparable would however need to specialize these traits:

    template <> struct is_true_equal<X> : true_type {};
    template <> struct is_equal_reflexive<X> : true_type {};
    template <> struct is_equal_symmetric<X> : true_type {};
    template <> struct is_equal_transitive<X> : true_type {};

In this way, the compiler could take advantage of the axioms, but only when the modeler has given permission. This has of course a lot of disadvantages as ...

Best,
Vicente
participants (16)
- Andrew Sutton
- Andrzej Krzemienski
- Dave Abrahams
- Evgeny Panasyuk
- Felipe Magno de Almeida
- Klaim - Joël Lamotte
- Larisse Voufo
- Lorenzo Caminiti
- Marc Glisse
- Marcin Zalewski
- Matt Calabrese
- Nathan Ridge
- Nevin Liber
- Sebastian Redl
- Steven Watanabe
- Vicente J. Botet Escriba