
2012/10/8 Evgeny Panasyuk <evgeny.panasyuk@gmail.com>
08.10.2012 14:06, Andrzej Krzemienski wrote:
Should the programmer instantiate STL algorithms for 'T' == double? My answer is: they know the risks (of potentially not meeting the assumptions), they should assess if the risk is low or make sure they eliminate the usage of NaNs, and they should make the call at their own risk. If the program has a bug due to type double not having met the assumptions on 'T' it is the fault of the programmer who requested the instantiation of the algorithm with type double.
Axioms (according to N2887) do not change this situation. An assumption of type 'T' is never 'valid' or 'invalid'. It is just an assumption that one may use or not. When you request an instantiation of an algorithm constrained with an 'axiom-rich' concept with type X, you take the responsibility for X meeting or not the expectations expressed in the concept.
So, all responsibility is on the user, and in that case the compiler MUST NOT reject code if it finds that a type does not fulfil the axioms. Is that right?
IMO - yes. However, note that this is only my interpretation of N2887, which says that axioms only state what compilers (and generic algorithm authors) can assume - nothing more. Another interpretation of axioms (which I am not in favor of) would be to say that axioms impose some constraints on type 'T' which must be met even if the compiler cannot check them. I also believe that the usage of the term 'user' may be ambiguous. To be more precise, I would say: by instantiating a generic algorithm constrained by concept C with concrete type X, you agree to all code transformations that may occur as a result of making use of C's axioms, regardless of whether X is prepared for these transformations or not.
A good strategy for compilers, regarding axioms, would be to make use of axioms only if they have some way of verifying that type 'X' meets them (either by limited static code analysis or by run-time checks). This might require selecting the subset of axioms that the compiler is able to check, and utilizing only that subset.
So, if a type does not fulfil the axioms, then the compiler must not complain AND it must not make any assumptions or optimizations based on those axioms? So, within this approach, if the user passes some IEEE 754 floating-point type to an algorithm which expects a type with an associative addition operation, the compiler MUST NOT assume associativity and can't do the optimizations, is that right? Even if the user wants to take all responsibility and accepts all side effects?
That does not sound right - if all responsibility to check is on the user, the compiler should be free to exploit knowledge from the axioms.
Currently, I am inclined towards the following approach:
1. The compiler MUST NOT do any checks on axioms besides syntactic ones.
2. The user has ALL responsibility for verifying axioms.
3. The compiler has the right to exploit any knowledge it gets from the axioms, regardless of whether the type fulfils them or not.
I.e., in the following example:
Again, under the interpretation in N2887, I would say that you are right about points (1) and (3), but not necessarily about (2). That is, you are right that no-one besides the person who instantiates the constrained template has any responsibility, but such a user does not have to make sure that the axioms hold either: in fact, he may deliberately request the instantiation because he knows the axioms do not hold and he wants to make use of this!

One example provided by Matt is copy elision. Just suppose for a second that CopyElision is an axiom. Even if you know that type X has a destructor or copy constructor with side effects, you may still want to make use of the axiom, because you allow the pair-wise cancellation of the side effects of both operations. (Note that if the compiler were checking whether type X satisfies axiom CopyElision, it would either refuse to compile or prevent a useful optimization.)

Another example of an axiom for code transformations would be the following:

axiom AvoidCancellation( T x, T y ) { y * x / x <=> y; }

You know that it does not hold for any reasonable arithmetic type - and that's precisely why you define it. With this axiom the compiler knows what transformations to perform to avoid producing NaNs.
axiom { a.size() == a.length(); }

and:

int T::size()   { return 0; }
int T::length() { return 1; }

The compiler has the right to substitute a.size() with a.length(); the user takes all responsibility in such cases.
What I would also like to see, or at least to discuss, is a way to explicitly state that a type fulfils a set of axioms - maybe something similar to a non-intrusive traits approach. That would be more robust than requirements stated only on the algorithm side, which are silently accepted. It would also allow us to overload functions on axioms. For instance, some types may have exactly the same syntactic requirements but fulfil different axioms. In general the compiler can't check axioms, so it can't select the right overload. If axioms were not usable for overloading, then we would have to "duplicate" them with traits.
The previous concept proposals offered concept maps for this purpose. For instance, N2906 suggested the following solution: by default, concepts are implicit (auto-concepts), but concept refinements that differ from their bases only by axioms are defined as explicit, and you need to use a concept map to state that your type models the refined concept. N3351 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3351.pdf> suggests the use of tags for this purpose: in short, you turn an axiom set into a small, somewhat artificial syntactic requirement. One could think of a separate language feature for this purpose, but if such axiom-only concept refinements are rare, it may not be worth the effort.

Regards,
&rzej