
Replying to a day's messages in my own order. :-) On Dec 8, 2008, at 3:14 PM, Paul A. Bristow wrote:
Gordon wrote:
Also it might be worthwhile to point out that shared runtime mutable constraints with no overhead per constrained object are possible, using static members. (Or maybe that's too weird.) I might implement epsilon this way, as my impression is that numeric_limits<>::epsilon() is not useful for this purpose.
Of course I won't use this technique after all; I'll use something more like what's described in "Compile-time fixed bounds" - I don't know what I was thinking. Runtime modification of epsilon is just silly.
I'd just like to point out that there are more than the 'near-epsilon computational noise' reasons, discussed so far, why it is useful to be able to make 'fuzzier' floating-point comparisons.
Yes, I am now convinced that all three use cases are valid in different situations: there are times when you want an epsilon paired with each value, times when you want a class of floats that share a single runtime epsilon, and times when epsilon can be chosen at compile time with a policy. My earlier point was simply that the Boost.Test predicates won't work out of the box because they take epsilon as a runtime argument - epsilon needs to be built into the predicate in one of those three ways to make it STL compatible. It is interesting that the size of predicates is not usually an issue, because they are passed to algorithms or stored in bigger containers; since constrained_value is a container of one, it suddenly matters a lot.
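To illustrate the third option, here is a minimal sketch - my own names, nothing from the library or Boost.Test - where epsilon comes from a compile-time policy, so the predicate carries no runtime state and can be default-constructed like any other STL predicate:

#include <cmath>

struct default_eps { static double value() { return 1e-9; } };  // hypothetical policy

template <class EpsPolicy = default_eps>
struct approx_equal_to
{
    // Epsilon is baked in by the policy, so no runtime argument is needed.
    bool operator()(double a, double b) const
    { return std::fabs(a - b) <= EpsPolicy::value(); }
};

The per-value and per-class variants would simply store epsilon as an ordinary member or as a static member instead.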
Robert wrote:
If an "exact floating point" type could be provided (out of scope of this library) - a wrapper for float/double that makes sure its underlying value is always truncated - you could perform comparisons (and all the other operations) that are repeatable, without the possibility that a comparison that once succeeded will later fail. Does that sound sensible?
I consider this a quick-and-dirty solution that would probably work for a lot of situations. It does sound like it would be consistent; however, you still have the regular old problem with floats: they're almost never equal except once in a blue moon. I hope to find a way to implement it without assembly. Kim Barrett wrote:
inline double exact(double x) { struct { volatile double x; } xx = { x }; return xx.x; }
The idea is to force the value to make a round trip through a memory location of the "correct" size. The use of volatile should prevent the compiler from optimizing away the trip through memory.
Interesting! I still don't trust the compiler not to optimize it away, but it's definitely worth a try.
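For what it's worth, here is how I picture using it - just my own sketch built on Kim's helper, not anything from the library:

// Kim's round-trip helper, repeated so the sketch is self-contained.
inline double exact(double x) { struct { volatile double x; } xx = { x }; return xx.x; }

// An equality predicate that forces both operands through a double-sized
// memory location before comparing, so an extended-precision intermediate
// cannot make the same comparison behave differently at different call sites.
struct exact_equal_to
{
    bool operator()(double a, double b) const { return exact(a) == exact(b); }
};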
Robert wrote:
The "delta" (the difference between the extended and the truncated value) may be very big for big numbers and very small for small ones, so epsilon should rather be scaled according to the magnitude of the compared numbers.
The Knuth method implemented by Boost.Test multiplies epsilon by each of the two values and compares the difference between the values against those results, which avoids the scaling problem.
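As I understand it, the check amounts to something like the sketch below (my own code and naming, not Boost.Test's actual implementation); this is the 'strong' form, and Boost.Test also offers a 'weak' form that accepts either condition:

#include <cmath>

// True when the difference is within eps relative to *both* values,
// so the tolerance follows the magnitude of the numbers being compared.
inline bool knuth_close(double a, double b, double eps)
{
    const double diff = std::fabs(a - b);
    return diff <= eps * std::fabs(a) && diff <= eps * std::fabs(b);
}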
Robert wrote:
Thorsten wrote:
I totally disagree. People have to deal with floats anyway. That is a separate issue. The advice should be removed IMO, and bounded_float provided.
It should be provided, but Boost should first include some set of mechanisms for dealing with the FP issues. They are too general to be implemented within this library, and they are not tightly coupled with the concept of constrained types. I see this as analogous to arithmetic overflow prevention, which is also too general and too orthogonal to this library.
Yes, floating-point predicates should be a separate library, and the floating-point FUD should be removed from the documentation as well. My yes vote is still conditional on your cooperation with us on making floating-point types work, because the library would not be useful to me otherwise. I now regret that I didn't look at the code, because I didn't know that there were assertions testing the invariants in a different way from the predicates. Is this only in the bounded part of the library? I thought the value was tested after every change using the predicate, with the error policy called if the predicate fails - roughly the shape sketched below. That's the way I think it should be; you don't need to add any extra invariant checking on top of that, it's just perfect.
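To be concrete about the behaviour I mean, here is a rough sketch - my own names and shape, not the library's actual interface:

template <class T, class Pred, class ErrorPolicy>
class checked
{
    T value_;
    Pred is_valid_;
    ErrorPolicy on_error_;
public:
    explicit checked(const T& v, Pred p = Pred(), ErrorPolicy e = ErrorPolicy())
        : value_(v), is_valid_(p), on_error_(e)
    {
        if (!is_valid_(value_)) on_error_(value_);  // the predicate is the only test
    }
    checked& operator=(const T& v)
    {
        if (!is_valid_(v)) on_error_(v);            // the error policy decides what happens
        else value_ = v;
        return *this;
    }
    operator const T&() const { return value_; }
};

The point is that the predicate is the single notion of validity, and the error policy only decides what to do when it fails.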
Like Stjepan said:
As far as "if the test guarantees...the library guarantees..." goes, it should be no more complicated to understand than "if X is thread-safe then something<X> is thread-safe", or something similar regarding exception safety.
This is the magic of C++: templates can be used in unforeseen ways because they take on the qualities of their arguments.
Generally I don't think a library should assert on any user input - even an invalid or inconsistent predicate! I figured this was the point of having an error policy: everyone has their own idea of how they want to handle errors. Assertions are forbidden in a lot of corporate environments. <joke>Perhaps there should be a predicate consistency policy.</joke> I will look at the code tomorrow.
Two last points:
1. I also like the monitored values use case and hope that you will take the time to consider it before submitting a "final" version. (Libraries are never truly finished.)
2. You don't have to worry about NaN - users can choose a predicate that's appropriate for their application. Personally I would always use a predicate that consistently rejects NaN, because I'd want the error pointed out to me ASAP. But that should be implemented as a separate check that is combined in, e.g. using std::logical_and<> (if that hasn't been superseded by something in Boost).
I guess we have to wait for decltype to be able to use lambda expressions as predicates here?
Gordon