Re: [Boost-users] [boost] [review][constrained_value] Review of Constrained Value Library begins today

This too is already solved if any predicate is acceptable: just use a predicate functor that forwards, via a pointer or reference, to another global predicate. Or (somewhat less efficient but easier) use the dynamic boost::function predicate that is already included, bound with boost::bind/lambda/phoenix to a method on that global object.
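To make that concrete, here is a minimal sketch of the forwarding idea. The global_bounds_t type and its test() member are hypothetical names for illustration, not part of the library:

    #include <boost/bind.hpp>
    #include <boost/function.hpp>

    // Shared constraint state, modifiable at runtime (hypothetical type).
    struct global_bounds_t
    {
        double lo, hi;
        bool test(double v) const { return lo <= v && v <= hi; }
    };

    global_bounds_t global_bounds = { 0.0, 1.0 };

    // Stateless predicate that forwards every check to the shared object,
    // so changing global_bounds changes the constraint for all values.
    struct forwarding_predicate
    {
        bool operator () (double v) const { return global_bounds.test(v); }
    };

    // Or the dynamic alternative: bind a member function to the global object.
    boost::function<bool (double)> dynamic_predicate =
        boost::bind(&global_bounds_t::test, &global_bounds, _1);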
Gordon: I absolutely agree that if this library were purely for bounds testing, managing invariants, etc., then predicates are sufficient. But for high-performance numerics, that is not what would happen. First of all, you would turn off automatic testing at runtime. But one of the key values of this library, in both release and debug builds, is to get information about the bounds and intervals from the type itself for generic algorithms. A predicate as a function can test an assignment, but it can't give back structured data for these algorithms. If you can show me how the testing predicate can be easily queried within this framework for the type's metadata (e.g. a list of intervals), then I will shut up. I really think people need to think through this alternative use case of this powerful library.

And Robert: There are a lot of discussions on the boost mailing list (which I can't post on) about NaN and others for numeric_limits. In virtually every case in high-performance computing, you would turn off predicate testing on assignment for release builds because it is too expensive. Then you would, judiciously, test inclusion of the value. So the model here is that if bounded_float is a subset of float, a value of this class is always in the float superset, but I can test inclusion within the other subset that I have defined. Here, predicates work fine as long as there is a manual way to test set inclusion. And numeric libraries frequently use NaN, infinity, etc. as signals for algorithms, so these values should be associated with the superset in the model as a default approach.

So what I am saying is that people should be given the option (a preprocessor instruction is sufficient) to have the value always within the bounds vs. considering this as a "set" whose inclusion they can then test at their own leisure. This is a pragmatic approach to ensure consistency of this library with existing generic numerical algorithms, which is just as important for the numerics as solving the floating-point problem.

Also, I am not aware of the structure of numeric_limits, but I doubt these are written to be virtual/subclassed, for efficiency reasons. What this means is that you would probably have to write your own traits class if we don't make this automatic, and you can't pick and choose which existing traits to override. And these numeric traits are impossible for normal developers to write, since they are platform specific.

Last, and this is not something I know enough about because I am not a numerical analyst, but the epsilon method discussed on the mailing list may have a flaw for numeric work. What we are effectively doing is creating an epsilon neighborhood for testing equality. As we know from set theory/topology, this does not create a weak ordering because it fails transitivity. One might say that this is irrelevant because it is a reasonable approximation for pragmatic reasons, but you run into a problem in numerics. The reason is that many numerical algorithms are already testing within an epsilon neighborhood for convergence of algorithms, derivatives, etc. So we need to be REALLY careful that this epsilon ball is well within those balls, or we may seriously mess up algorithms that depend on a comparison operator. This may or may not be a problem, but I would pose it on the boost list to ensure that people are thinking about it.
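The transitivity failure is easy to demonstrate. A tiny self-contained example (the approx_equal helper is illustrative, not the library's comparison):

    #include <cassert>
    #include <cmath>

    // Approximate equality with an absolute tolerance eps.
    bool approx_equal(double a, double b, double eps)
    {
        return std::fabs(a - b) <= eps;
    }

    int main()
    {
        const double eps = 1e-9;
        double a = 0.0, b = eps, c = 2 * eps;

        assert( approx_equal(a, b, eps)); // a "equals" b
        assert( approx_equal(b, c, eps)); // b "equals" c
        assert(!approx_equal(a, c, eps)); // ...yet a does not "equal" c,
                                          // so the relation is not transitive
    }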
One solution may be to rely on the numeric_limits::epsilon trait, which would always be available, would be platform specific, and, I believe, is the smallest possible neighborhood of a point. Whether this is the maximum amount of truncation possible, I do not know. -Jesse
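For reference, the trait is queried as below. Note that std::numeric_limits<T>::epsilon() is the machine epsilon, the gap between 1.0 and the next representable value of the type, so it is a relative quantity; a comparison with an absolute tolerance would presumably need to scale it by the magnitude of the operands:

    #include <iostream>
    #include <limits>

    int main()
    {
        // Platform-specific machine epsilon for each floating-point type.
        std::cout << std::numeric_limits<float>::epsilon()  << '\n'; // ~1.19e-07
        std::cout << std::numeric_limits<double>::epsilon() << '\n'; // ~2.22e-16
    }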

On Dec 9, 2008, at 9:21 AM, Jesse Perla wrote:
I absolutely agree that if this library were purely for bounds testing, managing invariants, etc., then predicates are sufficient. But for high-performance numerics, that is not what would happen. First of all, you would turn off automatic testing at runtime.
I think this is another good reason to allow disabling of these asserts specifically (as opposed to disabling all boost asserts).
But one of the key values of this library, in both release and debug builds, is to get information about the bounds and intervals from the type itself for generic algorithms. A predicate as a function can test an assignment, but it can't give back structured data for these algorithms. If you can show me how the testing predicate can be easily queried within this framework for the type's metadata (e.g. a list of intervals), then I will shut up.
I agree that this would be nice, but I see that as a requirement for a predicates library. If Robert wants to work on one, that would be super-cool, but I think that should be a separate library, because predicates can be used for all kinds of things. Although it might start as a child of Constrained Values...
Gordon

From: Gordon Woodhull
On Dec 9, 2008, at 9:21 AM, Jesse Perla wrote:
I absolutely agree that if this library were purely for bounds testing, managing invariants, etc., then predicates are sufficient. But for high-performance numerics, that is not what would happen. First of all, you would turn off automatic testing at runtime.
I think this is another good reason to allow disabling of these asserts specifically (as opposed to disabling all boost asserts).
I think Jesse meant turning off constraint checking in general, not only the invariant asserts.
But one of the key values of this library, in both release and debug builds, is to get information about the bounds and intervals from the type itself for generic algorithms. A predicate as a function can test an assignment, but it can't give back structured data for these algorithms. If you can show me how the testing predicate can be easily queried within this framework for the type's metadata (e.g. a list of intervals), then I will shut up.
Well, just the way you query the within_bounds predicate to get the values of the bounds etc. The point is to provide a wrapper for a predicate which allows you to access all its data, but "overrides" the invocation operator to always return true. Maybe it could look like this (not tested, not contemplated much):

    template <typename P>
    struct true_predicate_wrapper : public P
    {
        true_predicate_wrapper() : P() {}
        true_predicate_wrapper(const P & p) : P(p) {}

        // Accept any value and report it as valid -- the constraint data
        // inherited from P remains accessible for queries.
        template <typename V>
        bool operator () (V) const { return true; }
    };

You can use the P predicate (containing all the data, like the intervals list) in debug mode and true_predicate_wrapper<P> in release mode to retain all the information but get rid of the constraint checks.

Best regards, Robert
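A sketch of how this suggestion might be wired up. The interval_predicate type and its intervals() accessor are hypothetical illustrations, not part of the library; true_predicate_wrapper is the wrapper Robert defines above, and selecting on NDEBUG is just one possible switch:

    #include <cstddef>
    #include <utility>
    #include <vector>

    struct interval_predicate
    {
        std::vector< std::pair<double, double> > ivals;

        // The actual constraint: v must fall in one of the intervals.
        bool operator () (double v) const
        {
            for (std::size_t i = 0; i < ivals.size(); ++i)
                if (ivals[i].first <= v && v <= ivals[i].second)
                    return true;
            return false;
        }

        // The structured metadata that generic algorithms can query,
        // which stays accessible through the wrapper via inheritance.
        const std::vector< std::pair<double, double> > & intervals() const
        { return ivals; }
    };

    #ifdef NDEBUG
        // Release: the check degenerates to "return true", data still queryable.
        typedef true_predicate_wrapper<interval_predicate> constraint_type;
    #else
        // Debug: the real constraint is enforced on every assignment.
        typedef interval_predicate constraint_type;
    #endif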