[review][constrained_value] Review of Constrained Value Library begins today

Hi all,

The review of Robert Kawulak's Constrained Value library begins today, December 1, 2008, and will end on December 10th -- I will be the review manager. Please post reviews to the developer list.

Here's the library synopsis: The Boost Constrained Value library contains class templates useful for creating constrained objects. A simple example is an object representing an hour of a day, for which only integers from the range [0, 23] are valid values.

    bounded_int<int, 0, 23>::type hour;
    hour = 20; // OK
    hour = 26; // exception!

Behavior in case of assignment of an invalid value can be customized. The library has a policy-based design to allow for flexibility in defining constraints and behavior in case of assignment of invalid values. Policies may be configured at compile-time for maximum efficiency or may be changeable at runtime if such dynamic functionality is needed.

The library can be downloaded from here: http://rk.go.pl/f/constrained_value.zip

The documentation is also available online here: http://rk.go.pl/r/constrained_value

---------------------------------------------------

Please state in your review whether you think the library should be accepted as a Boost library. Additionally, please consider the following aspects in your review of the library:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

Thanks,

Jeff

- What is your evaluation of the design?
It seems good. It's a simple library with a simple and straightforward design and implementation. However, I have some notes:

There seems to be no way to specify compile-time bounds with one bound unconstrained. This seems like a useful thing to be able to express.

The docs include some example code on "using constrained objects in debug mode only", and make reference to using unconstrained<> to allow use of the .value() member function. Why not replace the .value() member function with a free function boost::constrained_value::value(), and then write something like this?

    #ifndef NDEBUG
    typedef bounded_int<int, 0, 100>::type my_int;
    #else
    typedef int my_int;
    # if IM_FINE_WITH_MACROS
    # define value(x) x
    # else
    inline int value(int x) { return x; }
    # endif
    #endif

Then, as long as the user always writes "value(x)", letting ADL pick up the boost::constrained_value::value() free function, the macro/inline function above will silently kick in instead if defined. This gets rid of any requirements on the quality of the optimizer in order to get performance just like an int. Did you already try this and find it problematic?

The note in the docs about the dangers inherent in using the library with floating point types is well taken. However, it would be nice if you provided more support for such uses. For instance, how about providing a type that just checks for NaNs, or one that checks the constraint every time you call .value(), so that eventually you'll catch a range violation, even if it's masked by the underlying value residing in a register for a while? (There's a rough sketch of the NaN idea at the end of this section.) I have only a passing knowledge of the issues involved, so I realize these suggestions may be naive. However, it would be nice to have either a) something that works much of the time, even if not perfectly, or b) a more in-depth section in Rationale covering why, and in what cases, constrained floating point values are doomed to fail. B) would at least prevent every user of the library from spinning her wheels trying to make it work for floating point values if it never will.

I may be asking for excessive handholding here, I'm not sure. It's just that "don't use built-in floating point types with this library (until you really know what you're doing)" is a little unsatisfying -- the nice thing about most Boost libraries is that you don't have to have deep knowledge of the details underlying them, because they encapsulate much of the critical knowledge necessary for using them.
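By the NaN-checking type, by the way, I mean something along these lines -- just a rough sketch on my part, using constrained<> as I understand it from the docs, and with all of the floating point caveats above still applying:

    // a predicate that rejects only NaN; relies on the fact that NaN != NaN
    struct not_nan
    {
        bool operator () (double x) const { return x == x; }
    };

    typedef boost::constrained_value::constrained<double, not_nan> no_nan_double;

It wouldn't catch range violations, but it would at least catch the result of 0.0/0.0 and friends at the point of assignment.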
- What is your evaluation of the implementation?
The implementation seems reasonable, but there are no tests to give me a warm-and-fuzzy that everything works as it seems to on cursory examination. For instance, just glancing over the code, in bounded.hpp, the implementations of within_bounds::is_below() and within_bounds::is_above() appear to be wrong. If lower_bound_excluded() or upper_bound_excluded() is true in the respective functions, shouldn't the result always be false? Instead, the *_bound_excluded() == false case in both functions has the exact same semantics as the *_bound_excluded() == true case. Sure enough, when I wrote the small test app below, very similar to one of the tutorial examples, I got an unexpected exception:

    #include <iostream>
    #include <boost/constrained_value.hpp>

    int main()
    {
        namespace cv = boost::constrained_value;
        typedef cv::bounded<
            int, int, int, cv::throw_exception<>, bool, bool, std::less<int>
        >::type b_type;

        b_type bounded(b_type::constraint_type(-5, 5, true, true));

        // prints "1 1"
        std::cout << bounded.constraint().lower_bound_excluded() << " "
                  << bounded.constraint().upper_bound_excluded() << "\n" << std::endl;

        bounded = 0;  // ok
        bounded = -6; // throws (!)

        return 0;
    }

Changing the else cases of within_bounds::is_below() and within_bounds::is_above() to always return false fixed the problem. I think this underscores the need for tests to be provided with libraries submitted to Boost. With a library as small as this one, a complete set of tests is not even that tall an order. Reviewers should be able to comment on the quality of the tests, just as with any other aspect of the implementation.

Also, in constrained.hpp, the BOOST_DEFINE_CONSTRAINED_ASSIGNMENT_OPERATOR macro seems a little odd. Why are _op_ and _op_name_ passed in, when _op_ is used to create _op_name_ via token pasting, and _op_name_ is never used?
- What is your evaluation of the documentation?
In general, it's good. It's clear, and covers everything well. However, I consider the fully-generated Doxygen reference documentation to be too detailed. For example, as a user, why must I know that within_bounds<LowerType, UpperType, LowerExclType, UpperExclType, CompareType> derives from compressed_pair<LowerType, LowerExclType>? From my perspective, it's an implementation detail, and therefore just noise. I'd rather see Boostbook-integrated Doxygen references, a la Boost.Xpressive. Also, in "Object remembering its past extreme values" you have bounded<> qualified by cv::, seemingly without ever declaring cv. From your note about assuming ::boost::constrained_value everywhere, it seems you could just leave it off.
- What is your evaluation of the potential usefulness of the library?
It seems quite useful. Better support for floating point value types, along with explicit notes on when such types fall flat, would make it even more useful.
- Did you try to use the library? With what compiler? Did you have any problems?
Yes, with GCC 4.1.0. Problem noted above.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I spent about 4 hours reading docs and implementation, and writing small amounts of test code.
- Are you knowledgeable about the problem domain?
Yes. Zach Laine

Zach Laine wrote:
The docs include some example code on "using constrained objects in debug mode only", and make reference to using unconstrained<> to allow use of the .value() member function. Why not replace the .value() member function with a free function boost::constrained_value::value(), and then write something like this?
    #ifndef NDEBUG
    typedef bounded_int<int, 0, 100>::type my_int;
    #else
    typedef int my_int;
    # if IM_FINE_WITH_MACROS
    # define value(x) x
    # else
    inline int value(int x) { return x; }
    # endif
    #endif
Then, as long as the user always writes "value(x)", letting ADL pick up the boost::constrained_value::value() free function, the macro/inline function above will silently kick in instead if defined. This gets rid of any requirements on the quality of the optimizer in order to get performance just like an int. Did you already try this and find it problematic?
Intuitively, I'd say this is problematic - the value(int) version wouldn't get picked up by ADL. Sebastian

On Mon, Dec 1, 2008 at 11:32 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
Zach Laine wrote:
The docs include some example code on "using constrained objects in debug mode only", and make reference to using unconstrained<> to allow use of the .value() member function. Why not replace the .value() member function with a free function boost::constrained_value::value(), and then write something like this?
    #ifndef NDEBUG
    typedef bounded_int<int, 0, 100>::type my_int;
    #else
    typedef int my_int;
    # if IM_FINE_WITH_MACROS
    # define value(x) x
    # else
    inline int value(int x) { return x; }
    # endif
    #endif
Then, as long as the user always writes "value(x)", letting ADL pick up the boost::constrained_value::value() free function, the macro/inline function above will silently kick in instead if defined. This gets rid of any requirements on the quality of the optimizer in order to get performance just like an int. Did you already try this and find it problematic?
Intuitively, I'd say this is problematic - the value(int) version wouldn't get picked up by ADL.
Why is ADL an issue when the parameter type is int? The value(int) overload is only declared when we don't care about ADL, since we're not using constrained_value types. Am I missing something? Zach

Zach Laine wrote:
On Mon, Dec 1, 2008 at 11:32 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
Zach Laine wrote:
The docs include some example code on "using constrained objects in debug mode only", and make reference to using unconstrained<> to allow use of the .value() member function. Why not replace the .value() member function with a free function boost::constrained_value::value(), and then write something like this?
    #ifndef NDEBUG
    typedef bounded_int<int, 0, 100>::type my_int;
    #else
    typedef int my_int;
    # if IM_FINE_WITH_MACROS
    # define value(x) x
    # else
    inline int value(int x) { return x; }
    # endif
    #endif
Then, as long as the user always writes "value(x)", letting ADL pick up the boost::constrained_value::value() free function, the macro/inline function above will silently kick in instead if defined. This gets rid of any requirements on the quality of the optimizer in order to get performance just like an int. Did you already try this and find it problematic?
Intuitively, I'd say this is problematic - the value(int) version wouldn't get picked up by ADL.
Why is ADL an issue when the parameter type is int? The value(int) overload is only declared when we don't care about ADL, since we're not using constrained_value types. Am I missing something?
The value overload for int would have to be in the global scope, or else it won't be found and you'll get compile errors. Defining a function called "value" in the global scope sounds like a big no-no to me. Can you imagine a name more likely to collide with something else? Sebastian

On Mon, Dec 1, 2008 at 11:45 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
Zach Laine wrote:
On Mon, Dec 1, 2008 at 11:32 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
Zach Laine wrote:
The docs include some example code on "using constrained objects in debug mode only", and make reference to using unconstrained<> to allow use of the .value() member function. Why not replace the .value() member function with a free function boost::constrained_value::value(), and then write something like this?
    #ifndef NDEBUG
    typedef bounded_int<int, 0, 100>::type my_int;
    #else
    typedef int my_int;
    # if IM_FINE_WITH_MACROS
    # define value(x) x
    # else
    inline int value(int x) { return x; }
    # endif
    #endif
Then, as long as the user always writes "value(x)", letting ADL pick up the boost::constrained_value::value() free function, the macro/inline function above will silently kick in instead if defined. This gets rid of any requirements on the quality of the optimizer in order to get performance just like an int. Did you already try this and find it problematic?
Intuitively, I'd say this is problematic - the value(int) version wouldn't get picked up by ADL.
Why is ADL an issue when the parameter type is int? The value(int) overload is only declared when we don't care about ADL, since we're not using constrained_value types. Am I missing something?
The value overload for int would have to be in the global scope, or else it won't be found and you'll get compile errors.
Defining a function called "value" in the global scope sounds like a big no-no to me. Can you imagine a name more likely to collide with something else?
Quite true. But the code above is user-supplied, so the user can write their code so that such collisions do not occur, if they need identical behavior to int in NDEBUG mode, or the library author could change the spelling of value() to be something less likely to cause problems, or both. Zach

Zach Laine wrote:
Quite true. But the code above is user-supplied, so the user can write their code so that such collisions do not occur, if they need identical behavior to int in NDEBUG mode, or the library author could change the spelling of value() to be something less likely to cause problems, or both.
Yeah, that would be possible. But you know? I think I'll stick with unconstrained. :-) Sebastian

On Mon, Dec 1, 2008 at 11:57 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
Zach Laine wrote:
Quite true. But the code above is user-supplied, so the user can write their code so that such collisions do not occur, if they need identical behavior to int in NDEBUG mode, or the library author could change the spelling of value() to be something less likely to cause problems, or both.
Yeah, that would be possible. But you know? I think I'll stick with unconstrained. :-)
I agree. In fact, I'd stick with the constrained versions. But there are always some users who will want to throw a switch and get back to plain ints, so there is verifiably no performance penalty. For those users, unconstrained might or might not cut it, depending on their compiler. Zach

Zach Laine wrote:
On Mon, Dec 1, 2008 at 11:57 AM, Sebastian Redl <sebastian.redl@getdesigned.at> wrote:
Zach Laine wrote:
Quite true. But the code above is user-supplied, so the user can write their code so that such collisions do not occur, if they need identical behavior to int in NDEBUG mode, or the library author could change the spelling of value() to be something less likely to cause problems, or both.
Yeah, that would be possible. But you know? I think I'll stick with unconstrained. :-)
I agree. In fact, I'd stick with the constrained versions. But there are always some users who will want to throw a switch and get back to plain ints, so there is verifiably no performance penalty. For those users, unconstrained might or might not cut it, depending on their compiler.
In my experience working with a fixed-point class, creating a class that has no performance penalty over built-in types is possible but requires extremely careful review of the generated code. In some cases the compiler will work fine on simple tests and punt in the more complex code that you're likely to find in production. I could very much see users wanting to fall back on built-in types in performance-critical code, because verifying that there is no abstraction penalty is a lot of work. But I suppose they could do this by adding a layer to facilitate that change on top of constrained. -- Michael Marcin

Uh, that was a "Yes" vote, though I forgot to say so explicitly. Zach

Hi Zach,
From: Zach Laine
There seems to be no way to specify compile-time bounds with one bound unconstrained. This seems like a useful thing to be able to express.
Yes, the assumption is that bounded objects have two bounds, not only one. However, you may easily achieve what you ask for:

    typedef bounded_int<int, 0, boost::integer_traits<int>::const_max>::type non_negative_int;
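And symmetrically for the other direction, should anyone need it (same pattern, just a sketch):

    typedef bounded_int<int, boost::integer_traits<int>::const_min, 0>::type non_positive_int;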
Why not replace the .value() member function with a free function boost::constrained_value::value(), and then write something like this?
I think Sebastian has already explained the problem. However, if you really want to avoid using unconstrained, you may use explicit casts instead of the value() member function:

    #ifndef NDEBUG
    typedef bounded_int<int, 0, 100>::type my_int;
    #else
    typedef int my_int;
    #endif

    // works with x as either bounded_int or int
    // without the need to call value()
    my_int y = std::max(static_cast<int>(x), 2);

Yes, I know it's not perfect. I'd rather use unconstrained too. :P
The note in the docs about the dangers inherent in using the library with floating point types is well taken. However, it would be nice if you provided more support for such uses. For instance, how about providing a type that just checks for NaNs, or one that checks the constraint every time you call .value(), so that eventually you'll catch a range violation, even if it's masked by the underlying value residing in a register for a while?
There's already an assert every time a constrained object is modified -- is that what you mean here?
I have only a passing knowledge of the issues involved, so I realize these suggestions may be naive. [snip more on the topic of FP]
I'm afraid my knowledge in this area is also not sufficient, so I don't even want to open this can of worms. I think that even partial support for FP would encourage people to use it without full understanding, and sooner or later this may put them into trouble. I'd rather leave the warning shouting "don't use built-in floating point types with this library (until you really know what you're doing)" to encourage people to look for other solutions than FP. Anyway, if there are some FP arithmetic experts out there -- you are welcome to express your opinion. ;-)
The implementation seems reasonable, but there are no tests to give me a warm-and-fuzzy that everything works as it seems to on cursory examination.
As I mentioned before, detailed regression tests will be added after the review. Currently the examples file (libs\constrained_value\doc\src\examples.cpp) can be used instead -- it uses most of the functionality of the library and should compile and run returning 0.
For instance, just glancing over the code, in bounded.hpp, the implementations of within_bounds::is_below() and within_bounds::is_above() appear to be wrong. If lower_bound_excluded() or upper_bound_excluded() is true in the respective functions, shouldn't the result always be false?
No, it shouldn't. Why do you think so?
Instead, the *_bound_excluded() == false case in both functions has the exact same semantics as the *_bound_excluded() == true case.
It seems you've misunderstood the point. If a bound is excluded then it means that the range is open. For example, if the upper bound is excluded, the range is [lower, upper). If none of the bounds are excluded, the range is [lower, upper]. Excluding a bound doesn't mean the bound does not exist, it means its value does not belong to the range. Does this clarify your doubts?
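To spell the semantics out (just a sketch of the intent, not the actual implementation):

    // is a value outside the range on the lower side?
    //   bound excluded (open interval):   v <= lower  =>  below
    //   bound included (closed interval): v <  lower  =>  below
    bool is_below(int v, int lower, bool lower_excluded)
    {
        return lower_excluded ? !(lower < v) : (v < lower);
    }

and the mirror image for is_above().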
Sure enough, when I wrote the small test app below, very similar to one of the tutorial examples, I got an unexpected exception: [snip code] b_type bounded(b_type::constraint_type(-5, 5, true, true)); [snip more code] bounded = -6; // throws (!)
I wouldn't say the exception is unexpected, your allowed range is (-5, 5) and you assign -6. The library does exactly what it is supposed to.
Changing the else cases of within_bounds::is_below() and within_bounds::is_above() to always return false fixed the problem.
Sure it "fixes" the problem, because then any value is correct (is NOT below the lower bound and is NOT above the upper bound => is within the allowed range). ;-)
Also, in constrained.hpp, the BOOST_DEFINE_CONSTRAINED_ASSIGNMENT_OPERATOR macro seems a little odd. Why are _op_ and _op_name_ passed in, when _op_ is used to create _op_name_ via token pasting, and _op_name_ is never used?
_op_name_ is used in the documentation comment and is needed so Doxygen generates proper output. Otherwise the %= operator would have invalid documentation, since % is Doxygen's special character and must be escaped.
I consider the fully-generated Doxygen reference documentation to be too detailed. For example, as a user, why must I know that within_bounds<LowerType, UpperType, LowerExclType, UpperExclType, CompareType> derives from compressed_pair<LowerType, LowerExclType>? From my perspective, it's an implementation detail, and therefore just noise.
I agree with you. If you know which Doxygen option turns showing private base classes off, let me know. ;-)
I'd rather see Boostbook-integrated Doxygen references, a la Boost.Xpressive.
Using Doxygen's final output instead of the Boostbook one was a conscious decision. The reasons are:
- Boostbook does not support some of the Doxygen tags used in the code (e.g., @param) and leaves those sections of text out without even a warning; similarly, it leaves out brief descriptions (at least in some cases);
- personally, I find it way easier to navigate through the documentation created by Doxygen than Boostbook, and it is also much more readable to me due to its layout and formatting.
Also, in "Object remembering its past extreme values" you have bounded<> qualified by cv::, it seems without ever declaring cv. From your note about assuming ::boost::constrained_value everywhere, it seems you could just leave it off.
Thanks for spotting this, to be fixed.
I spent about 4 hours reading docs and implementation, and writing small amounts of test code.
Thank you for your time and your vote. ;-) Best regards, Robert

On Mon, Dec 1, 2008 at 5:30 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
    // works with x as either bounded_int or int
    // without the need to call value()
    my_int y = std::max(static_cast<int>(x), 2);
Yes, I know it's not perfect. I'd rather use unconstrained too. :P
Fair enough.
The note in the docs about the dangers inherent in using the library with floating point types is well taken. However, it would be nice if you provided more support for such uses. For instance, how about providing a type that just checks for NaNs, or one that checks the constraint every time you call .value(), so that eventually you'll catch a range violation, even if it's masked by the underlying value residing in a register for a while?
There's already an assert every time a constrained object is modified -- is it something that you mean here?
Not quite. As the quote in your Rationale section points out, the same variable can have more than one value at different points in time, without any mutating operations being performed on it, if it moves out of a register into a memory location. On x86, and probably elsewhere, the registers have greater precision (80 bits for double) than the memory locations (64 bits for double). This means that testing against bounds at the point of modification is not enough. What might be enough, depending on the use case, would be to test against bounds every time the underlying value is made available to the outside world. So that would mean placing a check in value(), in operator double(), etc. Yet another use case would be "close enough is good enough" -- accepting values that are within a user-defined epsilon of either bound. In any case, it sounds like this is not something that you're interested in putting into the library.
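Concretely, the check-on-read idea I have in mind is roughly this -- my own sketch, not using the library's internals, and all the names are made up:

    #include <cassert>

    template <typename Constraint>
    class checked_read_double
    {
    public:
        explicit checked_read_double(double v = 0.0) : v_(v) { assert(Constraint()(v_)); }

        checked_read_double & operator = (double v)
        {
            assert(Constraint()(v));
            v_ = v;
            return *this;
        }

        // re-check every time the value escapes to the outside world, so a
        // violation masked by extra register precision is eventually caught
        double value() const { assert(Constraint()(v_)); return v_; }
        operator double () const { return value(); }

    private:
        double v_;
    };

    struct in_unit_interval
    {
        bool operator () (double x) const { return x >= 0.0 && x <= 1.0; }
    };

    // checked_read_double<in_unit_interval> p(0.5);
    // double q = p; // constraint re-checked on read

Not perfect, but it narrows the window in which a violation can go unnoticed.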
For instance, just glancing over the code, in bounded.hpp, the implementations of within_bounds::is_below() and within_bounds::is_above() appear to be wrong. If lower_bound_excluded() or upper_bound_excluded() is true in the respective functions, shouldn't the result always be false?
No, it shouldn't. Why do you think so?
Because I was reading too fast ;). I misunderstood the *_bound_excluded() functions to indicate whether a bound exists on that side, not whether the bound itself is part of the interval. Zach

From: Zach Laine
Yet another use case would be "close enough is good enough". If the bounds are within a user-defined epsilon of either boundary.
If I understand correctly, this does not solve the problem either. Let's assume you have two values: x and y, where x = y + eps (eps being the user-defined margin of error). One comparison of x and y would indicate their equality (the difference is not greater than eps), while another one might not if x got truncated in the meantime (and y didn't).
In any case, it sounds like this is not something that you're interested in putting in to the library.
If you mean built-in FP support -- not at the moment and not by myself. Best regards, Robert

On Mon, Dec 1, 2008 at 5:30 AM, Jeff Garland <jeff@crystalclearsoftware.com> wrote:
The review of the Robert Kawulak's Constrained Value library begins today December 1, 2008, and will end on December 10th -- I will be the review manager. Please post reviews to the developer list.
Hi Robert,

This seems like a very useful library, and after a cursory look at the documentation I feel that it has nicely well-rounded functionality. I have a couple of questions at this point (mostly extreme nit-picks about things that confused me when trying to think about all the ways I could use this library).

In the basic definitions you have: "Constrained object is a wrapper for another object. It can be used just like the underlying object, with one exception: it can be assigned only a value which conforms to a specified constraint." ...

"It can be used just like the underlying object": I have a suspicion that it can't be used "just like" the underlying object in all circumstances :-) I assume you can't call member functions of the underlying object if it's a class type (with the same syntax), or provide a constrained<int> as an argument to a function that takes an int &. Could you provide a slightly more precise explanation? The examples I looked at all use the constrained object in operator expressions. Is it that it can be used just like the underlying object in (most) operator expressions? In your wrapping iterator example you have: *((iter++).value()) ... so I assume you can't do *(iter++). (if so, why not?)

"it can be assigned only a value which conforms to a specified constraint": when you say assigned, I'm thinking of the assignment operator, but you constrain more than that. Perhaps there is a more inclusive way of saying this? (maybe "it can only hold values which conform to a specified constraint"?)

In your example "Object remembering its past extreme values", the policy is changing the constraint object directly. But, in your tutorial, you have: "Constraint of a constrained object cannot be accessed directly for modification, because the underlying value could become invalid according to the modified constraint. Therefore the constraint of a constrained object is immutable and change_constraint() function has to be used in order to modify the constraint. ..." Is the example violating how the library should be used?

The value() function returns the underlying object by const &... so, I'm assuming that the constraint is not allowed to depend on any mutable parts of the underlying object's state?

Thanks,

Stjepan

Hi Stjepan, You have a very nice gift of catching all possible inaccuracies. :D
From: Stjepan Rajko "It can be used just like the underlying object": I have a suspicion that it can't be used "just like" the underlying object in all circumstances :-) I assume you can't call member functions of the underlying object if it's a class type (with the same syntax), or provide a constrained<int> as an argument to a function that takes an int &.
Of course you're right, this is an informal definition and expresses rather a desire, a design goal, which of course cannot be fully achieved due to the language limitations.
Could you provide a slightly more precise explanation?
I'll try. Some hints? ;-)
*((iter++).value()) ... so I assume you can't do *(iter++). (if so, why not?)
No, you can't. This is because (iter++) is of type constrained<...>, and while it is implicitly convertible to the underlying iterator type, it doesn't have the * operator (actually, it doesn't have any non-mutating operators overloaded). It couldn't have the * operator, because in general case it couldn't know what should be the return type.
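So to dereference you have to go through the conversion or value() explicitly, e.g. (a sketch only, assuming for illustration that the underlying iterator is a std::vector<int>::iterator):

    // let the implicit conversion produce the underlying iterator first
    std::vector<int>::iterator raw = iter++;
    int x = *raw;

which is essentially what *((iter++).value()) spells in a single expression.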
"it can be assigned only a value which conforms to a specified constraint": when you say assigned, I'm thinking of the assignment operator, but you constrain more than that. Perhaps there is a more inclusive way of saying this? (maybe "it can only hold values which conform to a specified constraint"?)
Again, you got me here. Maybe "it can only be given values..."? My intention was to stress the fact that the constraint checking happens each time the object is actually modified (and it is usually modified through the assignment operators, although not exclusively).
In your example "Object remembering its past extreme values", the policy is changing the constraint object directly. But, in your tutorial, you have: "Constraint of a constrained object cannot be accessed directly for modification, because the underlying value could become invalid according to the modified constraint. Therefore the constraint of a constrained object is immutable and change_constraint() function has to be used in order to modify the constraint. ..." Is the example violating how the library should be used?
No. From the perspective of a constrained object's user it's true that the constraint cannot be accessed directly for modification in any way. OTOH the error policy is allowed to modify anything within the constrained object when invoked (as long as the value remains constraint-conforming). This is what the policy in the example does.
The value() function returns the underlying object by const &... so, I'm assuming that the constraint is not allowed to depend on any mutable parts of the underlying object's state?
The constraint may depend on any state, mutable or not -- it's the constrained object's task to make sure that the value is immutable for the "outside world" (and it does so by providing only value access methods returning a const reference). Thanks for feedback, Robert

On Mon, Dec 1, 2008 at 6:34 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
Hi Stjepan,
You have a very nice gift of catching all possible inaccuracies. :D
If I do, it's only because I have a lot of experience in making inaccuracies :-)
From: Stjepan Rajko "It can be used just like the underlying object": I have a suspicion that it can't be used "just like" the underlying object in all circumstances :-) I assume you can't call member functions of the underlying object if it's a class type (with the same syntax), or provide a constrained<int> as an argument to a function that takes an int &.
Of course you're right, this is an informal definition and expresses rather a desire, a design goal, which of course cannot be fully achieved due to the language limitations.
Sure, but I can't guess which limitations you decided not to deal with, and which limitations you cleverly circumvented by making certain assumptions.
Could you provide a slightly more precise explanation?
I'll try. Some hints? ;-)
It seems like we have the following:

* the constrained object holds the underlying object
* the underlying object can only be given values according to a constraint
* the constrained object can replace the underlying object in certain operator expressions (not non-mutating operators, except for stream insertion / extraction)
* the constrained object provides const access to the underlying object

So, perhaps:

Constrained object is a wrapper for another object. It holds the underlying object, and can only be given values which conform to a specified constraint. Thus the set of possible values of a constrained object is a subset of possible values of the underlying object. A constrained object guarantees that its underlying value is constraint-conforming at all times, from its construction until its destruction. The constrained object can be used just like the underlying object in traditionally mutating operator expressions (link to more info) and stream insertions / extractions. It also provides const access to the underlying object via a value() member function.

Or, a more compressed version:

Constrained object is a wrapper for another object, and can only be given values which conform to a specified constraint. A constrained object guarantees that its underlying value is constraint-conforming at all times, from its construction until its destruction. The constrained object can be used just like the underlying object in certain operator expressions (link to more info), and provides const access to the underlying object via a value() member function.

With whatever you choose to go with, I don't mind that it is informal just as long as it is not inaccurate or potentially misleading.
*((iter++).value()) ... so I assume you can't do *(iter++). (if so, why not?)
No, you can't. This is because (iter++) is of type constrained<...>, and while it is implicitly convertible to the underlying iterator type, it doesn't have the * operator (actually, it doesn't have any non-mutating operators overloaded). It couldn't have the * operator, because in general case it couldn't know what should be the return type.
Ah, I see. You focused on the typically mutating operators because there you can reasonably assume that the return type should be the constrained object. This would be good to add to the docs, if it's not already there.
"it can be assigned only a value which conforms to a specified constraint": when you say assigned, I'm thinking of the assignment operator, but you constrain more than that. Perhaps there is a more inclusive way of saying this? (maybe "it can only hold values which conform to a specified constraint"?)
Again, you got me here. Maybe "it can only be given values..."? My intention was to stress the fact that the constraint checking happens each time the object is actually modified (and it is usually modified through the assignment operators, although not exclusively).
I like that better.
In your example "Object remembering its past extreme values", the policy is changing the constraint object directly. But, in your tutorial, you have: "Constraint of a constrained object cannot be accessed directly for modification, because the underlying value could become invalid according to the modified constraint. Therefore the constraint of a constrained object is immutable and change_constraint() function has to be used in order to modify the constraint. ..." Is the example violating how the library should be used?
No. From the perspective of a constrained object's user it's true that the constraint cannot be accessed directly for modification in any way. OTOH the error policy is allowed to modify anything within the constrained object when invoked (as long as the value remains constraint-conforming). This is what the policy in the example does.
OK, that makes sense. The policy is the one place that guarantees to leave the object in a valid constrained state, so it is the one place that is allowed to directly change the constraint. This would also be good to mention or reference when you talk about change_constraint (since as a user of the constrained object, I could be providing the policy myself).
The value() function returns the underlying object by const &... so, I'm assuming that the constraint is not allowed to depend on any mutable parts of the underlying object's state?
The constraint may depend on any state, mutable or not -- it's the constrained object's task to make sure that the value is immutable for the "outside world" (and it does so by providing only value access methods returning a const reference).
Sorry, I meant `mutable` as in the mutable keyword. For example:

    struct observable_int
    {
        // initialization omitted
        int observe() const { m_times_observed++; return m_value; }
        unsigned times_observed() const { return m_times_observed; }
    private:
        int m_value;
        mutable unsigned m_times_observed; // initialized to 0
    };

    // One could think that this would be a reasonable constraint
    struct is_viewed_few_times
    {
        bool operator () (const observable_int &x) const
        { return x.times_observed() < 10; }
    };

    constrained<observable_int, is_viewed_few_times> x;

    // but it is not enforced
    for(int i = 0; i < 20; i++)
        x.value().observe(); // never complains

Speaking of access to the underlying object in situations where you need non-const access to it... you could provide a member function that takes a unary Callable as a parameter, and calls the Callable with a copy of the underlying object as the argument. After the call returns, it assigns the (perhaps modified) value of the copy back to the underlying object (through the policy / checking the constraint). AFAICT, your guarantee is still never violated, and this would provide a really useful piece of functionality. Instead of using a copy you could also use the underlying object as the argument directly, but that weakens your guarantee (and if the Callable keeps an address of the object, it throws the guarantee out the window).

Best,

Stjepan

From: Stjepan Rajko
Ah, I see. You focused on the typically mutating operators because there you can reasonably assume that the return type should be the constrained object. This would be good to add to the docs, if it's not already there.
To be done.
OK, that makes sense. The policy is the one place that guarantees to leave the object in a valid constrained state, so it is the one place that is allowed to directly change the constraint. This would also be good to mention or reference when you talk about change_constraint (since as a user of the constrained object, I could be providing the policy myself).
To be done.
Sorry, I meant `mutable` as in the mutable keyword. For example:
    struct observable_int
    {
        // initialization omitted
        int observe() const { m_times_observed++; return m_value; }
        unsigned times_observed() const { return m_times_observed; }
    private:
        int m_value;
        mutable unsigned m_times_observed; // initialized to 0
    };

    // One could think that this would be a reasonable constraint
    struct is_viewed_few_times
    {
        bool operator () (const observable_int &x) const
        { return x.times_observed() < 10; }
    };

    constrained<observable_int, is_viewed_few_times> x;

    // but it is not enforced
    for(int i = 0; i < 20; i++)
        x.value().observe(); // never complains
You're right, this is not a valid usage of constrained. The rule is that the result of the constraint invocation for the value must always be identical as long as you don't access either of them as non-const. Here the value and the constraint are accessed as const only, yet the result of the constraint invocation may be different for two subsequent invocations. I guess this is another thing that should be stated explicitly in the docs? ;-)
Speaking of access to the underlying object in situations where you need non-const access to it... you could provide a member function that takes a unary Callable as a parameter, and calls the Callable with a copy of the underlying object as the argument. After the call returns, it assigns the (perhaps modified) value of the copy back to the underlying object (through the policy / checking the constraint). AFAICT, your guarantee is still never violated, and this would provide a really useful piece of functionality.
Maybe I've missed something, but this is not too different from what you can already do (having constrained x and callable f):

    // copy the value, modify, assign
    x = f(x.value());

The difference is that here f is responsible for making the copy. Are there some other important factors that would justify adding the member you describe?
Instead of using a copy you could also use the underlying object as the argument directly, but that weakens your guarantee (and if the Callable keeps an address of the object, throws the guarantee out the window).
No, we don't want this. ;-) Best regards, Robert

On Tue, Dec 2, 2008 at 6:53 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
You're right, this is not a valid usage of constrained. The rule is that the result of the constraint invocation for the value must always be identical as long as you don't access either of them as non-const. Here the value and the constraint are accessed as const only, yet the result of the constraint invocation may be different for two subsequent invocations. I guess this is another thing that should be stated explicitly in the docs? ;-)
Yes ;-) - I think the docs should be more specific on the requirements on the underlying type and constraint required to guarantee the guarantee (perhaps in a separate section that focuses on just the requirements and in detail). Hopefully, there is a concise way of stating the requirements. You might also want to consider the following cases:

    class unconstrainable1
    {
    public:
        unconstrainable1() { s_last_constructed = this; }
        unconstrainable1 *last_constructed() const { return s_last_constructed; }
    private:
        static unconstrainable1 *s_last_constructed;
        some_state m_state;
    };

The above is a forced example, but the same principle (where the class provides non-const access to itself or its state in its constructor) applies in more realistic cases (like objects that register themselves in a registry).

    class unconstrainable2
    {
    public:
        unconstrainable2(some_state &state) : m_state_ptr(&state) {}
        some_state &state() const { return *m_state_ptr; }
    private:
        // can't design constraints based on *m_state_ptr
        some_state *m_state_ptr;
    };

Perhaps a concise way to describe a requirement (addressing both these cases, as well as the const mutable problem) is to say that the constraint must depend only on what is mutable by expressions that require a non-const reference to the underlying object?

Are there also requirements that the way in which the underlying object is CopyConstructible and/or Swappable maintain the constraint? (I know a lot of these might seem obvious, but it's good to have an accurate list of things that might go wrong when you're thinking about making a type constrainable, or something *is* going wrong and you're trying to figure out why)
Speaking of access to the underlying object in situations where you need non-const access to it... you could provide a member function that takes a unary Callable as a parameter, and calls the Callable with a copy of the underlying object as the argument. After the call returns, it assigns the (perhaps modified) value of the copy back to the underlying object (through the policy / checking the constraint). AFAICT, your guarantee is still never violated, and this would provide a really useful piece of functionality.
Maybe I've missed something, but this is not too different from what you can already do (having constrained x and callable f):
    // copy the value, modify, assign
    x = f(x.value());
The difference is that here f is responsible for making the copy. Are there some other important factors that would justify adding the member you describe?
if f is

    void f(value_type &v);

then you need:

    value_type temp = x.value();
    f(temp);
    x = temp;

If this is a frequent use case, I'd prefer to be able to write call_using_copy(f, x); or call_using_copy(&f, x); or something like that. I guess it doesn't have to be a member (although that would be ok too). In any case, this is not a big deal, as it can be added as a free function.

I just looked at the code (it's very nice to look at!), and am trying to get a grasp on the policy design. At first I had some doubts about it, but am getting more and more convinced that you have the design right. This is what I understand:

* the policy gets called iff there is a problem
* a problem happens when the underlying object is constructed with an invalid value, in which case the policy gets called with that invalid value as both the first and second parameters
* a problem happens when an invalid value wants to be assigned to the underlying object, in which case the policy gets called with the current (valid) value as the first argument, and the new (invalid) value as the second argument
* the first argument must satisfy the constraint when/if the policy returns.

OK, that seems pretty crisp to me. Is there anything else to it?

Best,

Stjepan
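P.S. To make the call_using_copy suggestion concrete, this is roughly what I have in mind -- my own sketch of a hypothetical free function (assuming constrained<> exposes a value_type typedef), not something the library provides:

    // calls f with a copy of the underlying value, then assigns the
    // (possibly modified) copy back through the usual constraint check
    template <typename F, typename Constrained>
    void call_using_copy(F f, Constrained & x)
    {
        typename Constrained::value_type temp = x.value();
        f(temp);
        x = temp; // the constraint / error policy kicks in here
    }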

From: Stjepan Rajko On Tue, Dec 2, 2008 at 6:53 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
The rule is that the result of the constraint invocation for the value must always be identical as long as you don't access either of them as non-const. [...] Perhaps a concise way to describe a requirement (addressing both these cases, as well as the const mutable problem) is to say that the constraint must depend only on what is mutable by expressions that require a non-const reference to the underlying object?
Not exactly. The constraint may depend on its own state too, but again it's not an allowed situation if the constraint may be somehow altered through a const reference and so change its judgement for an unchanged value.
Are there also requirements that the way in which the underlying object is CopyConstructable and/or Swappable maintain the constraint?
Implicit, yes.
value_type temp = x.value(); f(temp); x = temp;
If this is a frequent use case, I'd prefer to be able to write call_using_copy(f, x); or call_using_copy(&f, x); or something like that.
So is it a frequent use case? I have no idea. I never needed this.
I just looked at the code (it's very nice to look at!), and am trying to get a grasp on the policy design. At first I had some doubts about it, but am getting more and more convinced that you have the design right. This is what I understand:

* the policy gets called iff there is a problem
* a problem happens when the underlying object is constructed with an invalid value, in which case the policy gets called with that invalid value as both the first and second parameters
* a problem happens when an invalid value wants to be assigned to the underlying object, in which case the policy gets called with the current (valid) value as the first argument, and the new (invalid) value as the second argument
* the first argument must satisfy the constraint when/if the policy returns.
Exactly. As to the last point -- more generally, the first argument must satisfy the third argument (which can also be modified by the policy). Regards, Robert
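PS. To make the protocol above concrete, a policy that silently ignores invalid assignments could look more or less like this (just a sketch written for this mail using the signature described above, not code taken from the library):

    #include <stdexcept>

    struct ignore_invalid
    {
        template <typename V, typename C>
        void operator () (V & value, const V & new_value, C & constraint) const
        {
            // construction case: 'value' itself is invalid and there is
            // nothing valid to fall back to, so give up
            if( !constraint(value) )
                throw std::invalid_argument("invalid initial value");
            // assignment case: keep the old (valid) value,
            // silently discarding 'new_value'
        }
    };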

----- Original Message ----- From: "Jeff Garland" <jeff@crystalclearsoftware.com> To: <boost@lists.boost.org>; <boost-users@lists.boost.org> Sent: Monday, December 01, 2008 1:30 PM Subject: [boost] [review][constrained_value] Review of Constrained Value Library begins today

Hi,

There is a thing that I don't like in the design: the fact that you can change the constraint at runtime. I would prefer to have two separate hierarchies, one for constrained values that preserve their constraints, error handling, ... statically, and one for those whose constraint can be changed at runtime. I expect that the statically preserving constrained values can be implemented in a more space- and time-efficient way.

    constrained_value ::= static_type | dynamic_type

For the constrained values that can change their constraint at runtime I see two cases: the constraint is attached to the instance, which is your case, or it is attached to a type.

    mutating_type ::= by_instance | by_type

When attached to a type, the type needs to maintain the set of instances. Instead of changing the type, a split operation can be provided, resulting in a transfer of the instances satisfying the new constraint to the new type. Of course this means more space and time consumed, but ...

Best regards,

Vicente

Salut Vicente, :)
From: vicente.botet
There is a thing that I don't like in the design: the fact that you can change the constraint at runtime.
If you use a constraint that works statically then there is no way to change it. It is bound to the constrained type.
I would prefer to have two separate hierarchies, one for constrained values that preserve their constraints, error handling, ... statically, and one for those whose constraint can be changed at runtime.
I wouldn't prefer to have two separate hierarchies with almost identical functionality and differing only in details.
I expect that the statically preserving constrained values can be implemented in a more space- and time-efficient way.
Did you find any problems with suboptimal work of the current implementation for the static cases?
For the constrained values that can change their constraint at runtime I see two cases: the constraint is attached to the instance, which is your case, or it is attached to a type.

    mutating_type ::= by_instance | by_type

When attached to a type, the type needs to maintain the set of instances. Instead of changing the type, a split operation can be provided, resulting in a transfer of the instances satisfying the new constraint to the new type. Of course this means more space and time consumed, but ...
There was a similar idea on the users list and I'm still not convinced this leads to something good. If you request to change the constraint of a whole type and some instances don't obey your request, then what is the point in doing this? Best regards, Robert

Hola Robert! ----- Original Message ----- From: "Robert Kawulak" <robert.kawulak@gmail.com> To: <boost@lists.boost.org> Sent: Friday, December 05, 2008 11:40 AM Subject: Re: [boost] [review][constrained_value] Review of Constrained Value Library begins today
Salut Vicente, :)
From: vicente.botet
There is a thing that I don't like in the design: the fact that you can change the constraint at runtime.
If you use a constraint that works statically then there is no way to change it. It is bound to the constrained type.
Right.
I would prefer to have two separate hierarchies, one for constrained values that preserve their constraints, error handling, ... statically, and one for those whose constraint can be changed at runtime.
I wouldn't prefer to have two separate hierarchies with almost identical functionality and differing only in details.
Well, we can have a single type that covers the whole domain, but we will need more metaprogramming.
I expect that the statically preserving constrained values can be implemented in a more space- and time-efficient way.
Did you find any problems with suboptimal work of the current implementation for the static cases?
I expect that a constrained integer will have the same size as an int, i.e. sizeof(int). What is the size of an instance of the constrained class? See below one possible implementation of static_constrained. Of course, the implementation is not complete.
For the constrained values that can change their constraint at runtime I see two cases: the constraint is attached to the instance, which is your case, or it is attached to a type.

    mutating_type ::= by_instance | by_type

When attached to a type, the type needs to maintain the set of instances. Instead of changing the type, a split operation can be provided, resulting in a transfer of the instances satisfying the new constraint to the new type. Of course this means more space and time consumed, but ...
There was a similar idea on the users list and I'm still not convinced this leads to something good. If you request to change the constraint of a whole type and some instances don't obey your request, then what is the point in doing this?
I'm not requesting that you implement constrained values for which the user can change the constraint globally at runtime. I was only exploring the domain. Instead of changing the constraint of a whole type I was suggesting to provide a split operation that takes the instances satisfying the new constraints and transfers them from the old type to the new type. I have no concrete use case in mind given the performance implications of the split operation, but perhaps this could be useful to someone.

Best,

Vicente

    template <
        typename ValueType,
        typename ConstraintPolicy = boost::function1<bool, const ValueType &>,
        typename ErrorPolicy = throw_exception<>
    >
    struct constrained_type
    {
        typedef ValueType value_type;
        typedef ConstraintPolicy constraint_type;
        typedef ErrorPolicy error_handler_type;

        constrained_type() : cp_(), ep_() {}
        constrained_type(constraint_type c) : cp_(c) {}
        constrained_type(constraint_type c, error_handler_type eh) : cp_(c), ep_(eh) {}

        ConstraintPolicy cp_;
        ErrorPolicy ep_;
    };

    template < typename ConstrainedTraits >
    struct static_constrained
    {
        typedef typename ConstrainedTraits::type::value_type value_type;
        typedef typename ConstrainedTraits::type::constraint_type constraint_type;
        typedef typename ConstrainedTraits::type::error_handler_type error_handler_type;

        static_constrained(const value_type & v) : value_(v) { _initialize(); }

        const value_type & value() const { return value_; }
        operator const value_type & () const { return value(); }
        const constraint_type & constraint() const { return ConstrainedTraits::value.cp_; }
        const error_handler_type & error_handler() const { return ConstrainedTraits::value.ep_; }

    private:
        void _initialize()
        {
            if( !constraint()(value()) )
            {
                error_handler()(value(), value(), constraint());
            }
        }

        value_type value_;
    };

    struct even_traits
    {
        typedef constrained_type<int, is_even> type;
        static const type value;
    };
    const even_traits::type even_traits::value;

    typedef static_constrained<even_traits> even_type;

    int main()
    {
        even_type a(2);
        std::cout << "sizeof(even_type)=" << sizeof(even_type) << std::endl;
        even_type b(1); // throws
    }

From: vicente.botet
I would prefer to have two separate hierarchies, one for constrained values that preserve their constraints, error handling, ... statically, and one for those whose constraint can be changed at runtime.
I wouldn't prefer to have two separate hierarchies with almost identical functionality and differing only in details.
Well, we can have a single type that covers the whole domain, but we will need more metaprogramming.
We already have a single type that covers both static and dynamic constraints, so what is the point? Did I misunderstand something?
I expect that a constrained integer will have the same size as an int, i.e. sizeof(int). What is the size of an instance of the constrained class?
Here are some examples:

GCC 4.3.2:
  4 = sizeof (int)
  4 = sizeof (bounded_int<int, 0, 128>::type)
 12 = sizeof (bounded<int, int, int>::type)
  4 = sizeof (constrained<int, is_even>)

MSVC 8.0 SP1:
  4 = sizeof (int)
  8 = sizeof (bounded_int<int, 0, 128>::type)
 20 = sizeof (bounded<int, int, int>::type)
  8 = sizeof (constrained<int, is_even>)

I don't know why MSVC cannot optimise the size as well as GCC, but anyway the library allows for perfect size optimisation with some compilers.
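(For context, the size optimisation comes down to the empty base class optimisation applied through boost::compressed_pair. A minimal, library-independent sketch:

    #include <iostream>
    #include <boost/compressed_pair.hpp>

    struct is_even // stateless constraint
    {
        bool operator () (int v) const { return v % 2 == 0; }
    };

    int main()
    {
        // compressed_pair derives from empty members instead of storing them,
        // so the pair can collapse to the size of the int alone
        std::cout << sizeof (boost::compressed_pair<int, is_even>) << std::endl;
        return 0;
    }

GCC prints 4 here; how consistently a compiler applies EBO when several empty classes are involved varies, which may be related to the MSVC numbers above.)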
See below one possible implementation of static_constrained. Of course, the implementation is not complete. [snip] typedef static_constrained<even_traits> even_type;
int main() { even_type a(2); std::cout << "sizeof(even_type)=" << sizeof(even_type) << std::endl; even_type b(1); // throws }
So how is this different from: typedef constrained<int, is_even> even_type; ? Best regards, Robert

----- Original Message ----- From: "Robert Kawulak" <robert.kawulak@gmail.com> To: <boost@lists.boost.org> Sent: Saturday, December 06, 2008 4:31 AM Subject: Re: [boost] [review][constrained_value] Review of Constrained Value Library begins today
From: vicente.botet
I would prefer to have two separate hierarchies, one for constrained values that preserve their constraints, error handling, ... statically, and one for those whose constraint can be changed at runtime.
I wouldn't prefer to have two separate hierarchies with almost identical functionality and differing only in details.
Well, we can have a single type that covers the whole domain, but we will need more metaprogramming.
We already have a single type that covers both static and dynamic constraints, so what is the point? Did I misunderstand something?
I'm sorry, I didn't see that you already provide static constraints. From the documentation it was not clear to me that with your library you can do

    bounded<int>::type dyn_v;
    change_lower_bound(dyn_v, -5);

but not

    bounded_int<int, 0, 128>::type _v;
    change_lower_bound(_v, -5); // does not compile

I suppose that I've missed the difference between 'bounded_int' and 'bounded<int>' and that I have skipped this sentence in the documentation: "The trick is to "convert" a value into a type, i.e. create a type that can be converted to the desired type yielding the value". It would be great if the documentation stated explicitly that change_lower_bound(v, -5); does not compile for bounded_int. And why not find a better name for bounded_int?
I expect that a constrained integer will have the same size as an int, i.e. sizeof(int). Which is the size of an instance of the constrained class?
Here are some examples:
GCC 4.3.2:
4 = sizeof (int) 4 = sizeof (bounded_int<int, 0, 128>::type) 12 = sizeof (bounded<int, int, int>::type) 4 = sizeof (constrained<int, is_even>)
MSVC 8.0 SP1:
4 = sizeof (int) 8 = sizeof (bounded_int<int, 0, 128>::type) 20 = sizeof (bounded<int, int, int>::type) 8 = sizeof (constrained<int, is_even>)
I don't know why MSVC cannot optimise the size as well as GCC, but anyway the library allows for perfect size optimisation with some compilers.
I hope you will find how to solve this issue. This is exactly what I was looking for, but it was not evident to me that your library already provided it. I see now that this is really a well-designed library. Thanks, Vicente

From: vicente.botet
It would be great if the documentation stated explicitly that change_lower_bound(v, -5); does not compile for bounded_int.
I'll try to somehow make this more obvious in the docs.
And why not find a better name for bounded_int?
Any suggestions?
This is exactly what I was looking for, but it was not evident to me that your library already provided it. I see now that this is really a well-designed library.
Nice to hear this. ;-) Best regards, Robert

Robert Kawulak wrote:
From: vicente.botet
...
We already have a single type that covers both static and dynamic constraints, so what is the point? Did I misunderstand something?
I expect that a constrained integer will have the same size as an int, i.e. sizeof(int). Which is the size of an instance of the constrained class?
Here are some examples:
GCC 4.3.2:
4 = sizeof (int) 4 = sizeof (bounded_int<int, 0, 128>::type) 12 = sizeof (bounded<int, int, int>::type) 4 = sizeof (constrained<int, is_even>)
MSVC 8.0 SP1:
4 = sizeof (int) 8 = sizeof (bounded_int<int, 0, 128>::type) 20 = sizeof (bounded<int, int, int>::type) 8 = sizeof (constrained<int, is_even>)
I don't know why MSVC cannot optimise the size as well as GCC, but anyway the library allows for perfect size optimisation with some compilers.
Another reviewer mentioned he saw a problem in the implementation's use of EBO. I can't find that posting, but I thought it was Paul Bristow or John Maddock. Perhaps that explains the MSVC size issue. Jeff

On Sat, Dec 6, 2008 at 9:32 AM, Jeff Flinn <TriumphSprint2000@hotmail.com> wrote:
Robert Kawulak wrote:
From: vicente.botet
...
We already have a single type that covers both static and dynamic constraints, so what is the point? Did I misunderstand something?
I expect that a constrained integer will have the same size as an int, i.e. sizeof(int). Which is the size of an instance of the constrained class?
Here are some examples:
GCC 4.3.2:
4 = sizeof (int) 4 = sizeof (bounded_int<int, 0, 128>::type) 12 = sizeof (bounded<int, int, int>::type) 4 = sizeof (constrained<int, is_even>)
MSVC 8.0 SP1:
4 = sizeof (int) 8 = sizeof (bounded_int<int, 0, 128>::type) 20 = sizeof (bounded<int, int, int>::type) 8 = sizeof (constrained<int, is_even>)
I don't know why MSVC cannot optimise the size as well as GCC, but anyway the library allows for perfect size optimisation with some compilers.
Another reviewer mentioned he saw a problem in the implementation's use of EBO. I can't find that posting, but I thought it was Paul Bristow or John Maddock. Perhaps that explains the MSVC size issue.
It was John Maddock: http://tinyurl.com/6mnzxz He also suggests that the strategy he proposes would perhaps get EBO on more compilers. Stjepan

From: Stjepan Rajko
Another reviewer mentioned he saw a problem in the implementations use of EBO. I can't find that posting, but I thought it was Paul Bristow or John Maddock. Perhaps that explains the MSVC size issue.
It was John Maddock: http://tinyurl.com/6mnzxz
He also suggests that the strategy he proposes would perhaps get EBO on more compilers.
His suggestion was not based on the actual implementation (please see my reply to the post). I don't see a way to optimise this more, but if somebody shows that it's possible, then I'll gladly implement it. Best regards, Robert
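For readers unfamiliar with the EBO being discussed, a minimal standalone sketch of the effect (hypothetical names, not the library's actual layout): a stateless constraint stored as an empty base class can occupy no space, while the same constraint stored as a data member cannot.

#include <iostream>

struct is_even_pred                  // stateless predicate: an empty class
{
    bool operator()(int v) const { return v % 2 == 0; }
};

struct with_member                   // predicate held as a member: costs at least one byte plus padding
{
    int value;
    is_even_pred pred;
};

struct with_ebo : is_even_pred       // predicate held as an empty base: EBO can make it free
{
    int value;
};

int main()
{
    std::cout << sizeof(int) << ' '          // e.g. 4
              << sizeof(with_member) << ' '  // typically 8
              << sizeof(with_ebo) << '\n';   // typically 4 where EBO is performed
}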

My simple review: I spent two hours, read the document once and looked at constrained.hpp once. The code is well documented; I don't have any technical concerns at first glance. I would like DEBUG sorted out though. Would I typedef all the bounded types? I am not sure, I would rather have an option to disable it using a macro as well. I tried to compile a test program using gcc 4.4 and linux, but I don't have utility/swap.hpp so I gave up. I assume this is in a boost beta/sandbox somewhere.

Here are my ideas: In my own code I already have a class called cyclic_iterator that uses the boost iterator facade. It does the same thing as this wrapping iterator does. I also have a class called defaulted<typename T, T value> that will default-initialize a value so I don't forget to do it in the constructor. This library reminds me of that. You can combine it if you want. :) Or add something like this:

class A {
    A() { } // fails
    must_initialize<int> t;
};

In OpenGL, they use GLenum for enums, but have a list of #defines with ints, so there is no type checking. I can see when this bounded class would be useful, e.g. checking GL_FROM/GL_TO. Constant expression checks: if Boost eventually uses C++0x, I'm sure you could do a static_assert with constexpr overloading and have many compile-time checks.

I definitely think "wrapping iterator" needs to be in Boost somewhere, and before I made cyclic_iterator I was surprised it wasn't. wrapping_int is useful. I like the effort put into this project, thank you Robert. Overall I would accept it, it's simple and lightweight. Can't be too harmful. Chris
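A hedged sketch of the defaulted<T, value> helper Chris mentions (his own class, not part of the reviewed library; the details are guessed from his one-line description):

template <typename T, T DefaultValue>
class defaulted
{
public:
    defaulted() : value_(DefaultValue) { }   // default construction picks the chosen default
    defaulted(T v) : value_(v) { }
    operator T() const { return value_; }    // implicit conversion back to T
private:
    T value_;
};

// usage: defaulted<int, 10> d;  // d converts to int with value 10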

Hi Chris,
From: Chris
I spent two hours, read the document once.
Thank you for your time.
I would like DEBUG sorted out though. Would I typedef all the bounded types? I am not sure, I would rather have an option to disable it using a macro as well.
You mean the possibility to turn all constrained objects into unconstrained ones by defining one macro? I initially considered this, but I rejected the idea because you may want to turn some checks off while leaving some of them on in one program (note that constrained values are useful not only for debugging). However, I see that this functionality may be useful for people doing only debug checks and maybe I'll add it if I find an easy and not too ugly way to do this.
I tried to compile a test program using gcc 4.4 and linux, but I don't have utility/swap.hpp so I gave up. I assume this in a boost beta/sandbox somewhere.
http://svn.boost.org/svn/boost/branches/release/boost/utility/swap.hpp
I have a class called defaulted<typename T, T value>, that will default initialize a value so I don't forget to do it in the constructor. This library reminds me of that. You can combine it if you want. :)
It might be possible to use bounded_int<defaulted<int, 10>, 10, 20>::type, which should probably work if defaulted<T, ...> is implicitly convertible to T (but also would require you to put calls to value() here and there).
Or add something like this: class A {
A() { } // fails
must_initialize<int> t; };
The problem is that the fact whether the default value is valid or not depends on the constraint used and this can't be verified at compile-time in most cases. Probably it's possible to implement a (a bit complicated, I'm afraid) mechanism disabling the default constructor in cases like bounded_int<int, 10, 20>::type and this may be worth considering in the future. Best regards, Robert

Robert Kawulak wrote:
I would like DEBUG sorted out though. Would I
typedef all the bounded types? I am not sure, I would rather have an option to disable it using a macro as well.
You mean the possibility to turn all constrained objects into unconstrained by defining one macro? I initially considered this, but I rejected the idea because you may want to turn some checks off while leaving some of them on in one program (note, that constrained values are useful not only for debugging).
I understand. Quick question: if I made a NullErrorPolicy (an empty one) and had compiler optimization on, shouldn't the compiler remove all bounds checks? There would be no need for me to worry about optimization and slowness. This would probably be the proper way, rather than my_int types:

#ifdef MYDEBUG
typedef NullErrorPolicy Policy;
#else
typedef throw_exception<my_exception> Policy;
#endif

(throw_exception: http://student.agh.edu.pl/%7Ekawulak/constrained_value/reference/structboost_1_1constrained__value_1_1throw__exception.html)

Chris
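A hypothetical sketch of the NullErrorPolicy Chris has in mind: an error handler that simply does nothing, so an invalid assignment is silently accepted and an optimiser can in principle fold the whole check away. The call signature below is modelled on the static_constrained example earlier in this thread; the real library's error-policy concept may differ (in particular, it may pass the candidate value by non-const reference so the policy can adjust it).

struct NullErrorPolicy
{
    template <typename Value, typename Constraint>
    void operator()(const Value & /*old_value*/,
                    const Value & /*new_value*/,
                    const Constraint & /*constraint*/) const
    {
        // intentionally empty: accept the new value unchanged
    }
};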

From: Chris
Quick question: if I made a NullErrorPolicy (an empty one) and had compiler optimization on, shouldn't the compiler remove all bounds checks? There would be no need for me to worry about optimization and slowness.
This would probably be the proper way, rather than my_int types:

#ifdef MYDEBUG
typedef NullErrorPolicy Policy;
#else
typedef throw_exception<my_exception> Policy;
#endif

(throw_exception: http://student.agh.edu.pl/%7Ekawulak/constrained_value/reference/structboost_1_1constrained__value_1_1throw__exception.html)
And why not simply as described in http://tinyurl.com/6bda78 ?

Robert Kawulak wrote:
From: Chris
Quick question: if I made a NullErrorPolicy (an empty one) and had compiler optimization on, shouldn't the compiler remove all bounds checks? There would be no need for me to worry about optimization and slowness.
This would probably be the proper way, rather than my_int types:

#ifdef MYDEBUG
typedef NullErrorPolicy Policy;
#else
typedef throw_exception<my_exception> Policy;
#endif

(throw_exception: http://student.agh.edu.pl/%7Ekawulak/constrained_value/reference/structboost_1_1constrained__value_1_1throw__exception.html)
And why not simply as described in http://tinyurl.com/6bda78 ?
Because if I have 100 uses of constrained, I don't have 100 typedefs or more templates. I only need that one Policy typedef, and in my code:

#ifdef MYDEBUG
typedef NullErrorPolicy Policy;
#else
typedef throw_exception<my_exception> Policy;
#endif

struct A {
    bounded_int<int, 0, 100, Policy> a;
    bounded_int<int, 50, 100, Policy> b;
    bounded_int<int, 2, 3, Policy> c;
};

It would be the same for compiler optimization as unconstrained, wouldn't it? It should remove any useless code.

From: Chris
#ifdef MYDEBUG
typedef NullErrorPolicy Policy;
#else
typedef throw_exception<my_exception> Policy;
#endif
struct A {
bounded_int<int, 0, 100, Policy> a; bounded_int<int, 50, 100, Policy> b; bounded_int<int, 2, 3, Policy> c;
};
It would be the same for compiler optimization as unconstrained, wouldn't it? It should remove any useless code.
In theory yes, but in practice this is a difficult optimisation task -- see http://article.gmane.org/gmane.comp.lib.boost.devel/174845/ .

From: Robert Kawulak
From: Chris Or add something like this: class A {
A() { } // fails
must_initialize<int> t; };
The problem is that the fact whether the default value is valid or not depends on the constraint used and this can't be verified at compile-time in most cases.
This is even more tricky, because in some cases the error policy is able to adjust the value to make it constraint conforming. So should the default construction be disabled in such cases too or not?

Robert Kawulak wrote:
From: Robert Kawulak
From: Chris Or add something like this: class A {
A() { } // fails
must_initialize<int> t; };
The problem is that the fact whether the default value is valid or not depends on the constraint used and this can't be verified at compile-time in most cases.
This is even more tricky, because in some cases the error policy is able to adjust the value to make it constraint conforming. So should the default construction be disabled in such cases too or not?
Sorry, I did not clarify that there were no default template parameters (no error policy). must_initialize was not referring to any of your constrained classes. It was just a simple class to be added to your library if you wanted, that forces the user to initialize a variable. That is the compile-time constraint.

template <typename T>
class must_initialize
{
public:
    must_initialize(T t) : _value(t) { }
    // operator overloads here
private:
    must_initialize();   // no default construction
    T _value;
};

From: Chris
Sorry, I did not clarify that there were no default template parameters (no error policy). must_initialize was not referring to any of your constrained classes. It was just a simple class to be added to your library if you wanted, that forces the user to initialize a variable. That is the compile-time constraint.
template <typename T>
class must_initialize
{
public:
    must_initialize(T t) : _value(t) { }
    // operator overloads here
private:
    must_initialize();
    T _value;
};
I'm not sure I got it -- how is must_initialize supposed to be used? Like this?

typedef constrained<must_initialize<int>, is_odd> odd_int;
odd_int x;    // error
odd_int y(1); // OK

This would indeed make sense, but I think this class should be added to Boost.Utility (like value_initialized) rather than here... Best regards, Robert

Greetings, Nice library. I vote for inclusion with the proviso that Robert accepts floating-point support provided by those of us who need it.
- What is your evaluation of the design?
This is a very general design which I believe covers most use cases. I would need floating-point support and I would be glad to help as I am able. My view is that you just can't actually use any of the built-in operators, but the same operations exist if you figure in a little epsilon. Note that although a very small number like 1e-10 works most of the time, epsilon should really be proportional to the size of the arguments. Surely this is already covered in Boost somewhere? Or please correct me, float fans, if I'm underestimating the problem. Anyway, I don't think the design would need to be modified in the slightest; there just need to be some convenience predicates to plug in. It sounds like Robert is willing to integrate any contributions, and I don't have any doubt that they will materialize before the library is actually released. Similarly, the ability to specify multiple constraints is vital, but does not need to be directly supported because predicates can be combined using the STL and beyond. I am glad that a method is described for non-integer compile-time fixed bounds. Too bad it takes so much typing, but I don't know how that can be avoided without macros. :-p
- What is your evaluation of the implementation?
I haven't looked at the code, just glanced at the examples. It is imperative that constrained objects not take any more space than the underlying objects, if the predicate and error policy are dataless. I agree with the use case of mapping constrained objects onto memory that interfaces with other languages such as C. The final version should have sizeof tests, which should pass on the major compilers.
- What is your evaluation of the documentation?
Very well written. But perhaps it doesn't have to be as breezy and light after the first page, where that approach is perfect. I would like to see the space costs spelled out in each section. I didn't have any trouble understanding that anything you specify as a template parameter is free, anything that is changeable at runtime is going to cost exactly what you'd expect, but sometimes it is nice to be reassured. Maybe I just prefer a description to a tutorial. I like the level of detail of everything from "Bounding objects with open ranges" on. All of my concerns were alleviated, but only after reading all that way in. Each compile-time feature should say "this doesn't take any extra space", each run-time feature "this costs X and Y." Ditto in the examples. Floating point FUD needs to be excised from the documentation, once Robert gets some help implementing that.
- What is your evaluation of the potential usefulness of the library?
I think this would be very useful to me - it is one of those things that is often done ad-hoc but is much better with all the operators and with space efficiency guarantees.
- Did you try to use the library? With what compiler? Did you have any problems?
No - I see how it should work and trust that it does. ;-)
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Followed the discussion and read the documentation closely. Wrote a bit more than I expected to.
- Are you knowledgeable about the problem domain?
Yep, I've been guilty of ad-hoc implementations, and I once helped maintain something similar. As other people mentioned, I think the author should consider the interactions with other libraries such as Probability and Units. I'm not saying that I see any issues, but there might be synergy. Gordon

From: Gordon Woodhull
My view is that you just can't actually use any of the built-in operators, but the same operations exist if you figure in a little epsilon. Note that although a very small number like 1e-10 works most of the time, epsilon should really be proportional to the size of the arguments.
The epsilon solution has already been proposed, but as I understand this (correct me if I'm wrong) it wouldn't work either: > From: Zach Laine > Yet another use case would be "close enough is good enough". If the > bounds are within a user-defined epsilon of either boundary. If I understand correctly, this does not solve the problem either. Let's assume you have two values: x and y, where x = y + eps (eps being the user-defined margin of error). One comparison of x and y would indicate their equality (the difference is not greater than eps), while another one might not if x got truncated in the meantime (and y didn't).
The final version should have sizeof tests, which should pass on the major compilers.
Sounds like a good idea.
I would like to see the space costs spelled out in each section. I didn't have any trouble understanding that anything you specify as a template parameter is free, anything that is changeable at runtime is going to cost exactly what you'd expect, but sometimes it is nice to be reassured.
If I write everywhere that a feature implies no space cost, everyone will take this for granted and may be surprised when it turns out that on their compiler it is much worse than expected. The docs already say: "The implementation takes advantage of potential capability of the compiler to perform EBO (the Empty Base-class Optimization), so for example the following expression should be true: sizeof( bounded_int<int, 0, 100>::type ) == sizeof( int ) However, lack of EBO capability may cause constrained objects to have significantly larger size than the corresponding underlying value types." Thanks for your time and best regards, Robert
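A minimal sketch of the sizeof test agreed on above (BOOST_STATIC_ASSERT is standard Boost; the bounded_int typedef, its header and its namespace are assumed from the library's documentation). Such a check is expected to hold only on compilers that perform EBO here, e.g. the GCC figures quoted earlier; elsewhere it fails to compile, which is the point of having it as a test.

#include <boost/static_assert.hpp>
#include <boost/constrained_value.hpp>   // assumed header name

BOOST_STATIC_ASSERT(
    sizeof(boost::constrained_value::bounded_int<int, 0, 100>::type) == sizeof(int));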

On Sat, Dec 6, 2008 at 6:30 AM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
From: Gordon Woodhull
My view is that you just can't actually use any of the built- in operators, but the same operations exist if you figure in a little epsilon. Note that although a very small number like 1e-10 works most of the time, epsilon should really be proportional to the size of the arguments.
The epsilon solution has already been proposed, but as I understand this (correct me if I'm wrong) it wouldn't work either:
> From: Zach Laine
> Yet another use case would be "close enough is good enough". If the > bounds are within a user-defined epsilon of either boundary.
If I understand correctly, this does not solve the problem either. Let's assume you have two values: x and y, where x = y + eps (eps being the user-defined margin of error). One comparison of x and y would indicate their equality (the difference is not greater than eps), while another one might not if x got truncated in the meantime (and y didn't).
I tried to suggest a way in which the library can deal with this here: http://tinyurl.com/6hlb8o Do you find problems with that strategy? Stjepan

From: Stjepan Rajko
I tried to suggest a way in which the library can deal with this here: http://tinyurl.com/6hlb8o
Do you find problems with that strategy?
I've seen your post. Sorry for no reply yet, but I need some time to think about the idea to understand it well enough. ;-) Best regards, Robert

On Sun, Dec 7, 2008 at 10:49 AM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
From: Stjepan Rajko
I tried to suggest a way in which the library can deal with this here: http://tinyurl.com/6hlb8o
Do you find problems with that strategy?
I've seen your post. Sorry for no reply yet, but I need some time to think about the idea to understand it well enough. ;-)
Yeah, in retrospect I realized that I wasn't describing my assumptions / starting points very well. My reply to Gordon's post offers some additional thoughts but I think also falls short of being very clear.

I think the fundamental shift is the separation between the invariant (which is what you would like the library to guarantee) and a test (which is a test related to the invariant). The current design / documentation roll both of these into a single entity - the constraint. What I am proposing requires that these two be separated, and that the only requirement be that a passing test guarantees the invariant (but a failing test need not imply that the invariant is not satisfied). The test is what is implemented (just like the current constraint), but the invariant is only documented/guaranteed. Hence, a test that always fails would be a legal test no matter what the invariant is (just like a policy that always throws is a valid policy no matter what the invariant is).

Of course, the library is the most useful when the test checks for the invariant exactly, but in cases like floating point comparison, the invariant is somewhat of a moving target and a perfect test is difficult. Allowing imperfect tests (that still guarantee the invariant) provides a way to deal with such cases. At the same time, your library is still maintaining the same guarantee, because the test is providing it for you. This is just like the policy providing the same guarantee in cases where the test fails (but separating the invariant from the test will require you to make a choice here - should the policy guarantee the invariant only, or should it also guarantee a passing test?)

Like I said, the most useful scenario is when test <==> invariant. This is what you currently have and I think that should remain the focus of the library. As far as what could be done, I am only suggesting you document and discuss what happens in the case where only test ==> invariant holds.

As my own understanding of this idea has evolved quite a bit in the past couple of days, my posts on the subject are probably not very consistent - but hopefully the above description makes them a bit more digestible and you can extrapolate something that you think might work for the library (or find a reason why the library shouldn't support it). Best, Stjepan

a passing test guarantees the invariant (but a failing test need not imply that the invariant is not satisfied). The test is what is implemented (just like the current constraint), but the invariant is only documented/guaranteed.
Hear, hear! In this case, the invariant is what you see in the code but it gets coarsened by the test to account for error.

Robert wrote:
The epsilon solution has already been proposed, but as I understand this (correct me if I'm wrong) it wouldn't work either:
From: Zach Laine
Yet another use case would be "close enough is good enough". If the bounds are within a user-defined epsilon of either boundary.
If I understand correctly, this does not solve the problem either. Let's assume you have two values: x and y, where x = y + eps (eps being the user-defined margin of error). One comparison of x and y would indicate their equality (the difference is not greater than eps), while another one might not if x got truncated in the meantime (and y didn't).
Thanks, I understand the problem now. I think we are all agreed that the library is orthogonal to any concerns about floating point and nothing needs to be changed in the code. IMO the warning should be toned down in the documentation. I wish the definition that Stjepan suggests were possible, but I don't see how a test can be designed that only switches from unsatisfied to satisfied. Boost.Test has a good start on float predicates http://tinyurl.com/57u2sf - but no inequality and they require epsilon to be passed in or stored. I'll see if I can adapt them before Constrained Value is finalized... might be opening a can of worms, and any advice would be appreciated.
If I write everywhere that a feature implies no space cost, everyone will take this for granted and may be surprised when it turns out that on their compiler it is much worse than expected.
In that case, I would like it if the documentation said a feature "should not" take any extra space, linking to the explanation. In turn, the explanation can mention the sizeof test and link to the results so that people can find out quickly whether their platform will cooperate. I know it's a drag, but I think the features which do take extra space should mention that too. Also it might be worthwhile to point out that shared runtime mutable constraints with no overhead per constrained object are possible, using static members. (Or maybe that's too weird.) I might implement epsilon this way, as my impression is that numeric_limits<>::epsilon() is not useful for this purpose. Gordon

I wrote:
Also it might be worthwhile to point out that shared runtime mutable constraints with no overhead per constrained object are possible, using static members. (Or maybe that's too weird.) I might implement epsilon this way, as my impression is that numeric_limits<>::epsilon() is not useful for this purpose.
Of course I won't use this technique but something more like what's described in "Compile-time fixed bounds" - don't know what I was thinking. Runtime modification of epsilon is just silly.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gordon Woodhull Sent: 06 December 2008 23:51 To: boost@lists.boost.org Subject: Re: [boost] [review][constrained_value] Review of Constrained Value Library begins today
I wrote:
Also it might be worthwhile to point out that shared runtime mutable constraints with no overhead per constrained object are possible, using static members. (Or maybe that's too weird.) I might implement epsilon this way, as my impression is that numeric_limits<>::epsilon() is not useful for this purpose.
Of course I won't use this technique but something more like what's described in "Compile-time fixed bounds" - don't know what I was thinking. Runtime modification of epsilon is just silly.
I'd just like to point out that there are more than the 'near-epsilon computational noise' reasons, discussed so far, why it is useful to be able to make 'fuzzier' floating-point comparisons. There are many computations that involve iteration - but for speed only to a user-chosen precision - which may be *much* more than epsilon for the floating-point type. Or even physical measurement uncertainty, or both. This is why BOOST_CHECK_CLOSE has a parameter for 'close enough' - and it has proved invaluable. There are times when fixing this either at compile time or at run-time might be most useful. Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Replying to a day's messages in my own order. :-) On Dec 8, 2008, at 3:14 PM, Paul A. Bristow wrote:
Gordon wrote
Also it might be worthwhile to point out that shared runtime mutable constraints with no overhead per constrained object are possible, using static members. (Or maybe that's too weird.) I might implement epsilon this way, as my impression is that numeric_limits<>::epsilon() is not useful for this purpose.
Of course I won't use this technique but something more like what's described in "Compile-time fixed bounds" - don't know what I was thinking. Runtime modification of epsilon is just silly.
I'd just like to point out that there are more than the 'near-epsilon computational noise' reasons, discussed so far, why it is useful to be able to make 'fuzzier' floating-point comparisons.
Yes, I am now convinced that all three use cases are valid in different situations - there are times when you would want an epsilon paired with each value, there are times when you want a class of floats which are associated with a single runtime epsilon, and there are times when epsilon can be chosen at compile time with a policy. My point earlier was simply that the Boost.Test predicates won't work out of the box because they require epsilon as a runtime argument - epsilon needs to be built into the predicate in one of the three ways in order to make it STL compatible. It is interesting that size of predicates is not usually an issue because they are passed to algorithms or bigger containers - since constrained_value is a container of one suddenly it matters a lot. Robert wrote:
If an "exact floating point" type could be provided (out of scope of this library), being a wrapper for float/double and making sure that its underlying value is always truncated, you could perform comparisons (and all the other operations) that are repeatable, without the possibility that a comparison that once succeeded will later fail. Does it sound sensible?
I consider this a quick-and-dirty solution which would probably work for a lot of situations. It does sound like it would be consistent; however you still have the regular-old problem with floats, that they're almost never equal except once in a blue moon. I hope to find a way to implement it without assembly. Kim Barrett wrote:
inline double exact(double x) { struct { volatile double x; } xx = { x }; return xx.x; }
The idea is to force the value to make a round trip through a memory location of the "correct" size. The use of volatile should prevent the compiler from optimizing away the trip through memory.
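A hedged sketch of how the round-trip trick above might be plugged into a comparison, so both operands pass through a stored double rather than an extended-precision register before being compared; how reliably this works is compiler- and platform-dependent, which is exactly the open question here.

inline double exact(double x)            // Kim's helper, repeated for completeness
{
    struct { volatile double x; } xx = { x };
    return xx.x;
}

struct less_truncated
{
    bool operator()(double a, double b) const
    {
        return exact(a) < exact(b);      // compare the truncated values only
    }
};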
Interesting! I still don't trust the compiler not to optimize it away, but it's definitely worth a try. Robert wrote:
The "delta" (difference between extended and truncated value) may have a very big value for big numbers and very small for small ones, so epsilon should rather be scaled according to the magnitude of compared numbers.
The Knuth method implemented by Boost.Test multiplies epsilon by each of the values and compares the difference in the values with the result, so avoiding the scale problem. Robert wrote:
Thorsten wrote:
I totally disagree. People have to deal with floats anyway. That is a seperate issue. The advice should be removed IMO, and bounded_float provided.
It should be provided, but Boost should first include some set of mechanisms to deal with the FP issues. They are too general to be implemented within this library and they are not tightly coupled with the concept of constrained types. I see this as an analogy to arithmetic overflows prevention, which is also too general and too orthogonal to this library.
Yes, floating point predicates should be a separate library, and the floating point FUD should be removed from the documentation as well. My yes vote is still conditional on your cooperation with us on letting floating points work, because the library would not be useful to me otherwise. I am now regretting that I didn't look at the code, because I didn't know that there were assertions testing the invariants in a different way from the predicates. Is this only in the bounded part of the library? I thought the value was tested after every change using the predicate, and then called the error policy if the predicate failed. That's the way I think it should be - you don't need to add any extra invariant value to that, it's just perfect. Like Stjepan said:
As far as "if the test guarantees...the library guarantees...", it should be no more complicated to understand that "if X is thread-safe then something<X> is thread-safe", or something similar regarding exception safety.
This is the magic of C++, that templates can be used in unforeseen ways because they take on the qualities of their arguments. Generally I don't think a library should assert on any user input - even an invalid/inconsistent predicate! I figured this was the point of having an error policy - everyone has their own idea how they want to handle errors. Assertions are forbidden in a lot of corporate environments. <joke>Perhaps there should be a predicate consistency policy.</joke> I will look at the code tomorrow.

Two last points:
1. I also like the monitored values use case and hope that you will take the time to consider it before submitting a "final" version. (Libraries are never truly finished.)
2. You don't have to worry about NaN - users can choose a predicate that's appropriate for their application. Personally I would always use a predicate that consistently rejects NaN, because I'd want the error pointed out to me ASAP. But that should be implemented as a separate check that is combined in, e.g. using std::logical_and<> (if that hasn't been superseded by something in Boost).

I guess we have to wait for decltype to be able to use lambda expressions as predicates here? Gordon
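A hedged sketch of the kind of combined predicate described in point 2 above: an ordinary bounds test with a NaN rejection folded in (the name and the bounds are made up; any callable returning bool can serve as a constraint).

struct within_0_1_and_not_nan
{
    bool operator()(double x) const
    {
        return x == x                // false only for NaN, so NaN is always rejected
            && 0.0 <= x && x <= 1.0; // the actual bounds test
    }
};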

On Monday 08 December 2008 22:26:40 Gordon Woodhull wrote:
Yes, floating point predicates should be a separate library, and the  floating point FUD should be removed from the documentation as well.
The points made about floating point are not FUD. They are facts of life for people working with high precision floating point computations; in fact, a lot of such computations go through severe contortions to avoid/mitigate said problems without compromising performance too much. Just as an example, try summing an array of 1000 normally distributed doubles naively and with atlas (or any other optimized BLAS); you will see that the actual result (not the computation time) differs because the optimized BLAS takes into account the vagaries of floating point computation. Given that there are so many facets of floating point computations that are unclear even to those on this list, providing a library with a lot of built-in assumptions about epsilons and the like would be a support headache for a long time. It is very easy to be misled about the effects of floating point computations on commodity hardware. Regards, Ravi PS: This discussion reminds me very much about the floating point serialization discussion a few weeks back. Working with floating points requires deep domain expertise in the general case; the best solution, in my humble opinion, is to simply wrap the work of the HPC community rather than reinvent the (very complicated) wheel.

From: Gordon Woodhull
My yes vote is still conditional on your cooperation with us on letting floating points work, because the library would not be useful to me otherwise.
Technically, there's nothing in the code that would disallow the use of floats, only that there is the warning in the docs. I have already stated (though maybe not clearly enough, so I will try again now ;-), that I will explain possible issues with floats and make the warning sound less categorical. Is this what you expect?
I am now regretting that I didn't look at the code, because I didn't know that there were assertions testing the invariants in a different way from the predicates.
The assertions test the invariant exactly the same way, they use the predicate.
Is this only in the bounded part of the library?
No, the constrained class contains those assertions.
I thought the value was tested after every change using the predicate, and then called the error policy if the predicate failed.
Value should be changed _after_ calling the predicate to ensure the strong exception guarantee.
Generally I don't think a library should assert on any user input - even an invalid/inconsistent predicate!
If the user of a library breaks the contract, then why not? I'd rather prefer to see a failed assertion to know as soon as possible that I accidentally used the library in an improper way.
I figured this was the point of having an error policy - everyone has their own idea how they want to handle errors.
Error policy and the assert are two different things. Error policy guarantees the invariant, the assertion verifies it (and thus verifies the contract). Assigning an invalid value is a different kind of error than breaking the contract.
Assertions are forbidden in a lot of corporate environments.
Isn't it what BOOST_DISABLE_ASSERTS is for if you want to prevent a Boost library from using asserts?
I guess we have to wait for decltype to be able to use lambda expressions as predicates here?
Why? There are already examples with lambda predicates in the docs. Best regards, Robert

On Dec 9, 2008, at 7:28 PM, Robert Kawulak wrote:
From: Gordon Woodhull
My yes vote is still conditional on your cooperation with us on letting floating points work, because the library would not be useful to me otherwise.
Technically, there's nothing in the code that would disallow the use of floats, only that there is the warning in the docs. I have already stated (though maybe not clearly enough, so I will try again now ;-), that I will explain possible issues with floats and make the warning sound less categorical. Is this what you expect?
Yes, that is exactly what I'm looking for. As many have pointed out, maybe the Fear Uncertainty and Doubt are for good reasons, but this could be worded more gently so that beginning users don't get the impression that they shouldn't be using floats at all. However, there might be one problem with the code:
The assertions test the invariant exactly the same way, they use the predicate.
Is this only in the bounded part of the library?
No, the constrained class contains those assertions.
I've looked at the code now (it's very clear!), and I understand what the assertions do: they double-check that the error handler is fulfilling the contract of not returning if the constraint fails. I think this is a valuable feature in debug mode but would not be wanted 1) in performance critical apps (the extra test matters) 2) if we've established that there's no way a floating point predicate is always going to provide consistent results. (I know you're laying your hopes on consistent truncation.) Of course, if you allow the asserts to be disabled by macro or policy, then you're also allowing the monitored values use case. I understand that you don't want to support this (certainly the nomenclature is wrong), but it would be nice if you would allow people to experiment. The error policy could somehow declare that it doesn't want to be double-checked, e.g. by deriving from please_dont_double_check_me; there doesn't need to be a separate parameter to constrained_value. BTW, I find the ignore example confusing because it's checking whether the old value still satisfies the constraint. (Doesn't it know this by now?) To be honest, I had to compile it and say "huh?" to figure this out. This results in three invocations of the predicate on a successful assign.
Value should be changed _after_ calling the predicate to ensure strong excepiton guarantee.
Duh, right. Fast fingers.
I figured this was the point of having an error policy - everyone has their own idea how they want to handle errors.
Error policy and the assert are two different things. Error policy guarantees the invariant, the assertion verifies it (and thus verifies the contract). Assigning an invalid value is a different kind of error than breaking the contract.
I hope that you will consider loosening the contract in a later version to allow more use cases. Gordon
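A hedged sketch of the opt-out tag Gordon suggested above (all names are hypothetical): a policy signals that it does not want to be double-checked by deriving from an empty marker type, which the library could detect with is_base_of, so no extra template parameter is needed.

#include <boost/type_traits/is_base_of.hpp>

struct please_dont_double_check_me { };          // empty marker type

struct monitoring_policy : please_dont_double_check_me
{
    template <typename Value, typename Constraint>
    void operator()(const Value &, const Value &, const Constraint &) const
    {
        // deliberately lets "invalid" values through: the monitored-values use case
    }
};

// Inside the library, the debug assertion could then be guarded by something like:
//   if (!boost::is_base_of<please_dont_double_check_me, ErrorPolicy>::value) { assert( ... ); }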

From: Gordon Woodhull
I've looked at the code now (it's very clear!), and I understand what the assertions do: they double-check that the error handler is fulfilling the contract of not returning if the constraint fails.
Right.
I think this is a valuable feature in debug mode but would not be wanted 1) in performance critical apps (the extra test matters)
I thought in that case one would turn assertions off globally.
2) if we've established that there's no way a floating point predicate is always going to provide consistent results. (I know you're laying your hopes on consistent truncation.)
Right, but I will try to elegantly solve it for this case too. ;-)
Of course, if you allow the asserts to be disabled by macro or policy, then you're also allowing the monitored values use case. I understand that you don't want to support this (certainly the nomenclature is wrong), but it would be nice if you would allow people to experiment.
Got it.
BTW, I find the ignore example confusing because it's checking whether the old value still satisfies the constraint. (Doesn't it know this by now?) To be honest, I had to compile it and say "huh?" to figure this out.
The explanation in the tutorial covers this, is it not clear enough and should be improved?
This results in three invocations of the predicate on a successful assign.
Yes, one of them being the assertion. The policy in the example is protective and tries to prevent construction of invalid object even in non-debug mode. It could be empty and then this situation would be handled by the assertion, leaving only one predicate call in release mode. Best regards, Robert

On Sat, Dec 6, 2008 at 4:07 PM, Gordon Woodhull <gordon@woodhull.com> wrote:
Robert wrote:
The epsilon solution has already been proposed, but as I understand this (correct me if I'm wrong) it wouldn't work either:
> From: Zach Laine
> Yet another use case would be "close enough is good enough". If the > bounds are within a user-defined epsilon of either boundary.
If I understand correctly, this does not solve the problem either. Let's assume you have two values: x and y, where x = y + eps (eps being the user-defined margin of error). One comparison of x and y would indicate their equality (the difference is not greater than eps), while another one might not if x got truncated in the meantime (and y didn't).
Thanks, I understand the problem now.
I think we are all agreed that the library is orthogonal to any concerns about floating point and nothing needs to be changed in the code. IMO the warning should be toned down in the documentation. I wish the definition that Stjepan suggests were possible, but I don't see how a test can be designed that only only switches from unsatisfied to satisfied.
The epsilon is what makes the difference. Suppose that the invariant condition that we want to enforce is:

x < y

The problem is that x (and perhaps y, but let's ignore that for simplicity) can at a later point in time go up or down by some dx as a result of truncation. If the condition function tests for x < y, the following things can happen when testing before truncation (I might have messed up some < or <= in there):

1. if x + dx < y, then the condition passes, and it will always pass even after truncation
2. if y <= x - dx, then the condition fails, and it will always fail even after truncation
3. if x < y <= x + dx, then the condition passes, but after truncation it can fail (*this is the problem*)
4. if x - dx < y <= x, then the condition will fail, but after truncation it might pass (this is unfortunate, but does not break the invariant - in any case, it can trigger the policy and the policy can either throw or force truncation and retest or whatever is appropriate).

If we keep the invariant at x < y, but the condition actually tests for x + epsilon < y where epsilon >= delta, then you have x + epsilon < y ==> x < y (a passing test guarantees the invariant), as well as x + epsilon < y ==> x + dx < y (a passing condition test guarantees that the validity of the *invariant* won't change). Sure, the passing test does not guarantee that the results of the *test* don't change (the problem pointed out in the quoted text at the beginning), but we don't care about that - we just care that the passed test guarantees that the desired *invariant* does not change. In effect, we are throwing out case 3 above at the expense of expanding the interval in which case 4 (a much more acceptable case) occurs.

Another way of describing this would be to say that the library should not necessarily require that the condition test passes if and only if the invariant is satisfied - it should only require that the test fails if the invariant is not satisfied (but if the invariant is satisfied, the test is allowed to fail). Stjepan
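A concrete sketch of the scheme above for a single upper bound y: the documented invariant is x < y, but the implemented test is the stricter x + eps < y, so a later truncation of x by at most eps cannot invalidate the invariant (eps is an application-chosen margin, not something the library provides).

struct below_with_margin
{
    double upper;
    double eps;

    below_with_margin(double u, double e) : upper(u), eps(e) { }

    bool operator()(double x) const
    {
        return x + eps < upper;   // passing this stricter test guarantees x < upper, so case 3 cannot occur
    }
};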

On Dec 6, 2008, at 7:17 PM, Stjepan Rajko wrote:
Another way of describing this would be to say that the library should not necessarily require that the condition test passes if and only if the invariant is satisfied - it should only require that the test fails if the invariant is not satisfied (but if the invariant is satisfied, the test is allowed to fail).
That makes sense to me. I don't know if this is a strict weak ordering as the documentation now requires, but I think it will work for 99.99999999999999% of cases. ;) My intention (when I find some time after the end of the semester) is to use the Boost.Test predicates to define less_eps, greater_eps, etc. I will change the epsilon input into a policy with a reasonable default, although the documentation claims that the appropriate epsilon is always application-dependent. I suppose there should also be runtime changeable versions. IIUC then all the inequality predicates are just combinations of regular < and > with the closeness/equality check, where the latter trumps the former. Intuitively (the way I think ;) this means that if it's too close it "should be considered equal" not just because of float rounding but because there is always error. I also want to find out if there is a way to reliably truncate values on purpose before any comparison - would that not produce consistent results? I don't think I am willing to maintain assembly code for a lot of platforms however. I am no float expert but I always need this. Would appreciate any help, especially poking holes in the design. Robert, it sounds like you're willing to explain the problem better in the documentation and hopefully we'll have some solutions for people too. Gordon

From: Stjepan Rajko
The epsilon is what makes the difference. Suppose that the invariant condition that we want to enforce is:
x < y
The problem is that x (and perhaps y, but let's ignore that for simplicity) can at a later point in time go up or down by some dx as a result of truncation.
If the condition function tests for x < y, the following things can happen when testing before truncation (I might have messed up some < or <= in there):
1. if x + dx < y, then the condition passes, and it will always pass even after truncation
2. if y <= x - dx, then the condition fails, and it will always fail even after truncation
3. if x < y <= x + dx, then the condition passes, but after truncation it can fail (*this is the problem*)
4. if x - dx < y <= x, then the condition will fail, but after truncation it might pass (this is unfortunate, but does not break the invariant - in any case, it can trigger the policy and the policy can either throw or force truncation and retest or whatever is appropriate).
If we keep the invariant at x < y, but the condition actually tests for x + epsilon < y where epsilon >= delta, then you have x + epsilon < y ==> x < y (a passing test guarantees the invariant), as well as x + epsilon < y ==> x + dx < y (a passing condition test guarantees that the validity of the *invariant* won't change). Sure, the passing test does not guarantee that the results of the *test* don't change (the problem pointed out in the quoted text at the beginning), but we don't care about that - we just care that the passed test guarantees that the desired *invariant* does not change. In effect, we are throwing out case 3 above at the expense of expanding the interval in which case 4 (a much more acceptable case) occurs.
Another way of describing this would be to say that the library should not necessarily require that the condition test passes if and only if the invariant is satisfied - it should only require that the test fails if the invariant is not satisfied (but if the invariant is satisfied, the test is allowed to fail).
So what's the conclusion in the context of separation of invariant and the test? That we may end up having bounded float with value a bit greater than the upper bound, but that's fine, because the difference will never exceed some user-defined epsilon? Is the epsilon constant? The "delta" (difference between extended and truncated value) may have a very big value for big numbers and very small for small ones, so epsilon should rather be scaled according to the magnitude of compared numbers. Did I get things right so far?

Then why complicate things with epsilon at all? If we allow for values outside of the bounds but only a "delta" away, we may simply stay with the "<" comparison. Even better, I would love to see a solution to force truncation of a value, so the comparisons are always performed on truncated values and we may stay with the "test == invariant" approach.

And another issue is NaN -- it breaks the strict weak ordering, so it may or may not be allowed as a valid value depending on the direction of comparison ("<" or ">"). I guess NaN should not be an allowed value in any case, but I have no idea yet how to enforce this without a float-specific implementation of within_bounds. Regards, Robert

On Mon, Dec 8, 2008 at 9:12 AM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
So what's the conclusion in the context of separation of invariant and the test? That we may end up having bounded float with value a bit greater than the upper bound, but that's fine, because the difference will never exceed some user-defined epsilon? Is the epsilon constant? The "delta" (difference between extended and truncated value) may have a very big value for big numbers and very small for small ones, so epsilon should rather be scaled according to the magnitude of compared numbers.
I know little about floats and what the values of the deltas are and how they depend on the value of the float, but: The invariant is still: x < y The exact outcome depends on the policy. If the policy forces truncation and retests, there is no epsilon in the outcome either (it only exists in the test).
Did I get things right so far?
I think so.
Then why complicate things with epsilon at all? If we allow for values outside of the bounds but only a "delta" away, we may simply stay with the "<" comparison. Even better, I would love to see a solution to force truncation of a value, so the comparisons are always performed on truncated values and we may stay with the "test == invariant" approach.
If you are sticking with test == invariant just for the sake of test == invariant (rather than for a lack of time to investigate and document the other case), I think you are selling your library way shorter than you could. And like I've pointed out before, there is no reason why you shouldn't make this a focus of the library and treat it specially (the test==invariant case provides some really nice benefits, like you can test for the invariant exactly and assert after the policy check). You can reserve the word "constraint" for when constraint=test=invariant. You can even do this:

constrained_value<T, constraint, policy>

expands to

what_is_now_constrained_value_minus_the_assert<T, constraint, always_assert_after<policy> >
And another issue is NaN -- it breaks the strict weak ordering, so it may or may not be allowed as a valid value depending on the direction of comparison ("<" or ">"). I guess NaN should not be an allowed value in any case, but I have no idea yet how to enforce this without a float-specific implementation of within_bounds.
I haven't taken a close look at bounded values, I'm just thinking of them as a specific case of constrained values. What is your invariant here? That ((min <= value) && (value <= max)) or that !((value < min) || (max < value))? Why do you need a strict weak ordering for either one? I believe NaN will fail the first test but pass the second one - if that is true, why is NaN a problem if you use the first test? (sorry if I'm missing something, like I said I'm not well versed in the details of floats) Best, Stjepan

From: Stjepan Rajko
So what's the conclusion in the context of separation of invariant and the test? That we may end up having bounded float with value a bit greater than the upper bound, but that's fine, because the difference will never exceed some user-defined epsilon? Is the epsilon constant? The "delta" (difference between extended and truncated value) may have a very big value for big numbers and very small for small ones, so epsilon should rather be scaled according to the magnitude of compared numbers.
I know little about floats and what the values of the deltas are and how they depend on the value of the float, but:
The invariant is still: x < y
Are we still talking about the case when test ==> invariant? I'm confused -- if we allow for a value (x) being a "delta" bigger than the upper bound (y), then why the invariant should be "x < y" rather than "x - delta < y"?
If you think about it, you are already separating the test from the invariant in your advanced examples. Think about the object that uses the library to keep track of its min/max. The test checks for whether you have crossed the previous min/max. Sure, you could say the invariant is the same: "the object is between the min and max present in the constraint object". But really, what kind of guarantee is this? If I need to look at the constraint to figure out what I'm being guaranteed, I might as well look at the value itself and see where it stands. I would consider this as "no invariant". There, you already have docs for this case :-)
I would say the invariant is still there, but it is "inverted" -- in typical bounded objects it is: "the value always lays within the bounds", while here it is "the bounds always contain the value". When the value is going to be modified, the error policy ensures that the invariant is still upheld by modifying the constraint (in contrast to the more common case when it would modify the value). The test here is always equal to the invariant, so it doesn't seem to be a representative example for test ==> invariant concept.
If you are sticking with test == invariant just for the sake of test == invariant (rather than a lack of time to investigate and document the other case), I think you are settling to sell your library for way shorter than you can.
Please forgive my resistance, but I stick with test == invariant because I believe that as a person responsible for a library I have to think 100 times and be really convinced before I add/change anything. I wouldn't have so many doubts if I saw that there are useful and general applications that would outweigh the added complexity (I hope you agree that it will be more difficult to explain the test ==> invariant approach to the users?) and the extra work needed. So far you've shown one application that deals with the FP issue using epsilon, but we don't know yet if this approach leads to a (best or any) solution of the problem. Are there any other use cases that I should consider? Maybe it's best to leave it as is for now, and when you test whether the approach is really sound and useful, we could make the necessary changes (before the first official release)?
And another issue is NaN -- it breaks the strict weak ordering, so it may or may not be allowed as a valid value depending on the direction of comparison ("<" or ">"). I guess NaN should not be an allowed value in any case, but I have no idea yet how to enforce this without a float-specific implementation of within_bounds.
I haven't taken a close look at bounded values, I'm just thinking of them as a specific case of constrained values. What is your invariant here? That ((min <= value) && (value <= max)) or that !((value < min) || (max < value))? Why do you need a strict weak ordering for either one? I believe NaN will fail the first test but pass the second one - if that is true, why is NaN a problem if you use the first test? (sorry if I'm missing something, like I said I'm not well versed in the details of floats)
The problem with NaN is that any comparison with this value yields false. So:

NaN < x == false
NaN > x == false
NaN <= x == false
... and so on.

This violates the rules of strict weak ordering, which guarantee that we can perform tests for bounds inclusion without surprises. For example, when x == NaN, the following "obvious" statement may be false: (l < x && x < u) ==> (l < u). Maybe the requirement could be loosened if I find a generic way to implement the bounds inclusion test which always returns false for NaN. Currently, to test x for inclusion in a closed range [lower, upper], we have:

!(x < lower) && !(upper < x)

While for an open range (lower, upper):

(lower < x) && (x < upper)

Now, if we try to check if NaN is within the closed range, we get true, while for the open range we get false. Therefore NaN belongs to the subset (a closed range) but does not belong to a superset (a larger open range containing it), which is obviously a contradiction. I'm not sure if such properties of NaN could lead to a broken invariant, but surely it would be good to avoid the strange results. Best regards, Robert
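The inconsistency can be shown in a few lines, using the two inclusion tests above verbatim (l = lower, u = upper, x = NaN):

#include <iostream>
#include <limits>

int main()
{
    double l = 0.0, u = 1.0;
    double x = std::numeric_limits<double>::quiet_NaN();

    bool in_closed = !(x < l) && !(u < x);   // true: every comparison involving NaN is false
    bool in_open   =  (l < x) &&  (x < u);   // false: every comparison involving NaN is false

    std::cout << in_closed << ' ' << in_open << '\n';   // prints "1 0"
}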

On Mon, Dec 8, 2008 at 4:39 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
From: Stjepan Rajko
The invariant is still: x < y
Are we still talking about the case when test ==> invariant? I'm confused -- if we allow for a value (x) being a "delta" bigger than the upper bound (y), then why should the invariant be "x < y" rather than "x - delta < y"?
OK, if you make your test be "x < y" then you can guarantee "x - delta < y". But the user doesn't want to deal with the deltas - that is the whole issue here. So, you make your test "x + epsilon < y" and you can guarantee "x < y".
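For illustration, a minimal sketch of the epsilon-margin test being described here (the predicate name and the use of a stateful constraint are assumptions made for the sketch, not part of the reviewed library):

  // The *test* demands a margin of eps below the upper bound, so the documented
  // *invariant* (value < upper) keeps holding even if the stored value later
  // drifts by less than eps.
  struct below_upper_with_margin
  {
      double upper;
      double eps;
      below_upper_with_margin(double u, double e = 1e-9) : upper(u), eps(e) {}
      bool operator()(double x) const { return x + eps < upper; }
  };
  // Assuming constrained<double, below_upper_with_margin> accepts such a
  // stateful predicate (as within_bounds does), the test is stricter than the
  // invariant it guarantees.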
If you think about it, you are already separating the test from the invariant in your advanced examples. Think about the object that uses the library to keep track of its min/max. The test checks for whether you have crossed the previous min/max. Sure, you could say the invariant is the same: "the object is between the min and max present in the constraint object". But really, what kind of guarantee is this? If I need to look at the constraint to figure out what I'm being guaranteed, I might as well look at the value itself and see where it stands. I would consider this as "no invariant". There, you already have docs for this case :-)
I would say the invariant is still there, but it is "inverted" -- in typical bounded objects it is: "the value always lies within the bounds", while here it is "the bounds always contain the value". When the value is going to be modified, the error policy ensures that the invariant is still upheld by modifying the constraint (in contrast to the more common case when it would modify the value). The test here is always equal to the invariant, so it doesn't seem to be a representative example for the test ==> invariant concept.
That depends on what you claim the invariant to be. If you claim that the invariant is equivalent to the test, then it is not a representative case. If you claim that the invariant is "nothing", then it is a representative case. My point was that (unless accessing the constraint costs less than accessing the value) there is no additional benefit in saying that the invariant is equivalent to the test compared to simply saying there is no invariant. Granted, there is no additional benefit in the other direction either, but that's kind of my point - there is no benefit from the stated invariant.
If you are sticking with test == invariant just for the sake of test == invariant (rather than for a lack of time to investigate and document the other case), I think you are selling your library way short.
Please forgive me my resistance, but I stick with test == invariant because I believe that, as a person responsible for a library, I have to think 100 times and be really convinced before I add/change anything. I wouldn't have so many doubts if I saw that there are useful and general applications that would outweigh the added complexity (I hope you agree that the test ==> invariant approach will be more difficult to explain to the users?) and the extra work needed.
Depends on the user. Fundamentally, all that you would have to communicate in the docs (and assure yourself of) is that "if the test guarantees the invariant, and the policy guarantees the invariant, the library guarantees the invariant". If a user is unclear, they can fall back to your great discussion of the test == invariant case, and stick with that type of use. As far as "if the test guarantees...the library guarantees...", it should be no more complicated to understand than "if X is thread-safe then something<X> is thread-safe", or something similar regarding exception safety.
So far you've shown one application dealing with the FP issue using epsilon, but we don't know yet whether this approach leads to the best (or any) solution of the problem. Are there any other use cases that I should consider?
Hmm.. I think I mentioned more examples than just the FP case (e.g., monitored values with no invariant - your library doesn't have to call it a 'monitored_value' for me to use it as such, but if you require test == invariant I won't because you can break my code, e.g. with an assert whenever you'd like and I can't complain).
Maybe it's best to leave it as is for now, and when you test whether the approach is really sound and useful, we could make the necessary changes (before the first official release)?
I've already voted to accept the library, so I believe that the library is a valuable addition as it stands (modulo conditions, and as far as FP goes, I said I find the exact(value) solution acceptable). As far as what is best, I don't know.
And another issue is NaN -- it breaks the strict weak ordering, so it may or may not be allowed as a valid value depending on the direction of comparison ("<" or ">"). I guess NaN should not be an allowed value in any case, but I have no idea yet how to enforce this without a float-specific implementation of within_bounds.
I haven't taken a close look at bounded values, I'm just thinking of them as a specific case of constrained values. What is your invariant here? That ((min <= value) && (value <= max)) or that !((value < min) || (max < value))? Why do you need a strict weak ordering for either one? I believe NaN will fail the first test but pass the second one - if that is true, why is NaN a problem if you use the first test? (sorry if I'm missing something, like I said I'm not well versed in the details of floats)
The problem with NaN is that any comparison with this value yields false. So:
  NaN < x == false
  NaN > x == false
  NaN <= x == false
  ... and so on.
This violates the rules of strict weak ordering, which guarantee that we can perform tests for bounds inclusion without surprises. For example, when x == NaN, the following "obvious" statement may be false:
(l < x && x < u) ==> (l < u)
Why would I care that l < u? If I'm using a bounded type with lower bound l and upper bound u, presumably I just care that (l < x && x < u).
Maybe the requirement could be loosened if I find a generic way to implement the bounds inclusion test which always returns false for NaN. Currently, to test x for inclusion in a closed range [lower, upper], we have:
!(x < lower) && !(upper < x)
But as the NaN case illustrates, !(x < lower) && !(upper < x) is not the same as (lower <= x) && (x <= upper). If I'm using a bounded type, I would want the latter. In non-numerical settings, and with custom comparisons, the two might have nothing to do with each other. I think bounded_value should *always* use the test (and invariant) compare(lower, x) && compare(x, upper). If you want boundaries excluded, use < for compare. If you want them included, use <=. If the type doesn't offer <=, use "<(a, b) || ==(a, b)".
While for an open range (lower, upper):
(lower < x) && (x < upper)
Now, if we try to check whether NaN is within the closed range, we get true, while for the open range we get false. So NaN can belong to a closed range nested inside a wider open range -- i.e. to the subset but not to the superset -- which is obviously a contradiction. I'm not sure if such properties of NaN could lead to a broken invariant, but surely it would be good to avoid such strange results.
Agreed, but I think using the test/constraint as I suggest above would avoid strange results in more cases (using two completely different types of tests also strikes me as a source of unexpected behavior). And you don't have to worry about NaNs. Unless I'm missing something. Stjepan
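The behaviour both posters describe is easy to check in isolation; this small test is illustrative only and independent of the library:

  #include <iostream>
  #include <limits>

  int main()
  {
      const double lower = 0.0, upper = 1.0;
      const double x = std::numeric_limits<double>::quiet_NaN();

      // Closed-range test: every comparison with NaN is false, so both
      // negations are true and NaN is reported as being inside the range.
      const bool in_closed = !(x < lower) && !(upper < x);   // true

      // Open-range test: both comparisons are false, so NaN is outside.
      const bool in_open = (lower < x) && (x < upper);       // false

      std::cout << in_closed << ' ' << in_open << '\n';      // prints "1 0"
  }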

On Mon, Dec 8, 2008 at 6:22 PM, Stjepan Rajko <stjepan.rajko@gmail.com> wrote:
On Mon, Dec 8, 2008 at 4:39 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
Maybe the requirement could be loosened if I find a generic way to implement the bounds inclusion test which always returns false for NaN. Currently, to test x for inclusion in a closed range [lower, upper], we have:
!(x < lower) && !(upper < x)
But as the NaN case illustrates, !(x < lower) && !(upper < x) is not the same as (lower <= x) && (x <= upper). If I'm using a bounded type, I would want the latter. In non-numerical settings, and with custom comparisons, the two might have nothing to do with each other. I think bounded_value should *always* use the test (and invariant)
compare(lower, x) && compare(x, upper).
If you want boundaries excluded, use < for compare. If you want them included, use <=. If the type doesn't offer <=, use "<(a, b) || ==(a, b)".
OK, now I am getting both hasty and brain-dead. "!<(b,a)", as you have it, is a perfectly fine comparison for many cases. <(a,b) || ==(a,b) might work better for the NaN problem, but not for others. Stjepan

From: Stjepan Rajko
Are we still talking about the case when test ==> invariant? I'm confused -- if we allow for a value (x) being a "delta" bigger than the upper bound (y), then why should the invariant be "x < y" rather than "x - delta < y"?
OK, if you make your test be "x < y" then you can guarantee "x - delta < y". But the user doesn't want to deal with the deltas - that is the whole issue here. So, you make your test "x + epsilon < y" and you can guarantee "x < y".
OK, got it.
So far you've shown one application dealing with the FP issue using epsilon, but we don't know yet whether this approach leads to the best (or any) solution of the problem. Are there any other use cases that I should consider?
Hmm.. I think I mentioned more examples than just the FP case (e.g., monitored values with no invariant
Sorry, I must have had a temporary amnesia. ;-)
This violates the rules of strict weak ordering, which guarantee that we can perform tests for bounds inclusion without surprises. For example, when x == NaN, the following "obvious" statement may be false:
(l < x && x < u) ==> (l < u)
Why would I care that l < u? If I'm using a bounded type with lower bound l and upper bound u, presumably I just care that (l < x && x < u).
I just wanted to show that NaN comparison produces surprising results; there's nothing particularly important about this example.
If you want boundaries excluded, use < for compare. If you want them included, use <=. If the type doesn't offer <=, use "<(a, b) || ==(a, b)".
The problem is that we want to compare the values using the supplied comparison predicate, which represents the "<" relation. This is a simple and generic approach used, for example, in the STL. However, comparison with NaN doesn't quite fit in this model.
And you don't have to worry about NaNs. Unless I'm missing something.
But if I find a way to make the test behave consistently in the presence of NaNs (e.g., always indicating that NaN doesn't belong to a range), then why not. ;-) Best regards, Robert

----- Original Message ----- From: "Jeff Garland" <jeff@crystalclearsoftware.com> To: <boost@lists.boost.org>; <boost-users@lists.boost.org> Sent: Monday, December 01, 2008 1:30 PM Subject: [boost] [review][constrained_value] Review of Constrained Value Library begins today

Hi, what do you think about replacing the typedef by a specific class, so that instead of writing bounded<int>::type v; we write bounded<int> v; The advantage of having a specific class (see below) is that it allows adding specific members that make sense, e.g. change_lower_bound can be added to bounded<int, int, int> but not to bounded_int<int, 0, 100>. The liability is that we need to repeat a lot of constructors. Of course this can be done by the user himself, as constrained is a public class, but in the end what is more important is the user interface. This technique has already been used at least in Boost.MultiIndex and Boost.Flyweight. Best regards, Vicente

================================================

  /// The class of constrained for bounded object types (using within_bounds constraint policy).
  template <
      typename ValueType,
      typename LowerType = ValueType,
      typename UpperType = LowerType,
      typename ErrorPolicy = throw_exception<>,
      typename LowerExclType = boost::mpl::false_,
      typename UpperExclType = LowerExclType,
      typename CompareType = std::less<ValueType>
  >
  struct bounded
      : constrained<
            ValueType,
            within_bounds< LowerType, UpperType, LowerExclType, UpperExclType, CompareType >,
            ErrorPolicy
        >
  {
      /// The basetype-typedef alias.
      typedef constrained<
          ValueType,
          within_bounds< LowerType, UpperType, LowerExclType, UpperExclType, CompareType >,
          ErrorPolicy
      > basetype;

      // add all the constructors and specific functions
      // ...
  };

From: vicente.botet
what do you think about replacing the typedef by a specific class so instead of writing bounded<int>::type v; we write bounded<int> v;
I was already thinking about this, but I'm not sure if this is that good an idea.
The advantage of having a specific class (see below) is that it allows adding specific members that make sense, e.g. change_lower_bound can be added to bounded<int, int, int> but not to bounded_int<int, 0, 100>.
OTOH the classes will have members that make no sense in some cases, e.g. change_bounds_inclusion for clipping (assuming it will derive from bounded, otherwise the other useful functions would have to be duplicated in both classes). Therefore I'd rather stick with the current solution to avoid defining six classes with lots of constructors and many identical members... Regards, Robert

Here is my review for the proposed library:
- What is your evaluation of the design?
I think the design is simple, elegant, and allows a lot of flexibility. The only problem I see is that it doesn't address the frequently requested support for floating point constraints. And with that, the problem is not in the implementation-related aspects of the design, but in what the library sets out as requirements for the behavior of the constraint. I think permitting constraints that can "spontaneously" switch from unsatisfied to satisfied (and documenting what that means in terms of guarantees the library makes) would be a good thing. One thing I want to mention is that there is a slightly higher abstraction that this library almost addresses, what can maybe be called "monitored values" - values where you need to do something when the value changes (or is first constructed). In the case of this library, "something" is checking a constraint and calling a policy if the constraint is violated. In other cases, it might be calling a boost.signal, in another, writing to a log. The author might consider (eventually) altering the design of the library to offer this higher level of abstraction, as he already solves the problem of hooking into a value change reasonably well (so, in addition to constrained and bounded classes, offer a monitored class using which constrained is implemented). This would allow much wider use of the library. (I don't know if there are requirements for a "MonitoredValue" library that are significantly different / in addition to what the current libraries, that I'm not thinking of at the moment)
- What is your evaluation of the implementation?
I looked at the implementation of the constrained class, and found it very satisfactory. Well implemented, well documented code that was a pleasure to review.
- What is your evaluation of the documentation?
Great as a tutorial, and great as class / function reference. The only part I found lacking is the discussion of exact requirements on the underlying value type / constraint (my previous posts elaborate on this). Robert - I think the notion of "spontaneous" changes of the constraint extends beyond the floating point case and could perhaps be used to discuss other "spontaneous" changes, or changes beyond the library's control (like the use_count of a shared_ptr).
- What is your evaluation of the potential usefulness of the library?
Very useful. Beyond the constrained and wrapping built-in types, the documentation suggests some very creative uses of the library, and the flexibility of the library promises a lot more. When it can take advantage of declspec, the usefulness will increase even more. Lambda might kick it up a notch as well (Robert, in my suggestion to provide syntactic sugar for calling an f(value_type&) function, my unstated point was that f could be any callable - including a lambda expression).
- Did you try to use the library? With what compiler? Did you have any problems?
Nope. No problems :-)
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I read through the documentation a few times, followed the discussion on the list, and looked at the implementation of the constrained class.
- Are you knowledgeable about the problem domain?
My experience in using functionality related to the library consists of inserting asserts here and there, and manually wrapping values when they need to wrap. I recall plenty of situations where having a library like this would have been wonderful.
Please state in your review, whether you think the library should be accepted as a Boost library.
I believe the library should be accepted. I found the library quite feature-complete for its domain (in reading the docs, every time I found myself thinking "this library should support x", a few paragraphs down I would find out it does), while preserving simplicity and elegance.

Some of the things I would definitely like to see in the final version are:

* document exact requirements on the value, constraint and policy types, and how they relate to the library's guarantees
* the library should *legally* allow for a reasonable floating point solution (it would be great if it actually provided one, but allowing one would be good enough for me)
* (seconding suggestions by other reviewers): provide tests (including sizeof tests)

Nice work, Robert! Stjepan

On Sat, Dec 6, 2008 at 10:31 AM, Stjepan Rajko <stjepan.rajko@gmail.com> wrote:
One thing I want to mention is that there is a slightly higher abstraction that this library almost addresses, what can maybe be called "monitored values" - values where you need to do something when the value changes (or is first constructed). In the case of this library, "something" is checking a constraint and calling a policy if the constraint is violated. In other cases, it might be calling a boost.signal, in another, writing to a log.
The author might consider (eventually) altering the design of the library to offer this higher level of abstraction, as he already solves the problem of hooking into a value change reasonably well (so, in addition to constrained and bounded classes, offer a monitored class using which constrained is implemented). This would allow much wider use of the library. (I don't know if there are requirements for a "MonitoredValue" library that are significantly different / in addition to what the current libraries, that I'm not thinking of at the moment)
I am rethinking this suggestion. If the library ends up separating the invariant from the test, then the following would provide support for a purely monitored value with the implementation the library already offers:

* "set" the invariant to none (i.e., guarantee nothing additional to the invariant offered by the underlying type)
* use a test that always returns false
* use a policy class that does whatever the task of the monitoring is (e.g., send a boost::signal, write to a log, ...)

I think I like that better than my original suggestion, because you can now also have conditionally monitored values (i.e., you only want to send a signal or log in certain circumstances). This is convincing me even more that the current design of the library is spot-on, with the exception of separating the current "constraint" into separately considered "invariant" and "test" (or whatever terms might fit better). I strongly propose the following structure (a reiteration of what I've already posted):

* there is an invariant, which is only documented
* there is a test which is implemented - a passing test guarantees the invariant holds (and will remain holding until the object is explicitly changed), and a failing test causes the policy to get invoked
* the policy guarantees that the invariant will hold when/if it returns (and can change the test / invariant if it wants to, where a change in the invariant again only occurs in the documentation of the behavior)

The above treatment covers the current behavior of the library, allows dealing with issues such as the floating point case, *and* supports purely monitored values. AFAICT, it requires no changes to the implementation. Stjepan
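A rough sketch of the purely monitored value described above, assuming the library's constrained<Value, Constraint, ErrorPolicy> interface and the error-policy call signature shown later in this thread; always_false and log_change are illustrative names, not library components:

  #include <iostream>
  // (plus the appropriate constrained_value header)

  struct always_false
  {
      template <typename T>
      bool operator()(const T&) const { return false; }   // the test never passes
  };

  struct log_change
  {
      template <typename V, typename C>
      void operator()(V& value, const V& new_value, C&) const
      {
          std::clog << "changing from " << value << " to " << new_value << '\n';
          value = new_value;   // the "error" policy performs the assignment itself
      }
  };

  // A purely monitored int: no invariant, every assignment goes through log_change.
  typedef boost::constrained_value::constrained<int, always_false, log_change>
      monitored_int;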

From: Stjepan Rajko
I think I get your idea. However, it is still a fresh mental model and I'm not confident about all the hidden problems that may be involved. Anyway, here are a few loose comments of mine.
The above treatment covers the current behavior of the library, allows dealing with issues such as the floating point case
Actually, I would see something different as a perfect solution to the FP problem. If an "exact floating point" type could be provided (out of scope of this library), being a wrapper for float/double and making sure that its underlying value is always truncated, you could perform comparisons (and all the other operations) that are repeatable, without the possibility that a comparison that once succeeded will later fail. Does it sound sensible? And the "exact floating point" could be implemented as a monitored value that would truncate the value on assignment... Isn't it getting crazy? :D
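One possible reading of this "exact floating point" idea, sketched with no claim that it fully solves the problem (whether the volatile store really discards extended precision depends on the platform and compiler settings):

  struct exact_double
  {
      exact_double(double v = 0.0) { assign(v); }
      exact_double& operator=(double v) { assign(v); return *this; }
      operator double() const { return value_; }

  private:
      void assign(double v)
      {
          volatile double forced = v;   // force a store, dropping any extra precision
          value_ = forced;
      }
      double value_;
  };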
AFAICT, It requires no changes to the implementation.
Not exactly:
- there are asserts checking for the invariant, which then should be removed,
- given the modified set of concepts and assumptions, the current names in the code do not necessarily fit the purposes (e.g., constraint is not constraint anymore, it is the trigger of the monitor callback, which in turn is the former error policy).

I also think that it should be analysed whether the current design really fits monitored values in an optimal way. The implementation seems optimal for constrained values, which doesn't have to be the case for monitored values. And the other way round -- maybe a monitored value wouldn't be the best choice for the implementation of a constrained value?

I think the idea of monitored values as the generalisation of constrained values seems reasonable and elegant. Unfortunately I doubt I would have the time and resources to transform the Constrained Value library into a Monitored Value library (this library already consumes more of my free time than I have :P). However, I think that if such a library is created in the future, then the Constrained Value library may be re-implemented in terms of it (as an extension).

One more thought -- a monitored value may actually be a case of an even more general idea, something similar to the transparent proxy in .Net, or at least something that could be called a "universal wrapper". This wrapper would allow defining callbacks invoked every time the value is set (as in a monitored value) and read. (I think somebody might have already been discussing this idea with me, but it would have been a long time ago and I can't remember.) Best regards, Robert

On Sun, Dec 7, 2008 at 6:50 PM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
From: Stjepan Rajko
I think I get your idea. However, it is still a fresh mental model and I'm not confident about all the hidden problems that may be involved. Anyway, here are a few loose comments of mine.
The above treatment covers the current behavior of the library, allows dealing with issues such as the floating point case
Actually, I would see something different as a perfect solution to the FP problem. If an "exact floating point" type could be provided (out of scope of this library), being a wrapper for float/double and making sure that its underlying value is always truncated, you could perform comparisons (and all the other operations) that are repeatable, without the possibility that a comparison that once succeeded will later fail. Does it sound sensible?
Yes, and it would fit with what your library currently supports. It seems like a class like that would be uber-useful anyway (considering how much attention problems with floating point comparisons got in this review, I am wondering why I never considered this or saw this considered in the past).
And the "exact floating point" could be implemented as a monitored value that would truncate the value on assignment... Isn't it getting crazy? :D
I find that such craziness often accompanies well designed libraries ;-)
AFAICT, It requires no changes to the implementation.
Not exactly: - there are asserts checking for the invariant, which then should be removed,
Yes, you are right.
- given the modified set of concepts and assumptions, the current names in the code do not necessarily fit the purposes (e.g., constraint is not constraint anymore, it is the trigger of the monitor callback, which in turn is the former error policy).
I agree here as well. Your nomenclature fits your use case very well, whereas a more abstracted nomenclature would probably be more vague in all of the individual scenarios it supported.
I also think that it should be analysed whether the current design really fits monitored values in an optimal way. The implementation seems optimal for constrained values, which doesn't have to be the case for monitored values. And the other way round -- maybe a monitored value wouldn't be the best choice for the implementation of a constrained value?
I am inclined to think that the implementation you have now (minus the asserts and the nomenclature) covers all of the uses fairly well. Existence of in-between use cases (conditionally monitored values, e.g., log whenever the temperature is over 35 degrees Celsius) leads me to believe that neither of {monitored value, conditional value} should be implemented in terms of the other. I think they should be implemented under a common abstraction, and I think your implementation implements that abstraction (again, minus the asserts and the nomenclature).
I think the idea of monitored values as the generalisation of constrained values seems reasonable and elegant. Unfortunately I doubt I would have time and resources to transform Constrained Value library into Monitored Value library (this library already consumes more of my free time than I have :P). However, I think that if such library is created in the future, then Constrained Value library may be re-implemented in terms of it (as an extension).
The nomenclature problems, the need to revamp the documentation, and uncertainties about what lies hidden in the changes are probably reasons enough to justify sticking with what you have, at least for now. I think your library will have a field day with C++0x - in addition to all the other goodies you can take advantage of, you can solve the nomenclature problem elegantly with template typedefs (should you choose to extend the scope of the library).
One more thought -- monitored value may be actually a case of even more general idea, something similar to transparent proxy in .Net, or at least something that could be called a "universal wrapper". This wrapper would allow to define callbacks invoked every time the value is being set (as in monitored value) and get. (I think somebody might have already been discussing this idea with me, but it would be a long time ago and I can't remember.)
Looks like you have your work cut out for you for quite a while :-) For now, I would be plenty happy if you just took out the asserts, or made them optional (with defaulting to asserts, if you wish). That way I can at least start experimenting with your library in a monitored_value context, and let you know how it goes (I have use cases for this). Best, Stjepan

From: Stjepan Rajko
I am inclined to think that the implementation you have now (minus the asserts and the nomenclature) covers all of the uses fairly well. Existence of in-between use cases (conditionally monitored values, e.g., log whenever the temperature is over 35 degrees Celsius) leads me to believe that neither of {monitored value, conditional value} should be implemented in terms of the other. I think they should be implemented under a common abstraction, and I think your implementation implements that abstraction (again, minus the asserts and the nomenclature).
I think the current implementation is not the most suitable one for monitored values. The assignment operator is:

  if( constraint()(v) )
      _value() = v;
  else
      error_handler()(_value(), v, _constraint());

So calling the monitor (error_handler) excludes assignment of the value, unless the monitor performs the assignment by itself. This is a bit clumsy -- e.g., in the case of logging the temperature if it exceeds a threshold, the monitor would not only have to log, but also to assign the value.

I see the implementation of monitored values' assignment a bit differently:

  if( _monitor(_value, new_value) ) // monitor decides whether the value should be assigned
      _value = new_value;           // but does not perform the assignment by itself

Then, conditionally-monitored values extend this by defining the following monitor callback:

  if( _condition(new_value) ) // no need to invoke the monitor
      return true;
  else                        // invoke the monitor
      return _monitor(old_value, new_value, _condition);

And finally, constrained value would be a conditionally-monitored value, where the condition is the constraint and the inner monitor callback is the error policy.
For now, I would be plenty happy if you just took out the asserts, or made them optional (with defaulting to asserts, if you wish). That way I can at least start experimenting with your library in a monitored_value context, and let you know how it goes (I have use cases for this).
By making the invariant asserts optional, do you mean something like wrapping them in a conditional compilation macro (like BOOST_CONSTRAINED_VALUE_NO_INVARIANT_ASSERTS) so they can be turned off globally? I still can't convince myself of the idea of separating the invariant from the test. IMO the guarantee of the invariant is a strong point of the library. Giving up the guarantee not only may make the library more complicated for the users, but it may also lower the value of the library as a debugging device (since there will be fewer checks for coherence of the given set of policies, and more opportunities to introduce a bug). Best regards, Robert

On Mon, Dec 8, 2008 at 9:35 AM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
From: Stjepan Rajko
I am inclined to think that the implementation you have now (minus the asserts and the nomenclature) covers all of the uses fairly well. Existence of in-between use cases (conditionally monitored values, e.g., log whenever the temperature is over 35 degrees Celsius) leads me to believe that neither of {monitored value, conditional value} should be implemented in terms of the other. I think they should be implemented under a common abstraction, and I think your implementation implements that abstraction (again, minus the asserts and the nomenclature).
I think the current implementation is not the most suitable one for monitored values. The assignment operator is:
  if( constraint()(v) )
      _value() = v;
  else
      error_handler()(_value(), v, _constraint());
So calling the monitor (error_handler) excludes assignment of the value, unless the monitor performs the assignment by itself. This is a bit clumsy -- e.g., in the case of logging the temperature if it exceeds a threshold, the monitor would not only have to log, but also to assign the value.
I see the implementation of monitored values' assignment a bit differently:
  if( _monitor(_value, new_value) ) // monitor decides whether the value should be assigned
      _value = new_value;           // but does not perform the assignment by itself
Then, conditionally-monitored values extend this by defining the following monitor callback:
  if( _condition(new_value) ) // no need to invoke the monitor
      return true;
  else                        // invoke the monitor
      return _monitor(old_value, new_value, _condition);
And finally, constrained value would be a conditionally-monitored value, where the condition is the constraint and the inner monitor callback is the error policy.
Then you have a different mechanism for monitored vs. conditionally monitored values. Instead, you could have a wrapper for the policy that does the assignment for you, so that purely_monitored_value<T, action> expands to:

  what_is_now_called_constrained<T, always_false, always_assign<action> >
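One way to read that suggestion, using the error-policy call signature quoted earlier in this exchange (always_assign and always_false are illustrative names, not library components):

  template <typename Action>
  struct always_assign
  {
      Action action;
      always_assign(Action a = Action()) : action(a) {}

      template <typename V, typename C>
      void operator()(V& value, const V& new_value, C&) const
      {
          action(value, new_value);   // notify: log, fire a signal, ...
          value = new_value;          // then perform the assignment the test refused
      }
  };

  // purely_monitored_value<T, Action> could then simply be a typedef for
  // constrained<T, always_false, always_assign<Action> >.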
For now, I would be plenty happy if you just took out the asserts, or made them optional (with defaulting to asserts, if you wish). That way I can at least start experimenting with your library in a monitored_value context, and let you know how it goes (I have use cases for this).
By making the invariant asserts optional, do you mean something like wrapping them in a conditional compilation macro (like BOOST_CONSTRAINED_VALUE_NO_INVARIANT_ASSERTS) so they can be turned off globally?
That would be fine.
I still can't convince myself of the idea of separating the invariant from the test. IMO the guarantee of the invariant is a strong point of the library. Giving up the guarantee not only may make the library more complicated for the users, but it may also lower the value of the library as a debugging device (since there will be fewer checks for coherence of the given set of policies, and more opportunities to introduce a bug).
You do *not* give up the invariant. You are just saying that the test and the invariant are not the same thing. You still always guarantee the invariant. Not only that, but allowing wiggle room between the test and invariant allows you to guarantee invariants where before you couldn't. Furthermore, I think that the case where the test and invariant are the same thing is very important, and should take a prominent place in the library documentation. For example: take what you have in the documentation right now that depends on test <==> invariant, and call it "Perfectly Constrained Values" or "Verifiably/testably Constrained Values" or something. Then briefly discuss the test ==> invariant case as "look, you can do this too, just understand that the test doesn't test for the invariant exactly". If you think about it, you are already separating the test from the invariant in your advanced examples. Think about the object that uses the library to keep track of its min/max. The test checks for whether you have crossed the previous min/max. Sure, you could say the invariant is the same: "the object is between the min and max present in the constraint object". But really, what kind of guarantee is this? If I need to look at the constraint to figure out what I'm being guaranteed, I might as well look at the value itself and see where it stands. I would consider this as "no invariant". There, you already have docs for this case :-) Best, Stjepan

Hi, An example of a date type would be welcome, to show changing the upper bound at runtime and how the values of one variable (month) impact the constraints on another variable (day). Robert, what do you think of adding it to the documentation? Vicente

----- Original Message ----- From: "Jeff Garland" <jeff@crystalclearsoftware.com> To: <boost@lists.boost.org>; <boost-users@lists.boost.org> Sent: Monday, December 01, 2008 1:30 PM Subject: [boost] [review][constrained_value] Review of Constrained Value Library begins today

Hi, Apologies for the late review.

- What is your evaluation of the design?

It's a simple library with a simple and elegant implementation of a simple concept. However, I have some improvement suggestions on the design:

* I find that the extreme example "bounded object with memory" is not a good example of constrained_value, because it is not constrained at all. It is an example of what can be done with the current interface but shouldn't be possible with a constrained class, i.e. defining an unconstrained type. I don't understand why an error policy can modify the value or the constraint. I would prefer an error policy taking only two in-parameters (by value or const&): the new value and the constraint.

* The wrapping and clipping classes are not exactly constrained values; they are adapting a value of a type to a constrained value. So I would prefer to have a different class for them, e.g. constrain_adaptor, taking a constraint adaptor that would take the value by reference and adapt it to the constraint. I don't think constrain_adaptor will need an error policy.

* I don't know if you will add monitored values to your library. IMO it would be better to have them in a separate class. We can monitor values that are constrained or not. "Bounded object with memory" could be better considered as a monitored value.

* The class bounded_int should take another name, e.g. static_bounded.

* I don't like the free functions to change the constraints, such as change_lower_bound. I think that it is preferable to have specific classes and no typedefs. Classes such as bounded can define the specific operations for changing the bounds. Even if there will be a lot of repeated code, what is more important is the user interface.

* The operators don't always make sense for the application. For example,

  constrained<int, is_even> x;
  ++x;

I would prefer to have a compile error in this particular case instead of an exception. It would be quite interesting to show how a user can catch this error at compile time and avoid this runtime error. Maybe we need a parameter to state that arithmetic operators are available:

  template<typename V, typename C, typename E>
  constrained<V, C, E, with_arithmetic_operators>&
  operator++ (constrained<V, C, E, with_arithmetic_operators>& c);

The bounded hierarchy should have with_arithmetic_operators.

* constrained must be default constructible. The library should provide a means to specify a default value, e.g. constrained<default<int, 1>, is_odd>.

* The default value for excluding the bound is false, which means that the bound is included. There is no example of excluding bounds. Here is one:

  typedef bounded_int<int, 0, 100, throw_exception<>, true, true>::type t;

I think that it would be clearer if instead of stating exclusion we stated inclusion, with the default being true, so we can write

  typedef static_bounded<int, 0, 100, throw_exception<>, false, true>::type t;

Even in this case the bool values are not declarative enough. It would be even better to define two templates, open and close (see below):

  typedef static_bounded<int, open<0>, close<100> >::type t;

* I would like the library to manage constraints between several variables, e.g. the range of valid day values depends on the month.
We may need to set/get each variable independently or all at once. The concept of tuple seems interesting:

  typedef static_bounded<int, 1, 12> month;
  typedef constrained<int, between_1_31> day;

  struct adapt_day_to_month_constraint {
    bool operator()(day& d, month& m) {
      switch (m) {
        case 1: case 3: case 5: case 7: case 8: case 10: case 12:
          d.change_constraint(between_1_31); break;
        case 2:
          d.change_constraint(between_1_28); break;
        default:
          d.change_constraint(between_1_30);
      }
    }
  };

  typedef constrained_tuple<day, month, adapt_day_to_month_constraint> date;

  date d;
  assert(d.get<day>() == 1);
  assert(d.get<month>() == 1);
  d.get<month>() = 2;
  d.get<day>() = 30; // exception

The current interface could also be used as

  typedef constrained<tuple<day, month>, adapt_day_to_month_constraint> date;

but we cannot assign the values independently. Of course the user can implement the constrained_tuple class himself. If I have time I will try to implement it.

* The number of parameters of the bounded class is already too high:

  template <
      typename ValueType,
      typename LowerType = ValueType,
      typename UpperType = LowerType,
      typename ErrorPolicy = throw_exception<>,
      typename LowerExclType = boost::mpl::false_,
      typename UpperExclType = LowerExclType,
      typename CompareType = std::less<ValueType>
  >
  struct bounded;

One possibility could be to group the bound-related parameters in a bound traits class:

  template <typename ValueType, bool Closed = true>
  struct bound_traits {
    typedef ValueType value_type;
    typedef boost::mpl::bool_<Closed> incl_type;
  };

We can define the open/close helper classes as follows:

  template <typename ValueType = undefined>
  struct open : bound_traits<ValueType, false> {};

  template <typename ValueType>
  struct close : bound_traits<ValueType, true> {};

So the class bounded could go from 7 to 5 parameters:

  template <
      typename ValueType,
      typename LowerTraits = close<ValueType>,
      typename UpperTraits = LowerTraits,
      typename ErrorPolicy = throw_exception<>,
      typename CompareType = std::less<ValueType>
  >
  struct bounded;

and be used as follows:

  typedef bounded< int, open<>, close<> >::type t;

which, after managing the undefined type, should be equivalent to

  typedef bounded< int, open<int>, close<int> >::type t;

The same could be applied to bounded_int:

  template <
      typename ValueType,
      ValueType LowerBound,
      ValueType UpperBound,
      typename ErrorPolicy = throw_exception<>,
      bool LowerExcl = false,
      bool UpperExcl = LowerExcl,
      typename CompareType = std::less<ValueType>
  >
  struct bounded_int;

I would want to be able to write

  typedef static_bounded<int, open_c<0>, close_c<100> >::type t;

  template <typename ValueType, ValueType Bound, bool Closed>
  struct static_bound_traits {
    typedef boost::mpl::integral_c<ValueType, Bound> value_type;
    typedef boost::mpl::bool_<Closed> incl_type;
  };

  template <typename ValueType, ValueType Bound> struct open_c;
  template <typename ValueType, ValueType Bound> struct close_c;

  template <int Bound> struct open_c<int, Bound>  : static_bound_traits<int, Bound, false> {};
  template <int Bound> struct close_c<int, Bound> : static_bound_traits<int, Bound, true> {};

  template <
      typename ValueType,
      typename LowerBound,
      typename UpperBound,
      typename ErrorPolicy = throw_exception<>,
      typename CompareType = std::less<ValueType>
  >
  struct static_bounded;

  typedef static_bounded<char, open_c<char, 0>, close_c<char, 100> >::type t;

Boost.Parameters could also be useful.

- What is your evaluation of the implementation?

I was very impressed by the way static and dynamic constrained types have been solved. This is a very elegant implementation.
I would like the library to ensure that the size of the constrained class is equal to that of the underlying type, at least for those classes that can change neither the constraint nor the error policy at runtime.

- What is your evaluation of the documentation?

Good enough. Adding more examples would be welcome, such as:

* Application domain classes using (by inheritance and by containment) a constrained class, to see what the user needs to do when he needs to add some methods and doesn't want some inherited methods.
* A Date type, which would show changing the upper bound at runtime and how the values of one variable (month) impact the constraints on another variable (day).
* The example of a static lower bound and a runtime dynamic bound could be a good one to show how the constrained class can be specialized.

- What is your evaluation of the potential usefulness of the library?

Constrained values are used by a lot of applications. Boost.ConstrainedValue does it in an elegant way. So yes, it will be very useful.

- Did you try to use the library? With what compiler? Did you have any problems?

No, I have not taken the time.

- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?

In-depth study of the documentation, implementation and the review posts.

- Are you knowledgeable about the problem domain?

Except for the floating point issues, the constrained value domain is quite simple, so yes, I consider I know it.

I'm not sure the library should be accepted in its current state. I would like to say that it should be accepted IF some of the following suggestions are taken into account:

* separation between constrained and constraint_adaptor
* use of classes instead of typedefs and removal of the free functions
* specialization of the operators with a possible with_arithmetic_operators parameter
* adding a default value template parameter to the value type
* reduction of the number of parameters of the class bounded (possible use of open<> and close<>)
* reduction of the number of parameters of the class bounded_int (possible use of open_c<> and close_c<>)

Hoping some of these suggestions can improve your library, Vicente

From: vicente.botet
* I find that the extreme example "bounded object with memory" is not a good example of constrained_value because not constrained at all.
Yes, as I have already explained, this is not supposed to be a typical ("good") example. This is supposed to be an "extreme" example. The introduction to the examples section says "In this section you will find some more advanced, useful or crazy examples of what can be done using the library. For introductory examples, see Tutorial section." and I think this example fits there well.
I don't understand why an error policy can modify the value or the constraint. I would prefer an error policy taking only two in-parameters (by value or const&) the new value and the constraint.
I have already answered to this in the other post.
* The wrapping and clipping classes are not exactly constrained values; they are adapting a value of a type to a constrained value.
Ditto.
* I don't know if you will add monitored values to your library. IMO it will be better to have them in a separated class.
Or, if it's reasonable, implement constrained objects in terms of monitored objects.
We can monitor values that are constrained or not. "Bounded object with memory" could be better considered as a monitored value.
I agree.
* The class bounded_int should take another name as e.g. static_bounded.
This may be a confusing name too. I'd rather reserve the name static_bounded for a class using a static within_bounds predicate (something similar to what Jesse Perla was asking for on the users group).
* I don't like the free functions to change the constraints, such as change_lower_bound. I think that it is preferable to have specific classes and no typedefs. Classes such as bounded can define the specific operations for changing the bounds.
I disagree here. This would mean unnecessary duplication of code, and I'm definitely opposed to it.
* The operators don't always make sense for the application. For example, constrained<int, is_even> x; ++x;
I would prefer to have a compile error in this particular case instead of an exception. It will be quite interesting to show how a user can catch this error at compile time and avoid this runtime error.
Leaving aside the presumable complexity of the implementation, this would be quite inconsistent behaviour. Why should "++x" behave in a different way than "x += 1"? What if the user wants this to be a runtime error, or wants to use the object in a generic function (which would contain the "++x" expression, but not necessarily invoke it)?
template<typename V , typename C , typename E > constrained< V, C, E , with_arithmetic_operators> & operator++ (constrained< V, C, E,with_arithmetic_operators> &c)
the bounded hierarchy should have with_arithmetic_operators.
Then for the even object we should also ban, e.g., *=, although it makes perfect sense?
* constrained must be default constructible. The library should provide a means to specify a default value, e.g. constrained<default<int, 1>, is_odd>.
I think this is too general a utility to belong to this library (it's similar to value_initialized in Boost.Utility).
* The default value for excluding the bound is false, which means that the bound is included. There is no example of excluding bounds. Here is one
What do you exactly mean by saying "there is no example"? The tutorial contains a section titled "Bounded objects with open ranges".
I think that it will more clear if instead of stating exclusion we state inclusion, and the default be true,
So we can write typedef static_bounded<int, 0, 100, throw_exception<>, false, true>::type t;
A value-initialised bool has the value of false. Therefore, bounds exclusion is used rather than bounds inclusion, so the default is always "bounds included" when the bounds inclusion indicators are default-constructed.
* I would like the library to manage constraints between several variables, e.g. the range of valid day values depends on the month. We may need to set/get each variable independently or all at once. The concept of tuple seems interesting.
Indeed, sounds interesting, but I wonder if it would be needed frequently enough to make this part of the library.
The current interface could also be used as
typedef constrained<tuple<day, month>, adapt_day_to_month_constraint > date;
but we can not assign the value independently.
That's right. What I'd rather do is to define a simple class containing the constrained day and month, and providing functions to get/set the members. The setter for the month would additionally adjust the upper bound of the day.
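A rough sketch of such a class, assuming the bounded<int>::type / bounded_int typedefs and the change_upper_bound free function mentioned elsewhere in this thread; initialisation of the members is glossed over because it depends on constructor details not discussed here:

  class date
  {
  public:
      int day() const   { return day_.value(); }
      int month() const { return month_.value(); }

      void set_day(int d) { day_ = d; }   // throws if d exceeds the current month's length

      void set_month(int m)
      {
          month_ = m;                                   // throws if m is outside [1, 12]
          change_upper_bound(day_, days_in_month(m));   // the day's valid range follows the month
          // what to do if the current day no longer fits (clip it, throw, ...)
          // is exactly the policy decision such an example should illustrate
      }

  private:
      static int days_in_month(int m)
      {
          static const int days[] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
          return days[m - 1];
      }

      bounded<int>::type day_;                 // runtime bounds, meant to start as [1, 31]
      bounded_int<int, 1, 12>::type month_;
  };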
Of course the user can implement the constrained_tuple class himself. If I have time I will try to implement it.
If you do, I'd love to hear from you how did it go.
* The number of parameters of the bounded class is already too high.
Most of them (ordered from most to least used) have default values, so I don't think this is a serious problem. Your solution reduces them from 7 to 5, which may be seen as not much... However, it looks interesting.
Even in this case the bool values are not declarative enough. It would be even better to define two templates, open and close (see below): typedef static_bounded<int, open<0>, close<100> >::type t;
I don't know if this is an optimal solution if for the most common case, when the bounds are included, you have to write more than: typedef static_bounded<int, 0, 100>::type t;
Adding more examples would be welcome, such as: * Application domain classes using (by inheritance and by containment) a constrained class, to see what the user needs to do when he needs to add some methods and doesn't want some inherited methods.
I don't know if such an example is indeed needed; it sounds a bit like showing how to derive from std::string...
* A Date type would show changing the upper bound at runtime, and how the values of one variable (month) impact the constraints on another variable (day).
Might be a nice example, let me consider this.
* The example of a static lower bound and a runtime dynamic bound could be a good one to show how the constrained class can be specialized.
Ditto.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study? In-depth study of the documentation, implementation and the review posts.
Thanks for your time for the review and the discussions. Best regards, Robert

----- Original Message ----- From: "Robert Kawulak" <robert.kawulak@gmail.com> To: <boost@lists.boost.org> Sent: Wednesday, December 10, 2008 4:22 AM Subject: Re: [boost] [review][constrained_value] Review of ConstrainedValueLibrary begins today
* I don't know if you will add monitored values to your library. IMO it will be better to have them in a separated class.
Or, if it's reasonable, implement constrained objects in terms of monitored objects.
Yes, this seems OK. A constrained object can be seen as a monitored one.
* The operators don't always make sense for the application. For example, constrained<int, is_even> x; ++x;
I would prefer to have a compile error in this particular case instead of an exception. It will be quite interesting to show how a user can catch this error at compile time and avoid this runtime error.
Leaving aside the presumable complexity of the implementation, this would be quite inconsistent behaviour. Why should "++x" behave in a different way than "x += 1"?
I didn't say that.
What if the user wants this to be a runtime error, or wants to use the object in a generic function (which would contain the "++x" expression, but not necessarily invoke it)?
I'm asking that the user be able to choose whether he wants arithmetic operators or not, via a with_arithmetic_operators parameter.
template<typename V , typename C , typename E > constrained< V, C, E , with_arithmetic_operators> & operator++ (constrained< V, C, E,with_arithmetic_operators> &c)
the bounded hierarchy should have with_arithmetic_operators.
Then for the even object we should also ban, e.g., *=, although it makes perfect sense?
It should be up to the user to choose whether he wants arithmetic operators for his constrained<int, is_even, throw_exception<>, with_arithmetic_operators>
* constrained must be default constructible. The library should provide a means to specify a default value, e.g. constrained<default<int, 1>, is_odd>.
I think this is too general utility to belong to this library (it's similar to value_initialized in Boost.Utility).
Maybe this could be a general utility, but your library makes some constrained values not default constructible. What can the user do while waiting for this utility? I would like to see a default-constructible odd class in the documentation.
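One thing the user could do today is wrap the type himself; a sketch, assuming constrained<> can be constructed from an initial value (is_odd and odd_int are illustrative names):

  struct is_odd
  {
      bool operator()(int v) const { return v % 2 != 0; }
  };

  struct odd_int : boost::constrained_value::constrained<int, is_odd>
  {
      typedef boost::constrained_value::constrained<int, is_odd> base;

      odd_int() : base(1) {}        // default to 1, which satisfies is_odd
      odd_int(int v) : base(v) {}   // still checked against the constraint
      using base::operator=;
  };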
* The default value for excluding the bound is false, which means that the bound is included. There is no example of excluding bounds. Here is one
What do you exactly mean by saying "there is no example"? The tutorial contains a section titled "Bounded objects with open ranges".
I think that it will more clear if instead of stating exclusion we state inclusion, and the default be true,
So we can write typedef static_bounded<int, 0, 100, throw_exception<>, false, true>::type t;
A value-initialised bool has the value of false. Therefore, bounds exclusion is used rather than bounds inclusion, so the default is always "bounds included" when the bounds inclusion indicators are default-constructed.
Why do you talk about value initialization? The parameter already has a default value. It is enough to change the meaning of the boolean parameter.
* I would like the library to manage constraints between several variables, e.g. the range of valid day values depends on the month. We may need to set/get each variable independently or all at once. The concept of tuple seems interesting.
Indeed, sounds interesting, but I wonder if it would be needed frequently enough to make this part of the library.
The current interface could also be used as
typedef constrained<tuple<day, month>, adapt_day_to_month_constraint > date;
but we can not assign the value independently.
That's right. What I'd rather do is to define a simple class containing the constrained day and month, and providing functions to get/set the members. The setter for the month would additionally adjust the upper bound of the day.
It is OK for me. Could you add this example please?
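For illustration, a minimal sketch of the pattern described above (plain C++, independent of the library's actual API), where setting the month re-checks the day against that month's upper bound:

#include <stdexcept>

class simple_date
{
public:
    simple_date() : month_(1), day_(1) {}

    void set_month(int m)
    {
        if (m < 1 || m > 12)
            throw std::out_of_range("month");
        month_ = m;
        if (day_ > days_in(month_))          // the month change tightens the day's upper bound
            throw std::out_of_range("day");
    }

    void set_day(int d)
    {
        if (d < 1 || d > days_in(month_))
            throw std::out_of_range("day");
        day_ = d;
    }

    int month() const { return month_; }
    int day() const { return day_; }

private:
    static int days_in(int m)
    {
        static const int table[] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
        return table[m - 1];                 // leap years ignored for brevity
    }

    int month_, day_;
};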
* The number of parameters of the bounded class is already too high.
Most of them (ordered from most to least used) have default values, so I don't think this is a serious problem. Your solution reduces them from 7 to 5, which may be seen as not much... However, it looks interesting.
Even in this case the bool values are not declarative enough. It would be even better to define two templates, open and close (see below): typedef static_bounded<int, open<0>, close<100> >::type t;
I don't know if this is an optimal solution if for the most common case, when the bounds are included, you have to write more than:
typedef static_bounded<int, 0, 100>::type t;
When I read static_bounded<int, 0, 100> I need to know what the defaults are to tell whether the bounds are included or not. When I read static_bounded<int, open<0>, close<100> > there is no issue; it is explicit. Which is more readable: |1,100| or [1,100], (1,100)?
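For illustration, a minimal sketch of the open/close idea (hypothetical tags, not the library's interface), where each bound carries its value and its inclusion:

template <int N> struct open  { static const int value = N; static const bool excluded = true;  };
template <int N> struct close { static const int value = N; static const bool excluded = false; };

// A hypothetical adapter could then forward to the existing form shown earlier:
//   typedef static_bounded<int, open<0>::value, close<100>::value,
//                          throw_exception<>,
//                          open<0>::excluded, close<100>::excluded>::type t;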
Adding more examples would be welcome, such as: * Application domain classes using a constrained class (by inheritance and by containment), to show what the user needs to do when he needs to add some methods and doesn't want some of the inherited methods.
I don't know if such an example is really needed; it sounds a bit like showing how to derive from std::string...
Perhaps. But your library adds a lot of arithmetic operators, and I would like to see in the documentation how a user can use the constrained class when he doesn't want these operators to be defined.
* A Date type would show changing the upper bound at runtime, and how the value of one variable (month) impacts the constraints on another variable (day).
Might be a nice example, let me consider this.
I think this example is a must. Everyone thinks of the Date data type when talking about constrained values. Regards, Vicente

From: vicente.botet
* The operators do not always make sense for the application. For example, constrained<int, is_even> x; ++x;
I would prefer to have a compile error in this particular case instead of an exception. It would be quite interesting to show how a user can catch this error at compile time and avoid the runtime error.
Setting aside the presumable complexity of the implementation, this would be quite inconsistent behaviour. Why should "++x" behave differently from "x += 1"?
I didn't say that.
You haven't mentioned so far that you want *all* operators to be turned off. You said "in this particular case", so I assumed you wanted "++x" to fail to compile while "x += 1" would throw. Sorry for the misinterpretation.
Then for the even object we should also ban, e.g., *=, although it makes perfect sense?
It should be up to the user to choose whether he wants arithmetic operators for his constrained<int, is_even, throw_exception<>, with_arithmetic_operators>.
I don't see much value in only giving an "all or nothing" option (since most of the other operators may still be useful), and I don't think it would be reasonable to implement the possibility of selectively excluding only some of the operators.
* constrained must be default-constructible. The library should provide a means to specify a default value, e.g. constrained<default<int, 1>, is_odd>.
I think this is too general a utility to belong to this library (it's similar to value_initialized in Boost.Utility).
Maybe this could be a general utility, but your library makes some constrained values not default-constructible. What can the user do while waiting for this utility?
Ask the maintainer of Boost.Utility to add it? I think that is a better solution than adding it somewhere where it doesn't belong...
I think it would be clearer if, instead of stating exclusion, we stated inclusion with a default of true,
so we could write typedef static_bounded<int, 0, 100, throw_exception<>, false, true>::type t;
A value-initialised bool has the value of false. Therefore, bounds exclusion is used rather than bounds inclusion, so the default is always "bounds included" when the bounds exclusion indicators are default-constructed.
Why do you talk about value initialisation? The parameter already has a default value. It is enough to change the meaning of the boolean parameter.
For compile-time indicators of bounds exclusion/inclusion we have to rely on default construction (we cannot initialise mpl::true_ with a bool value). Run-time indicators have to behave the same way to allow for full interchangeability, so a default-constructed bool should represent the most common use case. If we had inclusion indicators, the following:
typedef within_bounds<int, int, bool, bool> dynamic_bounds;
constrained<int, dynamic_bounds> x(dynamic_bounds(-10, 10));
would construct x with the range (-10, 10) rather than [-10, 10]. We could make the within_bounds constructor take two extra arguments that default to true, but then the same example with compile-time indicators would not compile. Apart from that, I consider the choice of inclusion vs. exclusion quite arbitrary and there's no point in arguing which one is better.
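For illustration, a small sketch of the interchangeability point (using boost::mpl::bool_ for the compile-time case; illustrative only, not the library's code):

#include <boost/mpl/bool.hpp>

// Works for both a run-time indicator (bool) and a compile-time one
// (boost::mpl::false_ / true_): a default-constructed indicator converts
// to false, i.e. "bound included", which is the common case.
template <typename ExcludeBound>
bool bound_excluded(ExcludeBound e = ExcludeBound())
{
    return e;
}

// bound_excluded<bool>()               == false
// bound_excluded<boost::mpl::false_>() == false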
* The number of parameters of the bounded class is already too high.
Most of them (ordered from most to least used) have default values, so I don't think this is a serious problem. Your solution reduces them from 7 to 5, which may be seen as not much... However, it looks interesting.
Even in this case the bool values are not declarative enough. It would be even better to define two templates, open and close (see below): typedef static_bounded<int, open<0>, close<100> >::type t;
I don't know if this is an optimal solution if for the most common case, when the bounds are included, you have to write more than:
typedef static_bounded<int, 0, 100>::type t;
When I read static_bounded<int, 0, 100> I need to know what the defaults are to tell whether the bounds are included or not. When I read static_bounded<int, open<0>, close<100> > there is no issue; it is explicit. Which is more readable: |1,100| or [1,100], (1,100)?
It is readable, but not always readability == usability. I'll try to make a complete implementation and see how it works first.
* A Date type would show changing the upper bound at runtime, and how the value of one variable (month) impacts the constraints on another variable (day).
Might be a nice example, let me consider this.
I think this example is a must. Everyone thinks of the Date data type when talking about constrained values.
I wouldn't call it "a must", it's simply one of many use cases and it doesn't show any functionality of the library that is not already covered in the tutorial. However, it's quite nice and, as I said, I'll think about adding it (after trying to implement it). Best regards, Robert
participants (12)
- Chris
- Gordon Woodhull
- Jeff Flinn
- Jeff Garland
- Michael Marcin
- Paul A. Bristow
- Ravi
- Robert Kawulak
- Sebastian Redl
- Stjepan Rajko
- vicente.botet
- Zach Laine