[review][constrained_value] Review of Constrained Value Library begins today

Hi all,

The review of Robert Kawulak's Constrained Value library begins today, December 1, 2008, and will end on December 10th -- I will be the review manager. Please post reviews to the developer list.

Here's the library synopsis:

The Boost Constrained Value library contains class templates useful for creating constrained objects. A simple example is an object representing an hour of a day, for which only integers from the range [0, 23] are valid values.

    bounded_int<int, 0, 23>::type hour;
    hour = 20; // OK
    hour = 26; // exception!

Behavior in case of assignment of an invalid value can be customized. The library has a policy-based design to allow for flexibility in defining constraints and behavior in case of assignment of invalid values. Policies may be configured at compile-time for maximum efficiency or may be changeable at runtime if such dynamic functionality is needed.

The library can be downloaded from here: http://rk.go.pl/f/constrained_value.zip

The documentation is also available online here: http://rk.go.pl/r/constrained_value

---------------------------------------------------

Please state in your review whether you think the library should be accepted as a Boost library. Additionally, please consider the following aspects in your review of the library:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

Thanks,

Jeff

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeff Garland Sent: 01 December 2008 13:00 To: boost@lists.boost.org Subject: [boost] [review][constrained_value] Review of Constrained Value Library begins today
The review of Robert Kawulak's Constrained Value library begins today, December 1, 2008, and will end on December 10th -- I will be the review manager. Please post reviews to the developer list.

- What is your evaluation of the design?
OK. The floating-point issue has been ducked for now, wisely I think, and with sufficient explanation -- for now. Though the concepts can also be applied very usefully to floating-point types, before constrained FP can be unleashed on unsuspecting users, I believe we need to establish separately a clear package of functions for handling approximate comparisons, something using the ideas in BOOST_CHECK_CLOSE. I would very much like to see this as a future Boost library component. With sensible defaults that handle the few-epsilon uncertainties that arise from conversion between FP registers and memory, and round-off from calculations, it would be useful in practice without exposing users unfamiliar with floating point to too much risk of nasty surprises.
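For what it's worth, a minimal sketch of the kind of approximate-comparison helper being suggested, in the spirit of BOOST_CHECK_CLOSE; the name and the default tolerance are purely illustrative and not part of the reviewed library:

    #include <algorithm>
    #include <cmath>

    // Relative-tolerance comparison; a real component would want a
    // sensible, platform-aware default for rel_tol.
    bool close_enough(double a, double b, double rel_tol = 1e-9)
    {
        return std::fabs(a - b)
            <= rel_tol * std::max(std::fabs(a), std::fabs(b));
    }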
- What is your evaluation of the implementation?
Looks OK. BUT a suite of tests (promised) is essential before release. These should not only cover the simple features (for which tests may seem a bit over the top, but give a warm feeling), but also some of the 'fancier' features that are much more likely to cause trouble, and for which tests are more likely to highlight problems.
- What is your evaluation of the documentation?
Very good. (I found no spelling mistakes ;-) ) And Robert did not fall into the trap of thinking that using Doxygen means you don't need to write any other documentation. The usage, rationale and design compromises were discussed sufficiently.
- What is your evaluation of the potential usefulness of the library?
Very useful. An essential building block for reliable code.
- Did you try to use the library?
Used OK previously.
- How much effort did you put into your evaluation?
An hour or so (re-)reading the docs.
- Are you knowledgeable about the problem domain?
Slightly.

Paul

PS In the absence of a decent number of reviewers, it would be very helpful, in judging the value of software (and of reviews), to know the size of the 'user base' of packages submitted for review. Is there any way we can get this information?

---
Paul A. Bristow
Prizet Farmhouse, Kendal, UK LA8 8AB
+44 1539 561830, mobile +44 7714330204
pbristow@hetp.u-net.com

OK here we go with my review. First off, I do think the library should be accepted into Boost (modulo comments below), and I'd like to congratulate Robert on a very nicely presented submission.
- What is your evaluation of the design?
It looks good to me. I'm probably not a potential user of the library, but the design "looks right" based on reading the docs.
- What is your evaluation of the implementation?
Only a quick glance at the source, and a bit of time stepping through some of the examples, but all the bases seem to be covered. However, I did notice that the empty-base-optimisation has been incorrectly applied - so for example sizeof(bounded_int<int, 0, 100>::type) is 8 when compiled with msvc (ie even with EBO support). I believe you would need to use something like:

    compressed_pair<compressed_pair<int, policy1>, policy2>

as the single data member to completely optimise away the overhead when both policies are empty (the usual case).

Also I'm a little surprised that there are no tests as yet: rather lets down an otherwise nice submission.

**** I believe the review manager should not allow full acceptance until a decent set of tests are provided ****
- What is your evaluation of the documentation?
Overall very good, with a nice tutorial that's really all you need to read to get started. But... no documentation on the concepts used by the library, and of course suitable concept archetypes should be used to test the library.
- What is your evaluation of the potential usefulness of the library?
Hard to say, but I suspect quite useful for certain domains. I believe the usefulness would be increased if it supported bounded floating-point values - I believe the rationale for not supporting this (arguments to assignment may change due to rounding when stored) can be overcome - but it would require careful testing.
- Did you try to use the library? With what compiler? Did you have any problems?
Just a quick test with msvc; had there been a set of tests I would have run them with more compilers...
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
A couple of hours reading docs, browsing source, and building a couple of examples.
- Are you knowledgeable about the problem domain?
Not especially :-( John.

On Wed, Dec 3, 2008 at 10:32 AM, John Maddock <john@johnmaddock.co.uk> wrote:
Also I'm a little surprised that there are no tests as yet: rather lets down an otherwise nice submission.
**** I believe the review manager should not allow full acceptance until a decent set of tests are provided ****
Completely agree -- any acceptance will be conditioned on the addition of tests. I actually asked Robert about this prior to the review because I could see the lack of tests becoming an issue for reviewers. I decided to go forward with the review since I agreed with his point that the examples basically exercise most of the library's features. Also, http://www.boost.org/community/reviews.html#Review_Manager didn't provide clear guidance on this point. It's possible there should be a clearer policy on this for the future. Jeff

----- Original Message ----- From: "Jeff Garland" <azswdude@gmail.com> To: <boost@lists.boost.org> Sent: Wednesday, December 03, 2008 9:47 PM Subject: Re: [boost] [review][constrained_value] Review of Constrained ValueLibrary begins today
On Wed, Dec 3, 2008 at 10:32 AM, John Maddock <john@johnmaddock.co.uk> wrote:
Also I'm a little surprised that there are no tests as yet: rather lets down an otherwise nice submission.
**** I believe the review manager should not allow full acceptance until a decent set of tests are provided ****
Completely agree -- any acceptance will be conditioned on the addition of tests. I actually asked Robert about this prior to the review because I could see the lack of tests becoming an issue for reviewers. I decided to go forward with the review since I agreed with his point that the examples basically exercise most of the library's features. Also,
http://www.boost.org/community/reviews.html#Review_Manager
didn't provide clear guidance on this point. It's possible there should be a clearer policy on this point for the future.
Hi,

FYI, http://www.boost.org/community/reviews.html#Review_Manager contains:

"The Review Manager: Checks the submission to make sure it really is complete enough to warrant formal review. See the Boost Library Requirements and Guidelines. If necessary, work with the submitter to verify the code compiles and runs correctly on several compilers and platforms."

Boost Library Requirements and Guidelines contains:

"Provide sample programs or confidence tests so potential users can see how to use your library. Provide a regression test program or programs which follow the Test Policies and Protocols."

Test Policies and Protocols contains:

"Test Policy Required: Every Boost library should supply one or more suitable test programs to be exercised by the Boost regression test suite. In addition to the usual compile-link-run tests expecting successful completion, compile-only or compile-and-link-only tests may be performed, and success for the test may be defined as failure of the steps. Test program execution must report errors by returning a non-zero value. They may also write to stdout or stderr, but that output should be relatively brief. Regardless of other output, a non-zero return value is the only way the regression test framework will recognize an error has occurred. Note that test programs to be included in the status tables must compile, link, and run quickly since the tests are executed many, many times. Libraries with time consuming tests should be divided into a fast-execution basic test program for the status tables, and a separate full-coverage test program for exhaustive test cases. The basic test should concentrate on compilation issues so that the status tables accurately reflect the library's likelihood of correct compilation on a platform. If for any reason the usual test policies do not apply to a particular library, an alternate test strategy must be implemented. A Jamfile to drive the regression tests for the library."

Best,
Vicente

Hi John,
However, I did notice that the empty-base-optimisation has been incorrectly applied - so for example sizeof(bounded_int<int, 0, 100>::type) is 8 when compiled with msvc (ie even with EBO support).
AFAICT the EBO support is tuned fine, and on GCC it works as expected. I too was getting slightly worse results with MSVC, but I suspect this is rather the fault of MSVC. However, if I missed something, I'd be grateful if you could point me to the place which is the problem.
Also I'm a little surprised that there are no tests as yet: rather lets down an otherwise nice submission.
**** I believe the review manager should not allow full acceptance until a decent set of tests are provided ****
Conditional acceptance would be OK for me and wouldn't change anything, because I wouldn't submit the code to SVN without completing the regression tests anyway. I decided to do this after the review, because I want the tests to be written against a stable interface (and the review is likely to change it).
But... no documentation on the concepts used by the library, and of course suitable concept archetypes should be used to test the library.
Which concepts do you have in mind -- the policies or things like CopyConstructible? The policies are described in the documentation of the constrained class template. Did I miss something there?
A couple of hours reading docs, browsing source, and building a couple of examples.
Thank you for your time and your vote. ;-) Regards, Robert

Robert Kawulak wrote:
Hi John,
However, I did notice that the empty-base-optimisation has been incorrectly applied - so for example sizeof(bounded_int<int, 0, 100>::type) is 8 when compiled with msvc (ie even with EBO support).
AFAICT the EBO support is tuned fine, and on GCC it works as expected. I too was getting slightly worse results with MSVC, but I suspect this is rather the fault of MSVC. However, if I missed something, I'd be grateful if you could point me to the place which is the problem.
At present you have:

    compressed_pair<empty1, empty2> member1;
    T member2;

Whether or not "member1" is treated as empty by the compiler depends upon the ABI used. If you used a single data member consisting of nested compressed_pairs:

    compressed_pair<
        compressed_pair<definitely_not_empty, maybe_empty1>,
        maybe_empty2> single_member;

Then you would get EBO on more compilers IMO.
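For illustration, here is a minimal sketch of the layout comparison John describes, using boost::compressed_pair; the policy names are hypothetical stand-ins, and the actual sizes printed depend on the compiler's ABI:

    #include <boost/compressed_pair.hpp>
    #include <iostream>

    struct policy1 {};  // stateless constraint policy (empty)
    struct policy2 {};  // stateless error policy (empty)

    // Two data members: an empty member still occupies storage,
    // so EBO may not collapse the pair on some ABIs (e.g. msvc).
    struct two_members {
        boost::compressed_pair<policy1, policy2> policies;
        int value;
    };

    // One nested compressed_pair member, as suggested: the empty
    // policies can be folded away via the empty-base optimisation.
    struct one_member {
        boost::compressed_pair<
            boost::compressed_pair<int, policy1>, policy2> data;
    };

    int main()
    {
        std::cout << sizeof(two_members) << ' '
                  << sizeof(one_member) << '\n';  // often e.g. 8 4
    }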
But... no documentation on the concepts used by the library, and of course suitable concept archetypes should be used to test the library.
Which concepts do you have in mind -- the policies or things like CopyConstructible? The policies are described in the documentation of the constrained class template. Did I miss something there?
I was thinking of the interface required to be provided by the constraint-checking and error-handling template parameters - perhaps I missed it? I realise it was mostly covered by the examples/tutorial, but I didn't see a reference page for these?

Cheers, John.

Hi John,
From: John Maddock
At present you have:
compressed_pair<empty1, empty2> member1; T member2;
No, it's different:

    class constrained {
        struct helper : compressed_pair<empty1, empty2> {
            T value;
        };
        helper member;
    };
I was thinking of the interface required to be provided by the constraint-checking and error-handling template parameters, perhaps I missed it? I realise it was mostly covered by the examples/tutorial, but I didn't see a reference page for these?
The interfaces are described here http://tinyurl.com/5wnfpj in the Parameters section. Moreover, there may be additional requirements imposed by particular functions (e.g., swap requires that the policies are swappable) -- these are mentioned in the documentation of those functions. Best regards, Robert

Hi Robert, Jeff Garland skrev:
Hi all,
The review of the Robert Kawulak's Constrained Value library begins today December 1, 2008, and will end on December 10th -- I will be the review manager. Please post reviews to the developer list.
I'm basing this review on reading the docs, mostly.

1. I don't like that the exception object broken_constraint takes an std::string argument, since that may throw std::bad_alloc. I'm fine with that in my own code, but I think libraries should try to avoid it. See also http://www.boost.org/community/error_handling.html

2. I prefer functions like

    change_constraint(c, is_positive());

to be members. We had a similar discussion for some functions in the Bimap review. Basically, it is nicer for the user to press . to get a list of functions than to search the Boost documentation for the right function. This comment applies to all the free-standing functions.

3. In this code:

    ignoring_even_int j(1); // exception! (cannot ignore construction with invalid value)

is there no way to get an assertion instead of the exception?

4. Is your library powerful enough to implement something like http://biology.nmsu.edu/software/probability/index.html ? If not, what would it take to make that possible?

5. The docs say "Constrained objects can be used as a debugging tool to verify that a program operates only on an assumed subset of possible values. However, it is usually desirable to check the assumptions only in debug mode and avoid the checks in release code for performance reasons". Have you made any benchmarks? In particular, I would be interested in those for bounded_int or bounded_float. (bounded_float/bounded_double, bounded_unsigned etc. are provided, no?)

6. My main use case would be to use constrained floating-point values, but the docs state: "Why C++'s floating point types shouldn't be used with bounded objects? [...] there exist a number of myths of what IEEE-compliance really entails from the point of view of program semantics. We shall discuss the following myths, among others: [...] "If x < 1 tests true at one point, then x < 1 stays true later if I never modify x." I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?

Anyway, here's my review vote:
******************************
I think this library should be accepted into Boost provided that minor changes are done. The docs are nice and clear, and the library gives some quite interesting examples of how powerful the library is. Nice work! I do think that the above issues should be thought really hard about, especially 1, 2, 3 and 6. If that is not possible, I change my vote to no.

best regards

-Thorsten

Hi Thorsten,
From: Thorsten Ottosen
1. I don't like that the exception object broken_constraint takes an std::string argument, since that may throw std::bad_alloc. I'm fine with that in my own code, but I think libraries should try to avoid it. See also
It does so because its base class (std::logic_error) requires this (it doesn't have a default constructor). I think this is fine for the default case, and you may change the type of exceptions thrown very easily should the possibility of throwing std::bad_alloc be a serious issue.
2. I prefer functions like
change_constraint(c, is_positive());
to be members. We had similar discussion for some function in the Bimap review. Basically, it is nicer for the user to press . to get a list of functions than to search the boost documentation for the right function.
This comment applies to all the free-standing functions.
Better editor support is indeed nice, but on the other hand it is advised[*] to prefer a minimal and simple class interface, and I strongly agree with this. If something can be done without the need to access the private data directly, then it shouldn't be implemented as a member function. This improves encapsulation and helps to maintain order in the code. [*] Herb Sutter, Andrei Alexandrescu: C++ Coding Standards, item 44: Prefer writing nonmember nonfriend functions.
3. In this code:
ignoring_even_int j(1); // exception! (cannot ignore construction with invalid value)
is there no way to get an assertion instead of the exception?
Sure there is, simply provide an error policy similar to the one used in the example, but with an assertion instead of a throw. Also, if you provide an error policy with an empty invocation operator, the constrained class will fire an assertion by itself (it verifies every result of error policy invocation). I made the example with an exception rather than an assertion consciously, because it's better (safer) if people follow an example that always works, not only in debug mode.
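For illustration, such a policy could look roughly like this; the class name is invented, and the call shape follows the (value, new value, constraint) prototype quoted later in this thread, so check the reference for the exact signature:

    #include <cassert>

    struct assert_error_policy {
        template <typename V, typename C>
        void operator () (const V &, const V &, const C &) const
        {
            // Compiles to nothing under NDEBUG -- which is exactly
            // Robert's point about debug-only checking.
            assert(!"constrained object: invalid value");
        }
    };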
4. Is your library powerful enough to implement something like
http://biology.nmsu.edu/software/probability/index.html
? If not, what would it take to make that possible?
After a brief glance (sorry, I have no time right now for a deeper analysis) I suspect that the Constrained Value library could provide the basic building blocks for this library, but right now I can't tell how compatible the designs of the two libraries are.
5. The docs say
"Constrained objects can be used as a debugging tool to verify that a program operates only on an assumed subset of possible values. However, it is usually desirable to check the assumptions only in debug mode and avoid the checks in release code for performance reasons"
Have you made any benchmarks? In particular, I would be interested in those for bounded_int or bounded_float.
Yes, I've compiled some simple examples (with GCC 4.3.2) to assembly code with optimisation on (-O3) and compared the result with the result of compiling the same code, but with unconstrained replaced by the underlying type. The assembly code was identical. For example, the code:

    int fun(int i)
    {
        unconstrained<int>::type u(i);
        u++;
        return u;
    }

compared to an identical version, but with unconstrained<int>::type replaced by int.
(bounded_float/bounded_double, bounded_unsigned etc. are provided, no?)
As to bounded_float/bounded_double, see point 6. As to bounded_unsigned - no, because bounded_int works with any type whose values can be used as template arguments (the 'int' stands for 'integral' rather than 'int' alone). So you don't need another type, just use bounded_int<unsigned, ...>.
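For example (a usage sketch; the exact header and namespace are as per the library docs):

    bounded_int<unsigned, 0u, 100u>::type percent;  // a percentage in [0, 100]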
6. My main use case would be to use constrained floating point values, but the docs state:
"Why C++'s floating point types shouldn't be used with bounded objects? [...] there exist a number of myths of what IEEE-compliance really entails from the point of view of program semantics. We shall discuss the following myths, among others: [...] "If x < 1 tests true at one point, then x < 1 stays true later if I never modify x."
I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?
I also don't like this, but this is the reality (see http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.18 too). It's very important to realise that floating-point operations may not be repeatable. Unfortunately, to be able to have a reliable constrained object, the comparison of two values *must* be repeatable. I've heard of libraries for "reliable floating-point computations" and of compiler switches that turn some surprising floating-point optimisations off, but I still don't feel competent enough in this respect to find a way that *guarantees* proper working of constrained objects. If somebody does -- you are welcome to help. ;-)
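As an illustration of the non-repeatability Robert means, consider this sketch; whether the two tests actually disagree depends on the platform and compiler flags (classically x87 code without -ffloat-store):

    #include <iostream>

    int main()
    {
        double third = 1.0 / 3.0;
        bool direct = (third * 3.0 < 1.0);    // may test an 80-bit register value
        volatile double stored = third * 3.0; // forces rounding to a 64-bit double
        bool via_memory = (stored < 1.0);
        // In extended precision the product is exactly 1 - 2^-54 (test true),
        // but it rounds to exactly 1.0 as a double (test false).
        std::cout << direct << ' ' << via_memory << '\n';
    }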
Anyway, here's my review vote:
******************************
I think this library should be accepted into Boost provided that minor changes are done. The docs are nice and clear, and the library gives some quite interesting examples of how powerful the library is. Nice work!
I do think that the above issues should be thought really hard about, especially 1, 2, 3 and 6. If that is not possible, I change my vote to no.
Thanks for your time. I hope we can work out a consensus on the items upon which we don't agree. ;-) Best regards, Robert

Robert Kawulak skrev:
Hi Thorsten,
2. I prefer functions like
change_constraint(c, is_positive());
to be members. We had similar discussion for some function in the Bimap review. Basically, it is nicer for the user to press . to get a list of functions than to search the boost documentation for the right function.
This comment applies to all the free-standing functions.
Better editor support is indeed nice, but on the other hand it is advised[*] to prefer a minimal and simple class interface, and I strongly agree with this. If something can be done without the need to access the private data directly, then it shouldn't be implemented as a member function. This improves encapsulation and helps to maintain order in the code.
[*] Herb Sutter, Andrei Alexandrescu: C++ Coding Standards, item 44. Prefer writing nonmember nonfriend functions.
It has been advocated in other contexts as well, e.g. by Scott Meyers. The author of the Bimap library was, initially, against it, citing some of the same sources as you do. He ended up being very happy with the members IIRC. The discussion in [*] and other places is not great IMO.

(a) They claim that "encapsulation" is reduced by "minimizing dependency", and they do so without having defined what the two terms mean (precisely). But often the dependency is illusory: to implement the free-standing function efficiently it needs to be made a friend. In that case, the only valid reason for making it free-standing is because we must (e.g. to get ...).

Looking at your code,

    template <typename V, typename C, typename E, typename T>
    void change_constraint(constrained<V, C, E> & c, const T & new_constraint)
    {
        constrained<V, C, E> tmp(c.value(), new_constraint, c.error_handler());
        c.swap(tmp);
    }

this seems *very inefficient* compared to only changing the constraint.

(b) The second claim is that free-standing functions break apart monolithic classes. That may be, but you don't have a monolithic class (you're not even close).

(c) Then they claim it improves genericity, which may or may not be true, but in either case does not apply here.

Added to that, we get all the usual problems with ADL with free-standing functions. And then finally my point: the boost namespace is polluted with names, which makes it near impossible to find the right function, whereas by making it a member that task is trivial. Finally, if we do want to use the encapsulation metaphor, this locality provides better encapsulation than free-standing functions.

In conclusion: making something a non-member is rarely a good idea.
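To make the efficiency argument concrete, here is a toy, self-contained model of an in-place member change_constraint; every name here is invented for illustration, and this is not the library's interface:

    #include <algorithm>
    #include <cassert>

    struct min_bound {                // a tiny constraint: value >= lo
        int lo;
        bool operator () (int v) const { return v >= lo; }
    };

    class toy_constrained {
        int value_;
        min_bound constraint_;
    public:
        toy_constrained(int v, min_bound c) : value_(v), constraint_(c)
        { assert(constraint_(value_)); }

        void change_constraint(min_bound c)  // member, changes in place
        {
            if (!c(value_))
                value_ = c.lo;       // stand-in for an error-policy recovery
            using std::swap;
            swap(constraint_, c);    // commit; no copy of the whole object
        }
    };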
3. In this code:
ignoring_even_int j(1); // exception! (cannot ignore construction with invalid value)
is there no way to get an assertion instead of the exception?
Sure there is, simply provide an error policy similar to the one used in the example, but with an assertion instead of a throw.
Ok, I misunderstood the example, then.
Also, if you provide an error policy with an empty invocation operator, the constrained class will fire an assertion by itself (it verifies every result of error policy invocation). I made the example with an exception rather than an assertion consciously, because it's better (safer) if people follow an example that always works, not only in debug mode.
Whether this is "safer" is certainly a matter of definition.
4. Is your library powerful enough to implement something like
http://biology.nmsu.edu/software/probability/index.html
? If not, what would it take to make that possible?
After a brief glance (sorry, I have no time right now for a deeper analysis) I suspect that the Constrained Value library could provide the basic building blocks for this library, but right now I can't tell how compatible the designs of the two libraries are.
Ok. I hope you have time for a more thorough look.
5. The docs say
"Constrained objects can be used as a debugging tool to verify that a program operates only on an assumed subset of possible values. However, it is usually desirable to check the assumptions only in debug mode and avoid the checks in release code for performance reasons"
Have you made any benchmarks? In particular, I would be interested in those for bounded_int or bounded_float.
Yes, I've compiled some simple examples (with GCC 4.3.2) to assembly code with optimisation on (-O3) and compared the result with the result of compiling the same code, but with unconstrained replaced by the underlying type. The assembly code was identical.
Excellent.
For example, the code:
    int fun(int i)
    {
        unconstrained<int>::type u(i);
        u++;
        return u;
    }
compared to an identical version, but with unconstrained<int>::type replaced by int.
(bounded_float/bounded_double, bounded_unsigned etc. are provided, no?)
As to bounded_float/bounded_double, see point 6. As to bounded_unsigned - no, because bounded_int works with any type whose values can be used as template arguments (the 'int' stands for 'integral' rather than 'int' alone). So you don't need another type, just use bounded_int<unsigned, ...>.
Sorry, missed that.
6. My main use case would be to use constrained floating point values, but the docs state:
"Why C++'s floating point types shouldn't be used with bounded objects? [...] there exist a number of myths of what IEEE-compliance really entails from the point of view of program semantics. We shall discuss the following myths, among others: [...] "If x < 1 tests true at one point, then x < 1 stays true later if I never modify x."
I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?
I also don't like this, but this is the reality (see http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.18 too). It's very important to realise that floating-point operations may not be repeatable. Unfortunately, to be able to have a reliable constrained object, the comparison of two values *must* be repeatable.
I've heard of libraries for "reliable floating-point computations" and of compiler switches that turn some surprising floating-point optimisations off, but I still don't feel competent enough in this respect to find a way that *guarantees* proper working of constrained objects. If somebody does -- you are welcome to help. ;-)
I guess we should come up with something. For my own work I would find something like bounded_float<float,0,1> probability and similar types useful. I suspect it is useful even though it is not exact in all corner cases. -Thorsten

Hi Thorsten, 2008/12/5 Thorsten Ottosen <thorsten.ottosen@dezide.com>:
It has been advocated in other contexts as well, e.g. by Scott Meyers. The author of the Bimap library was, initially, against it, citing some of the same sources as you do. He ended up being very happy with the members IIRC.
The discussion in [*] and other places is not great IMO.
Apparently there are groups of people that prefer either of the approaches, but I wouldn't like the review to become an ideological discussion on general style guidelines like this one.
    template <typename V, typename C, typename E, typename T>
    void change_constraint(constrained<V, C, E> & c, const T & new_constraint)
    {
        constrained<V, C, E> tmp(c.value(), new_constraint, c.error_handler());
        c.swap(tmp);
    }
this seems *very inefficient* compared to only changing the constraint.
You can't simply change the constraint if the current value does not conform to it. You have to check that and possibly copy the value and the constraint, call the error policy for them and assign them back. In the most common cases this seems not much more efficient than copying the whole object and swapping it (considering the typical case when one or both of the policies are empty or have a trivial copy). Having said that, I agree with you on this point -- implementing change_constraint as a member might allow for a more efficient implementation in some cases. This is an argument that might convince me to make it a member and I will investigate this possibility. But in general I still prefer writing non-members if there's no apparent reason why not to do so.
(b) The second claim is that free-standing functions break apart monolithic classes. That may be, but you don't have a monolithic class (you're not even close).
So how many functions that are unnecessarily members must a class have before we call it monolithic and un-member those functions? :P
And then finally my point, that the boost namespace is polluted with names, which makes it near impossible to find the right function, and so, by making it a member that task is trivial.
Note that this function is in the constrained_value namespace and does not pollute the boost namespace. It is not that hard to find, and your editor may also help when you type '::' (although I agree that typing an object's name and '.' may be slightly more convenient).
In conclusion: making something a non-member is rarely a good idea.
In contrast: making everything a member is rarely a good idea. ;-) Best regards, Robert

Robert Kawulak skrev:
Hi Thorsten,
2008/12/5 Thorsten Ottosen <thorsten.ottosen@dezide.com>:
It has been advocated in other contexts as well, e.g. by Scott Meyers. The author of the Bimap library was, initially, against it, citing some of the same sources as you do. He ended up being very happy with the members IIRC.
The discussion in [*] and other places is not great IMO.
Apparently there are groups of people that prefer either of the approaches, but I wouldn't like the review to become an ideological discussion on general style guidelines like this one.
Well, sometimes that is needed too.
    template <typename V, typename C, typename E, typename T>
    void change_constraint(constrained<V, C, E> & c, const T & new_constraint)
    {
        constrained<V, C, E> tmp(c.value(), new_constraint, c.error_handler());
        c.swap(tmp);
    }
this seems *very inefficient* compared to only changing the constraint.
You can't simply change the constraint if the current value does not conform to it. You have to check that and possibly copy the value and the constraint, call the error policy for them and assign them back. In the most common cases this seems not much more efficient than copying the whole object and swapping it (considering the typical case when one or both of the policies are empty or have a trivial copy).
Having said that, I agree with you on this point -- implementing change_constraint as a member might allow for a more efficient implementation in some cases. This is an argument that might convince me to make it a member and I will investigate this possibility.
It seems to me that it must be possible to check the new constraint without creating a new object. Often T is int or some other small built-in type, but I guess someone might use the library with a heavier type.
But in general I still prefer writing non-members if there's no apparent reason why not to do so.
(b) The second claim is that free-standing functions break apart monolithic classes. That may be, but you don't have a monolithic class (you're not even close).
So how many functions that are unnecessarily members must a class have before we call it monolithic and un-member those functions? :P
The classical example is std::string, which is also what Sutter and Alexandrescu give as an example. But there the problem is that std::string is reimplementing generic algorithms. You don't have a monolithic class. Just like std::vector is not monolithic because erase is a member and not a free-standing function. This is another issue: functions that change the invariant-defining state of objects should be members.
And then finally my point, that the boost namespace is polluted with names, which makes it near impossible to find the right function, and so, by making it a member that task is trivial.
Note that this function is in constrained_value namespace and does not pollute boost namespace.
Ok.
It is not that hard to find it, your editor may also help when you type '::' (although I agree that typing an object's name and '.' may be slightly more convenient).
In conclusion: making something a non-member is rarely a good idea.
In contrast: making everything a member is rarely a good idea. ;-)
Not when you're modifying state, nor when you want to make the interface easy to use. I'll give a good example of when a member should not have been added. Again, let's look at the (new) C++ standard library: they have added cbegin() and cend() members to all containers so that one can get a const_iterator from a mutable object. Instead, two generic function templates could have provided the same functionality with an O(1) coding effort vs. the chosen O(n) coding effort. -Thorsten
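As a sketch of the alternative Thorsten describes (illustrative, pre-C++11 style):

    // Written once, these work for every standard container:
    template <typename Container>
    typename Container::const_iterator cbegin(const Container & c)
    {
        return c.begin();  // the const overload yields a const_iterator
    }

    template <typename Container>
    typename Container::const_iterator cend(const Container & c)
    {
        return c.end();
    }

    // usage: given std::vector<int> v, cbegin(v) yields a
    // const_iterator even though v is mutable.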

From: Thorsten Ottosen
this seems *very inefficient* compared to only changing the constraint.
You can't simply change the constraint if the current value does not conform to it. You have to check that and possibly copy the value and the constraint, call the error policy for them and assign them back. In the most common cases this seems not much more efficient than copying the whole object and swapping it (considering the typical case when one or both of the policies are empty or have a trivial copy).
Having said that, I agree with you on this point -- implementing change_constraint as a member might allow for a more efficient implementation in some cases. This is an argument that might convince me to make it a member and I will investigate this possibility.
It seems to me that it must be possible to check the new constraint without creating a new object.
Yes, it is possible to check the constraint without this. The point is what happens if the constraint is not valid. Error policy may need to modify the constraint, so it has to be copied (the argument of change_constraint cannot be modified). Best regards, Robert

Robert Kawulak skrev:
From: Thorsten Ottosen
this seems *very inefficient* compared to only changing the constraint. You can't simply change the constraint if the current value does not conform to it. You have to check that and possibly copy the value and the constraint, call the error policy for them and assign them back. In the most common cases this seems not much more efficient than copying the whole object and swapping it (considering the typical case when one or both of the policies are empty or have a trivial copy).
Having said that, I agree with you on this point -- implementing change_constraint as a member might allow for a more efficient implementation in some cases. This is an argument that might convince me to make it a member and I will investigate this possibility. It seems to me that it must be possible to check the new constraint without creating a new object.
Yes, it is possible to check the constraint without this. The point is what happens if the constraint is not valid.
? Well, I would expect nothing to happen.
Error policy may need to modify the constraint, so it has to be copied (the argument of change_constraint cannot be modified).
Well, yes, but does that justify copying *also* the value and the error handler? I'm also wondering if it is a good idea to let the error policy change the constraint. Why is that useful? And will it not mean we might pay extra for all the calls to the policy where we do not need to change the constraint in the policy? -Thorsten

From: Thorsten Ottosen
Yes, it is possible to check the constraint without this. The point is what happens if the constraint is not valid.
? Well, I would expect nothing to happen.
I would expect the error policy to be invoked. If the user decides (by selecting an appropriate policy) that an exception should be thrown whenever an operation would cause the value to become invalid, then this should also apply to constraint modification. I would be very surprised if the operation were simply ignored.
Error policy may need to modify the constraint, so it has to be copied (the argument of change_constraint cannot be modified).
Well, yes, but does that justify copying *also* the value and the error handler?
I suspect yes for the value (to ensure strong exception guarantee), but not necessarily for the error handler.
I'm also wondering if it is a good idea to let the error_policy change the constraint. Why is that useful?
We need to pass the constraint to the error policy anyway (see below). Letting it modify the constraint is safe and does not cost anything, while allowing for some more sophisticated error-handling logic (the bounded object with memory from the docs being an extreme example).
And will it not mean we might pay extra for all the calls to the policy where we do not need to change the constraint in the policy?
In some cases the error policy needs to access the constraint even if it does not need to modify it. For example, the wrap policy needs to query the bounds to perform the modulo arithmetic operations. Best regards, Robert
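For illustration, the modulo logic a wrap policy relies on looks roughly like this (a sketch for ints, ignoring overflow; not the library's code):

    // Wrap v into the inclusive range [lo, hi].
    int wrap_into(int v, int lo, int hi)
    {
        const int n = hi - lo + 1;  // size of the range
        int r = (v - lo) % n;       // in C++, % may yield a negative result here
        if (r < 0)
            r += n;
        return lo + r;
    }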

----- Original Message ----- From: "Robert Kawulak" <robert.kawulak@gmail.com> To: <boost@lists.boost.org> Sent: Tuesday, December 09, 2008 1:11 AM Subject: Re: [boost] [review][constrained_value] Review of Constrained ValueLibrary begins today
From: Thorsten Ottosen
Yes, it is possible to check the constraint without this. The point is what happens if the constraint is not valid.
? Well, I would expect nothing to happen.
I would expect the error policy to be invoked. If the user decides (by selecting an appropriate policy) that an exception should be thrown whenever an operation would cause the value to become invalid, then this should also apply to constraint modification. I would be very surprised if the operation were simply ignored.
Error policy may need to modify the constraint, so it has to be copied (the argument of change_constraint cannot be modified).
Well, yes, but does that justify copying *also* the value and the error handler?
I suspect yes for the value (to ensure strong exception guarantee), but not necessarily for the error handler.
I'm also wondering if it is a good idea to let the error_policy change the constraint. Why is that useful?
We need to pass the constraint to the error policy anyway (see below). Letting it modify the constraint is safe and does not cost anything, while allowing for some more sophisticated error-handling logic (the bounded object with memory from the docs being an extreme example).
And will it not mean we might pay extra for all the calls to the policy where we do not need to change the constraint in the policy?
In some cases the error policy needs to access the constraint even if it does not need to modify it. For example, the wrap policy needs to query the bounds to perform the modulo arithmetic operations.
Hi,

I find that the extreme example "bounded object with memory" is not a good example of constrained_value, because it is not constrained at all. It is an example of what can be done with the current interface, but that shouldn't be able to be done with a constrained value. I don't understand why an error policy can modify the value or the constraint. I would prefer an error policy taking only two in-parameters (by value or const&): the new value and the constraint.

The wrapping and clipping classes are not exactly constrained values; they are adapting a value of a type to a constrained value. So I would prefer to have a different class for them, taking a constraint adaptor that would take the value by reference and adapt it to the constraint. For the user this does not change anything; the classes bounded, wrapping and clipping would provide the same semantics as now.

With respect to monitored values, I think it is better to have a separate class. A monitored value does not have error handling per se. We can monitor values that are constrained or not. "Bounded object with memory" could be better considered as a monitored value.

Best,
Vicente

vicente.botet skrev:
----- Original Message ----- From: "Robert Kawulak" <robert.kawulak@gmail.com>
In some cases the error policy needs to access the constraint even if it does not need to modify it. For example, the wrap policy needs to query the bounds to perform the modulo arithmetic operations.
Hi,
I find that the extreme example "bounded object with memory" is not a good example of constrained_value, because it is not constrained at all. It is an example of what can be done with the current interface, but that shouldn't be able to be done with a constrained value. I don't understand why an error policy can modify the value or the constraint. I would prefer an error policy taking only two in-parameters (by value or const&): the new value and the constraint.
The wrapping and clipping classes are not exactly constrained values; they are adapting a value of a type to a constrained value. So I would prefer to have a different class for them, taking a constraint adaptor that would take the value by reference and adapt it to the constraint.
I agree. This seems like a better division of responsibility. -Thorsten

From: vicente.botet (I have slightly changed the order of citations.)
I don't understand why an error policy can modify the value or the constraint.
A constrained object treats each error as possibly recoverable, and performing the recovery is the task of the error policy. Therefore the error policy is responsible for leaving the constrained object in a valid state. To allow for this, an error policy must be able to adjust at least the value. And I see no reason to disallow changing the constraint too. If something:

* is not potentially dangerous,
* costs nothing,
* does not add extra complexity for the normal usage,
* creates the opportunity to use the library in some new, creative way, if somebody wishes to,

then why should it be banned?
The wrapping and clipping classes are not exactly constrained values; they are adapting a value of a type to a constrained value. So I would prefer to have a different class for them, taking a constraint adaptor that would take the value by reference and adapt it to the constraint.
So, having the adaptor, why should its underlying value be a constrained object at all? If the adaptor adjusts the value so that it always meets the condition, then the error policy of the constrained object would not be used anyway.
I find that the extreme example "bounded object with memory" is not a good example of constrained_value, because it is not constrained at all.
This is why I called it an "extreme" example. :P
"bounded object with memory" could be better considerd as a monitored value.
I agree. Best regards, Robert

----- Original Message ----- From: "Robert Kawulak" <robert.kawulak@gmail.com> To: <boost@lists.boost.org> Sent: Wednesday, December 10, 2008 2:20 AM Subject: Re: [boost] [review][constrained_value] Review ofConstrainedValueLibrary begins today
From: vicente.botet (I have slightly changed the order of citations.)
I don't understand why an error policy can modify the value or the constraint.
A constrained object treats each error as possibly recoverable, and performing the recovery is the task of the error policy. Therefore the error policy is responsible for leaving the constrained object in a valid state. To allow for this, an error policy must be able to adjust at least the value.
Well, at least you should name it RecoverableErrorPolicy.
And I see no reason to disallow changing the constraint too. If something:

* is not potentially dangerous,
* costs nothing,
* does not add extra complexity for the normal usage,
* creates the opportunity to use the library in some new, creative way, if somebody wishes to,

then why should it be banned?
Well, the error policy function follows this prototype:

    template <typename V, typename C>
    void operator () (const V &, const V &, const C &) const;

while this one could be enough:

    template <typename V>
    void operator () (const V &) const;

I'm really getting convinced that the base type of the constrained class should be monitored, and the prototype for a Monitor function should be:

    template <typename V>
    void operator () (V &) const;

In this way, the constrained class could be a specialization. We see that the clipping and wrapping classes can be implemented in a more efficient way by defining a Monitor than by defining a Constraint and an ErrorPolicy. So we don't need to have an ErrorPolicy prototype

    template <typename V, typename C>
    void operator () (const V &, const V &, const C &) const;

but

    template <typename V>
    void operator () (const V &) const;

Your extreme example would be a monitored specialization.
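A sketch of the Monitor idea Vicente describes, fitting the value in place (hypothetical names, not part of the reviewed library):

    // Clips the value into [lo, hi], matching the
    // "void operator () (V &) const" Monitor shape above.
    struct clip_to_range {
        int lo, hi;
        void operator () (int & v) const
        {
            if (v < lo) v = lo;
            else if (v > hi) v = hi;
        }
    };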
The wrapping and clipping classes are not exactly constrained values; they are adapting a value of a type to a constrained value. So I would prefer to have a different class for them, taking a constraint adaptor that would take the value by reference and adapt it to the constraint.
So, having the adaptor, why should its underlying value be a constrained object at all?
I have not said that. The underlying type could be any value type.
If the adaptor adjusts the value so it always meets the condition, then the error policy of the constrained object would not be used anyway.
Exactly. A constrained_adapted should not have an error policy. The constraint adaptor must fit the value to the constraint. Replace constrained_adapted by monitored if you want.

Best,
Vicente

From: vicente.botet
And I see no reason to disallow changing the constraint too. If something:

* is not potentially dangerous,
* costs nothing,
* does not add extra complexity for the normal usage,
* creates the opportunity to use the library in some new, creative way, if somebody wishes to,

then why should it be banned?
Well, the error policy function follows this prototype:

    template <typename V, typename C>
    void operator () (const V &, const V &, const C &) const;

while this one could be enough:

    template <typename V>
    void operator () (const V &) const;
The question I answered here was why the error policy can modify the constraint, while now you're jumping to why the error policy takes the constraint as an argument. This has already been answered in my earlier post.
I'm really getting convinced that the base type of the constrained class should be monitored, and the prototype for a Monitor function should be:

    template <typename V>
    void operator () (V &) const;
Did you do any research of possible use cases? If I were to design a monitored value class, I'd consider passing both the old and the new value to allow for more general usage. For example, the monitor function would be able to log somewhere how much the value has changed, or prevent an invalid state transition in a model of a finite state machine.
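A sketch of the richer monitor signature Robert has in mind here (hypothetical, for illustration only):

    #include <iostream>

    // Receives both the old and the candidate new value, so it can
    // log the change or veto an invalid state transition.
    struct logging_monitor {
        template <typename V>
        void operator () (const V & old_value, const V & new_value) const
        {
            std::clog << "change: " << old_value
                      << " -> " << new_value << '\n';
        }
    };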
We see that the clipping and wrapping classes can be implemented in a more efficient way by defining a Monitor than by defining a Constraint and an ErrorPolicy.
Maybe. But given the optimisation capabilities of compilers, I would say: not much more. I have been compiling code snippets using wrapping<int> (with dynamic bounds) to assembly code, and the result was as if the code had been written by hand in an optimal way.
The wrapping and clipping classes are not exactly constrained values
It depends how you define a constrained value. The design of this library assumes that a constrained object is an object with a value conforming to a specified constraint (so a constrained object guarantees that its value belongs to the specified subset of values of the underlying type). Can you show that this definition is inappropriate, or that wrapping/clipping objects don't fit it? The fact that a value is constrained is a contract, and throwing or adjusting the value are examples of methods (policies) guaranteeing this contract.
, they are adapting a value of a type to a constrained value. So I would prefer to have a different class for them, taking a constraint adaptor that would take the value by reference and adapt it to the constraint.
So, having the adaptor, why should its underlying value be a constrained object at all?
I have not said that.
I thought this is what you meant by saying: "they are adapting a value of a type to a constrained value". Best regards, Robert

Robert Kawulak skrev:
From: Thorsten Ottosen
Yes, it is possible to check the constraint without this. The point is what happens if the constraint is not valid. ? Well, I would expect nothing to happen.
I would expect the error policy to be invoked. If the user decides (by selecting an appropriate policy) that an exception should be thrown whenever an operation would cause the value to become invalid, then this should also apply to constraint modification. I would be very surprised if the operation were simply ignored.
I was unintentionally unclear. I meant that the held value should be unchanged.
Error policy may need to modify the constraint, so it has to be copied (the argument of change_constraint cannot be modified). Well, yes, but does that justify copying *also* the value and the error handler?
I suspect yes for the value (to ensure strong exception guarantee), but not necessarily for the error handler.
This seems a little ad hoc to me. What are the requirements on the constraint? Is it unrealistic to demand that it has a no-throw swap? If not, then do it, and give change_constraint the strong guarantee, as one would expect.
I'm also wondering if it is a good idea to let the error_policy change the constraint. Why is that useful?
We need to pass the constraint to the error policy anyway (see below). Letting it modify the constraint is safe and does not cost anything, while allowing for some more sophisticated error-handling logic (the bounded object with memory from the docs being an extreme example).
And will it not mean we might pay extra for all the calls to the policy where we do not need to change the constraint in the policy?
In some cases the error policy needs to access the constraint even if it does not need to modify it. For example, the wrap policy needs to query the bounds to perform the modulo arithmetic operations.
I guess my concern here is that the error policy is more than just an error policy. This suggests that another *orthogonal* concept is hidden in there somewhere. Thorsten

From: Thorsten Ottosen
Error policy may need to modify the constraint, so it has to be copied (the argument of change_constraint cannot be modified). Well, yes, but does that justify copying *also* the value and the error handler?
I suspect yes for the value (to ensure strong exception guarantee), but not necessarily for the error handler.
This seems a little ad hoc to me. What are the requirements on the constraint? Is it unrealistic to demand that it has a no-throw swap?
You're right, I didn't think of the possibility of swapping the constraint instead of assignment.
In some cases the error policy needs to access the constraint even if it does not need to modify it. For example, the wrap policy needs to query the bounds to perform the modulo arithmetic operations.
I guess my concern here is that the error policy is more than just an error policy.
Yes, it is. It is a recoverable error policy. Best regards, Robert

"Why C++'s floating point types shouldn't be used with bounded objects? [...] there exist a number of myths of what IEEE-compliance really entails from the point of view of program semantics. We shall discuss the following myths, among others: [...] "If x < 1 tests true at one point, then x < 1 stays true later if I never modify x."
I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?
I also don't like this, but this is the reality (see http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.18 too). It's very important to realise that floating-point operations may not be repeatable. Unfortunately, to be able to have a reliable constrained object, the comparison of two values *must* be repeatable.
I've heard of libraries for "reliable floating-point computations" and of compiler switches that turn some surprising floating-point optimisations off, but I still don't feel competent enough in this respect to find a way that *guarantees* proper working of constrained objects. If somebody does -- you are welcome to help. ;-)
How about, floating point is allowed, but the results may sometimes be surprising? We already accept this sort of floating point behavior in many other situations. So, bounded_float should be allowed, but buyer should beware.

Hi, 2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced... Regards, Robert

On Fri, Dec 5, 2008 at 8:07 AM, Robert Kawulak <robert.kawulak@gmail.com> wrote:
Hi,
2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced...
You currently require that the condition should not "spontaneously" change. If you also allow conditions that can spontaneously change from unsatisfied to satisfied (but not the other way around), your guarantee/invariant is no weaker, but you allow the possibility of triggering the error policy in cases where the condition is initially unsatisfied but would eventually spontaneously change to satisfied.

You then provide a continuum of behavior. On one extreme, a condition that never spontaneously changes will trigger the policy exactly when the condition is broken. This is the behavior that your library provides right now. In the middle, a condition that can spontaneously change from unsatisfied to satisfied can trigger in "grey zone" conditions determined by the specifics of the condition. On the other end, a condition always initially reports that it is not satisfied even though it will always eventually spontaneously become satisfied (e.g., the condition that x > 0 seconds has elapsed on a timer that starts timing on construction).

A solution for the floating point problem can then be provided (as I think has been suggested earlier in this thread, using a platform-specific delta value to adjust the comparisons, where delta is greater than the amount by which a floating point value can spontaneously change). This would fit into the library as a "grey zone" constraint, where the magnitude of the grey zone is determined by the delta, and its usefulness by how accurate delta is.

Stjepan
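A sketch of the delta-adjusted comparison Stjepan describes (hypothetical; delta would have to exceed the platform's possible spontaneous drift):

    // Conservative bounds test: only accept values inside the range
    // by at least delta, so later FP drift cannot push a value that
    // tested valid back outside the range.
    bool surely_within(double x, double lo, double hi, double delta)
    {
        return (x >= lo + delta) && (x <= hi - delta);
    }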

From: Robert Kawulak 2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced...
Moreover, isn't your proposition actually similar to the current state? Technically, there are no limitations in the library to prevent you from defining bounded<float> ("floating point is allowed"). But the advice in the documentation ("don't use built-in floating point types with this library (until you really know what you're doing)") can be interpreted as "the results may sometimes be surprising". Best regards, Robert

Robert Kawulak wrote:
From: Robert Kawulak 2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced...
Moreover, isn't your proposition actually similar to the current state? Technically, there are no limitations in the library to prevent you from defining bounded<float> ("floating point is allowed"). But the advice in the documentation ("don't use built-in floating point types with this library (until you really know what you're doing)") can be interpreted as "the results may sometimes be surprising".
Yes. I don't suggest changing anything. If you want to use float, you can, subject to the limitations we've already discussed. But, the limitations on floating point comparisons are/were always there, and those who care should already be aware of them. All that happens, I think, is that when you assign a result to the constrained float type, it may or may not violate the constraint if it is very close to the limit. Sounds to me that it's still a useful construct.

From: Neal Becker
But, the limitations on floating point comparisons are/were always there, and those who care should already be aware of them.
Even discussions during this review show how people are *not* aware of the limitations and are very surprised to learn about them. I think this lack of awareness is common because the behaviour of FP is far from what one could call "normal" or intuitive.
All that happens, I think, is that when you assign a result to the constrained float type, it may or may not violate the constraint if it is very close to the limit. Sounds to me that it's still a useful construct.
OK, I'll explain this in the docs. Best regards, Robert

Neal Becker skrev:
Robert Kawulak wrote:
From: Robert Kawulak 2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced... Moreover, isn't your proposition actually similar to the current state? Technically, there are no limitations in the library to prevent you from defining bounded<float> ("floating point is allowed"). But the advice in the documentation ("don't use built-in floating point types with this library (until you really know what you're doing)") can be interpreted as "the results may sometimes be surprising".
Yes. I don't suggest changing anything.
I totally disagree. People have to deal with floats anyway. That is a separate issue. The advice should be removed IMO, and bounded_float provided. -Thorsten

From: Thorsten Ottosen
Neal Becker skrev:
Robert Kawulak wrote:
From: Robert Kawulak 2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced... Moreover, isn't your proposition actually similar to the current state? Technically, there are no limitations in the library to prevent you from defining bounded<float> ("floating point is allowed"). But the advice in the documentation ("don't use built-in floating point types with this library (until you really know what you're doing)") can be interpreted as "the results may sometimes be surprising".
Yes. I don't suggest changing anything.
I totally disagree. People have to deal with floats anyway. That is a separate issue. The advice should be removed IMO, and bounded_float provided.
It should be provided, but Boost should first include some set of mechanisms to deal with the FP issues. They are too general to be implemented within this library and they are not tightly coupled with the concept of constrained types. I see this as an analogy to arithmetic overflow prevention, which is also too general and too orthogonal to this library. Best regards, Robert

Robert Kawulak wrote:
From: Thorsten Ottosen
Neal Becker skrev:
Robert Kawulak wrote:
From: Robert Kawulak 2008/12/5 Neal Becker <ndbecker2@gmail.com>:
How about, floating point is allowed, but the results may sometimes be surprising?
If the surprise consists of breaking the invariant, I'm not convinced... Moreover, isn't your proposition actually similar to the current state? Technically, there are no limitations in the library to prevent you from defining bounded<float> ("floating point is allowed"). But the advice in the documentation ("don't use built-in floating point types with this library (until you really know what you're doing)") can be interpreted as "the results may sometimes be surprising".
Yes. I don't suggest changing anything. I totally disagree. People have to deal with floats anyway. That is a separate issue. The advice should be removed IMO, and bounded_float provided.
It should be provided, but Boost should first include some set of mechanisms to deal with the FP issues. They are too general to be implemented within this library and they are not tightly coupled with the concept of constrained types. I see this as an analogy to arithmetic overflow prevention, which is also too general and too orthogonal to this library.
Best regards, Robert
A pragmatic solution might be to provide constrained floating point values, and implement them the naive way, as if this issue did not exist. Then an application developer who is using the library has two options:
1. He can say: I don't care, the probability that something unexpected happens is probably less than the probability of being hit by a meteor while driving to work.
2. He may say: I do care, I want 100% correctness. Then he can probably find some compiler flags that ensure that this problem will not happen.
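Such flags do exist; for example, GCC's -ffloat-store keeps floating-point variables out of the extended-precision registers, at some cost in speed (other compilers have similar precision-related switches; check your compiler's documentation):

    g++ -ffloat-store myprog.cpp

--Johan Råde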

Thorsten Ottosen wrote:
6. My main use case would be to use constrained floating point values, but the docs state:
"Why C++'s floating point types shouldn't be used with bounded objects? [...] there exist a number of myths of what IEEE-compliance really entails from the point of view of program semantics. We shall discuss the following myths, among others: [...] "If x < 1 tests true at one point, then x < 1 stays true later if I never modify x."
I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?
With difficulty :-( Here's the thought experiment I came up with to verify that this is a real issue:
* Imagine that the constraint is that the value is > 1, and that we're working with doubles.
* Imagine that the value being assigned is the result of some complex computation, and that the assignment function is inlined.
* The compiler may now take the value being assigned as it exists in a register (from the computation), and perform the check on that.
* If the register containing the result is wider than a double (say Intel's 80-bit long double), and the result is very slightly > 1, then the comparison will succeed.
* The value is now stored in the object - and rounded down to double precision in the process.
* The value may have been rounded down to exactly 1, and the constraint is now broken!
Obviously if the constraint had been >= 1 then we'd be OK in this case (but then values very slightly < 1 may get rounded up, so we could erroneously reject a value). I can't think of any worse situations than these at present - which doesn't mean they don't exist - so whether or not you consider this an issue may depend on the use case. Presumably, a carefully written predicate that forces any rounding to occur prior to constraint checking would fix the issue, and I 100% agree that use with floating-point types is a very important use case.
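In code, the scenario is roughly this (a hypothetical sketch; whether it actually misbehaves depends on the compiler, on flags such as GCC's -ffloat-store, and on the target FPU):

    bool above_one(double v) { return v > 1.0; }  // may see the value at register width

    double stored;

    void assign(double result)     // 'result' may carry excess 80-bit precision
    {
        if (above_one(result))     // passes at extended precision...
            stored = result;       // ...but the store rounds to a 64-bit double,
    }                              // possibly to exactly 1.0: invariant broken

John.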

John Maddock skrev:
Thorsten Ottosen wrote:
6. My main use case would be to use constrained floating point values, but the docs state:
"Why C++'s floating point types shouldn't be used with bounded objects? [...] there exist a number of myths of what IEEE-compliance really entails from the point of view of program semantics. We shall discuss the following myths, among others: [...] "If x < 1 tests true at one point, then x < 1 stays true later if I never modify x."
I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?
With difficulty :-(
Here's the thought experiment I came up with to verify that this is a real issue:
* Imagine that the constraint is that the value is > 1, and that we're working with doubles.
* Imagine that the value being assigned is the result of some complex computation, and that the assignment function is inlined.
* The compiler may now take the value being assigned as it exists in a register (from the computation), and perform the check on that.
* If the register containing the result is wider than a double (say Intel's 80-bit long double), and the result is very slightly > 1, then the comparison will succeed.
* The value is now stored in the object - and rounded down to double precision in the process.
* The value may have been rounded down to exactly 1, and the constraint is now broken!
Obviously if the constraint had been >= 1 then we'd be OK in this case (but then values very slightly < 1 may get rounded up, so we could erroneously reject a value).
I can't think of any worse situations than these at present - which doesn't mean they don't exist - so whether or not you consider this an issue may depend on the use case.
Presumably, a carefully written predicate that forces any rounding to occur prior to constraint checking would fix the issue, and I 100% agree that use with floating-point types is a very important use case.
I guess p. 34 (Intervals) provides some hints as to how we write such a careful predicate. If I understand this correctly, then we should at least use bounds that are exactly representable in the type involved. For example, to create a bound for probabilities we can use 0 (which can be exactly represented), and then for 1 we must use the nearest number larger than 1 representable in the type (e.g. float).
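The standard library can compute such a bound, e.g. (a sketch, assuming a C99-style nextafterf is available):

    #include <math.h>

    // nearest representable float strictly greater than 1.0f -- the kind of
    // exactly representable bound suggested above (C99 nextafterf):
    const float just_above_one = nextafterf(1.0f, 2.0f);

-Thorsten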

From: Thorsten Ottosen I guess p. 34 (Intervals) provides some hints as to how we write such a careful predicate. If I understand this correctly, then we should at least use bounds that are exactly representable in the type involved.
I'm afraid the bounds are not enough, the values would also have to have exact representation. But the section indeed provides a hint -- maybe the problem could be somehow solved if we had a function float exact(float) that, given a floating point value (that may have greater precision because of caching in a register), returns a value that is truncated (has exactly the precision of float, not greater). Does it sound sensible? Anyway, I think the solution to reliable FP arithmetic is too general to make it a part of this library. This should be addressed by a dedicated library, and then the Constrained Value library could make use of it. Best regards, Robert

Robert Kawulak skrev:
From: Thorsten Ottosen I guess p. 34 (Intervals) provides some hints as to how we write such a careful predicate. If I understand this correctly, then we should at least use bounds that are exactly representable in the type involved.
I'm afraid the bounds are not enough, the values would also have to have exact representation. But the section indeed provides a hint -- maybe the problem could be somehow solved if we have a function float exact(float) that, given a floating point value (that may have greater precision because of caching in a register), returns a value that is truncated (has exactly the precision of float, not greater). Does it sound sensible?
Maybe.
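For instance, something along these lines might do (a sketch only; whether a round-trip through a volatile object reliably discards excess register precision is compiler- and platform-dependent):

    inline float exact(float x)
    {
        volatile float rounded = x;  // the store rounds to genuine float precision
        return rounded;              // the reload carries no excess precision
    }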
Anyway, I think the solution to reliable FP arithmetic is too general to make it a part of this library. This should be addressed by a dedicated library, and then the Constrained Value library could make use of it.
I've been thinking about my use cases, and I think I mostly want it in the interface and rarely internally in the representation of classes. For example, I might say

typedef bounded_float<double,0x?????,0x?????> Cost;
Cost& Foo::cost();
Cost Foo::cost() const;

Now, it might also be useful to be able to specify that numbers that are "close" (defined by the user) to the bounds should be rounded to the bounds, but I think that was already possible in your library. Right? So I think my conclusion is the following: the fact that floating point calculations are not easily portable is not an argument against having constrained values of floats; if anything, it is an argument for having them, because the library makes it easier to detect/respond to such portability problems. -Thorsten

From: Thorsten Ottosen
Now, it might also be useful to be able to specify that numbers that are "close" (defined by the user) to the bounds should be rounded to the bounds, but I think that was already possible in your library. Right?
Yes, although not out of the box. You'd have to write a simple error policy that does the rounding.
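A sketch of the core of such a policy (names purely illustrative; the actual error-policy interface is the one described in the library's documentation):

    // snap a value lying within 'delta' of a bound onto that bound;
    // anything further outside is left to the normal error handling
    double snap_to_bounds(double v, double lo, double hi, double delta)
    {
        if (v < lo && lo - v <= delta) return lo;
        if (v > hi && v - hi <= delta) return hi;
        return v;
    }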
So I think my conclusion is the following: The fact that floating point calculations are not easily portable is not an argument against having constrained values of floats;
It's not portability that has been the main concern here. It is the very small set of guarantees given by the standard regarding FP calculations, which lacks some of the natural assumptions everybody makes when working with FP, as well as some of the guarantees that would make preserving the invariant of a constrained float an easy task. Best regards, Robert

Robert Kawulak skrev:
From: Thorsten Ottosen
So I think my conclusion is the following: The fact that floating point calculations are not easily portable is not an argument against having constrained values of floats;
It's not portability that has been the main concern here. It is the very small set of guarantees given by the standard regarding FP calculations, which lacks some of the natural assumptions everybody makes when working with FP, as well as some of the guarantees that would make preserving the invariant of a constrained float an easy task.
It's not the library that should take this decision for the users. The library does as much as it can to help the users. It's still better than the status quo. -Thorsten

From: Thorsten Ottosen
It's not portability that has been the main concern here. It is the very small set of guarantees given by the standard regarding FP calculations, which lacks some of the natural assumptions everybody makes when working with FP, as well as some of the guarantees that would make preserving the invariant of a constrained float an easy task.
It's not the library that should take this decision for the users. The library does as much as it can to help the users. It's still better than the status quo.
Sorry, I'm not sure if I understand your point here -- could you please rephrase this? Best regards, Robert

Robert Kawulak skrev:
From: Thorsten Ottosen
It's not portability that has been the main concern here. It is the very small set of guarantees given by the standard regarding FP calculations, which lacks some of the natural assumptions everybody makes when working with FP, as well as some of the guarantees that would make preserving the invariant of a constrained float an easy task. It's not the library that should take this decision for the users. The library does as much as it can to help the users. It's still better than the status quo.
Sorry, I'm not sure if I understand your point here -- could you please rephrase this?
There is nothing profound here. It's the same thing I've been trying to communicate for some time. The fact that floating points are surprising and tricky is not an argument against putting support for bounded_float in your library. Users deal with these problems in ad hoc ways (the status quo), and your library can help them with easier error-checking and clipping etc. -Thorsten

Thorsten Ottosen wrote: ...
There is nothing profound here. It's the same thing I've been trying to communicate for some time. The fact that floating points are surprising and tricky is not an argument against putting support for bounded_float in your library. Users deal with these problems in ad hoc ways (the status quo), and your library can help them with easier error-checking and clipping etc.
Yes, well stated. That's what I was trying to say.

From: Thorsten Ottosen
Sorry, I'm not sure if I understand your point here -- could you please rephrase this?
There is nothing profound here. It's the same thing I've been trying to communicate for some time. The fact that floating points are surprising and tricky is not an argument against putting support for bounded_float in your library. Users deal with these problems in ad hoc ways (the status quo), and your library can help them with easier error-checking and clipping etc.
OK, thanks -- now I get it, and after the discussions I agree with this. Best regards, Robert

John Maddock skrev:
Thorsten Ottosen wrote:
I don't fully understand this... AFAIK, this is not allowed for an IEEE-compliant implementation. How could we ever implement *anything* with respect to floats then?
With difficulty :-(
Here's the thought experiment I came up with to verify that this is a real issue:
* Imagine that the constraint is that the value is > 1, and that we're working with doubles.
* Imagine that the value being assigned is the result of some complex computation, and that the assignment function is inlined.
* The compiler may now take the value being assigned as it exists in a register (from the computation), and perform the check on that.
* If the register containing the result is wider than a double (say Intel's 80-bit long double), and the result is very slightly > 1, then the comparison will succeed.
* The value is now stored in the object - and rounded down to double precision in the process.
* The value may have been rounded down to exactly 1, and the constraint is now broken!
Obviously if the constraint had been >= 1 then we'd be OK in this case (but then values very slightly < 1 may get rounded up, so we could erroneously reject a value).
This might be quite OK. At least this preserves the invariant and therefore seems much better than the alternative. -Thorsten
participants (10)
- Jeff Garland
- Jeff Garland
- Johan Råde
- John Maddock
- Neal Becker
- Paul A. Bristow
- Robert Kawulak
- Stjepan Rajko
- Thorsten Ottosen
- vicente.botet