Is there interest in portable integer overflow detection, with policy based handling?

The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:

ignore_overflow
throw_overflow
assert_overflow
saturate_overflow
nan_overflow

The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow. Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).

Does this pique anyone's interest?

Thank you,
Ben Robinson
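To make the intended usage concrete, here is a minimal, self-contained sketch of a policy-based verified integer with a saturating policy. The class, policy struct, and member names below are simplified stand-ins for illustration, not the library's actual code:

```cpp
#include <cstdint>
#include <limits>

// Hypothetical saturating policy: on overflow, clamp to the type's range.
struct saturate_overflow {
    static std::uint8_t on_positive_overflow() {
        return std::numeric_limits<std::uint8_t>::max();
    }
    static std::uint8_t on_negative_overflow() {
        return std::numeric_limits<std::uint8_t>::min();
    }
};

// Hypothetical verified integer: behaves like uint8_t, but every addition
// is checked and the policy decides what happens on overflow.
template <class Policy>
class verified_uint8 {
public:
    explicit verified_uint8(std::uint8_t v) : value_(v) {}

    verified_uint8 operator+(std::uint8_t rhs) const {
        // Widen to detect overflow before narrowing back.
        unsigned wide = static_cast<unsigned>(value_) + rhs;
        if (wide > std::numeric_limits<std::uint8_t>::max()) {
            return verified_uint8(Policy::on_positive_overflow());
        }
        return verified_uint8(static_cast<std::uint8_t>(wide));
    }

    std::uint8_t get() const { return value_; }

private:
    std::uint8_t value_;
};
```

With this sketch, `verified_uint8<saturate_overflow>(250) + 50` saturates at 255 instead of silently wrapping, while in-range additions behave exactly like the builtin type.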

Le 21/02/12 07:04, Ben Robinson a écrit :
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
Hi,

Yes, I think this is interesting by itself. Adding a range is something you could consider, as proposed to the standard recently in "C++ Binary Fixed-Point Arithmetic" (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html). Note that this proposal has a wider scope. On my ToDo list I have the implementation of the fractional part, which I will constrain to ranges and resolutions that can be represented using a built-in integer type.

Best,
Vicente

Vicente J. Botet Escriba wrote:
Le 21/02/12 07:04, Ben Robinson a écrit :
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
Hi,
Yes I think this is interesting by itself. Adding a range is a thing that you could consider, as proposed to the standard recently "C++ Binary Fixed-Point Arithmetic" (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html). Note that this proposal has a wider scope.
On my ToDo list I have the implementation of the fractional part, which I will constrain to ranges and resolutions that can be represented using a built-in integer type.
Best, Vicente
Coincidentally, right now I'm working on the exact same thing.

Robert Ramey
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Tue, Feb 21, 2012 at 1:36 AM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Le 21/02/12 07:04, Ben Robinson a écrit :
The implementation is complete, with 5 policies created (and users can
always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
Hi,
Yes I think this is interesting by itself. Adding a range is a thing that you could consider, as proposed to the standard recently "C++ Binary Fixed-Point Arithmetic" (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html). Note that this proposal has a wider scope.
That paper was a good read. The author states twice that "pre-emptively checking for overflow is challenging and tedious." This library aims to simplify that task completely under the hood, provided you are using the integers supplied by the library instead of the builtin ones.
Ben Robinson
On my ToDo list I have the implementation of the fractional part, which I will constrain to ranges and resolutions that can be represented using a built-in integer type.
Best, Vicente

Hi,
Yes I think this is interesting by itself. Adding a range is a thing that you could consider...
The Boost Constrained Value Library is a very complete library for range limiting. It was accepted into Boost some time ago, and the author is putting the finishing touches on it before general release. That library, however, has no provision for overflow detection. My purpose was to create an integer data type library with built-in overflow detection, which would seamlessly integrate with the Constrained Value Library.

Thank you,
Ben Robinson
Best, Vicente

On Tue, Feb 21, 2012 at 4:58 AM, Toon Knapen <toon.knapen@gmail.com> wrote:
On Tue, Feb 21, 2012 at 7:04 AM, Ben Robinson <cppmaven@gmail.com> wrote:
<snip> Does this pique anyone's interest?
Can you elaborate on how this affects performance for integer arithmetic?
There are two components to the library: overflow detection, and overflow handling. Overflow handling is policy dependent, but overflow detection is built into the library, and is a key feature.
The overflow detection routines were metaprogrammed with respect to the data types involved in the operation, in order to maximize performance. Template specializations are provided for each detection routine, which eliminate unnecessary operations based on both the signedness and the ranges of the data types involved in the operation. For example, detect_addition_overflow performs only a single subtract and a single compare if both data types are unsigned. If both data types are signed, it will perform a maximum of one subtract and either two or three compares.

Because the detection routines are built into the overloaded math operators, using them does not require rewriting existing code. A developer could use these routines in a debug build, and then simply supply the ignore_overflow policy for a release build, which will inline out to zero overhead. Or even typedef to the library data types for debug, and typedef to PODs for release. The usage is exactly that of PODs. In short, performance was a key consideration in the design and implementation.

Thank you,
Ben Robinson
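The "single subtract and single compare" unsigned addition check described above can be sketched as follows (my reconstruction of the idea, not the library's actual detect_addition_overflow): a + b overflows exactly when b exceeds the distance from a to the type's maximum.

```cpp
#include <cstdint>
#include <limits>

// One subtract (max - a) and one compare: true iff a + b would wrap.
// Hypothetical routine name, fixed to uint32_t for illustration.
inline bool detect_addition_overflow_unsigned(std::uint32_t a, std::uint32_t b) {
    return b > std::numeric_limits<std::uint32_t>::max() - a;
}
```

Note that the subtraction itself cannot wrap, since `a` is never greater than the maximum.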
thanks, toon

Ben Robinson wrote:
On Tue, Feb 21, 2012 at 4:58 AM, Toon Knapen <toon.knapen@gmail.com> wrote:
Can you elaborate on how this affects performance for integer arithmetic?
The overflow detection routines were metaprogrammed with respect to the data types involved in the operation, in order to maximize performance. Template specializations are provided for each detection routine, which eliminate unnecessary operations based on both the signedness and the ranges of the data types involved in the operation.
For example, detect_addition_overflow performs only a single subtract and a single compare if both data types are unsigned. If both data types are signed, it will perform a maximum of one subtract, and either two or three compares.
I haven't seen any code, so I can't see exactly what you're doing, nor have I spent any cycles thinking about the details that would apply to what you've described. However, I wrote a library that does range-checked assignment in a new-style cast form (checked_cast). It never requires more than two comparisons (against the target's maximum and minimum values), and there's no subtraction. I use TMP to determine signedness and relative size to determine which comparisons are needed. Obviously, types with the same signedness only require underflow or overflow detection when the target type is smaller than the source type.

Thus, I'm concerned that your range tests are not as efficient as they could be. (Then again, what you've done might optimize better, for all I know. If so, be sure to document that in a Rationale section.)

_____
Rob Stewart
robert.stewart@sig.com
Software Engineer using std::disclaimer;
Dev Tools & Components
Susquehanna International Group, LLP
http://www.sig.com

I haven't seen any code, so I can't see exactly what you're doing, nor have I spent any cycles thinking about the details that would apply in what you've described. However, I wrote a library that does range checked assignment, in a new-style cast form (checked_cast). It never requires more than two comparisons (against the target's maximum and minimum values), and there's no subtraction. I use TMP to determine signedness and relative size to determine which comparisons are needed. Obviously, types with the same signedness only require underflow or overflow detection when the target type is smaller than the source type. Thus, I'm concerned that your range tests are not as efficient as they could be. (Then again, what you've done might optimize better, for all I know. If so, be sure to document that in a Rationale section.)
VerifiedInt contains the following four detection routines:

detect_overflow_impl_assignment
detect_overflow_impl_addition
detect_overflow_impl_subtraction
detect_overflow_impl_multiplication

Overflow (positive or negative) is not possible with division and modulus. The assignment detection routine performs exactly like yours, requiring between zero and two comparisons based on the signedness and relative sizes of the data types, using TMP. I will definitely create a Rationale section when I start producing the documentation.

Thank you,
Ben Robinson
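For illustration, one fixed instance of the assignment check just described might look like this. The routine name is hypothetical and the types are fixed for the example: narrowing a signed 32-bit value to uint8_t is the case that needs both bounds checks.

```cpp
#include <cstdint>
#include <limits>

// Two comparisons: below zero (negative overflow) or above uint8_t's max
// (positive overflow). Same-signedness, same-size pairs would need zero
// comparisons; the TMP in the library selects the minimal set per type pair.
inline bool detect_assignment_overflow(std::int32_t value) {
    return value < 0 ||
           value > static_cast<std::int32_t>(
                       std::numeric_limits<std::uint8_t>::max());
}
```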
_____ Rob Stewart robert.stewart@sig.com Software Engineer using std::disclaimer; Dev Tools & Components Susquehanna International Group, LLP http://www.sig.com

It would appear there is interest in this library. Therefore, I am making the full source available on GitHub here:

https://github.com/cppmaven/VerifiedInt

The library consists of three header files:

verified_int.hpp (the integer class)
verified_int_policies.hpp (the overflow handling policies)
detail/verified_int_overflow_detection.hpp (the TMP overflow detection routines)

In addition, you will notice that some of the hundreds of unit tests make use of the metaassert.hpp capability. The VerifiedInt library contains a number of static assertions. MetaAssert allows me to write unit tests which pass only if the static assertion fails. The trick is to convert what would be a compiler error into a runtime exception, and then detect the exception in the unit test.

Enjoy!
Ben Robinson
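The compile-error-to-exception trick can be sketched roughly as follows. This is a reconstruction of the concept only; the macro and exception names here (METAASSERT_SKETCH, metaassert_failure) are hypothetical, not the actual metaassert.hpp API:

```cpp
#include <stdexcept>

// Thrown where a static assertion would otherwise have stopped compilation.
struct metaassert_failure : std::logic_error {
    metaassert_failure()
        : std::logic_error("static assertion would have failed") {}
};

// Test-build variant of a static assertion: fail at run time by throwing,
// so an ordinary unit test can observe the failure.
#define METAASSERT_SKETCH(cond) \
    do { if (!(cond)) throw metaassert_failure(); } while (0)

// Helper so a test can observe the outcome as a boolean.
template <bool Cond>
bool metaassert_passes() {
    try {
        METAASSERT_SKETCH(Cond);
        return true;
    } catch (metaassert_failure const&) {
        return false;
    }
}
```

A unit test then simply asserts that the exception was thrown for the conditions that are supposed to be rejected, with no build-system involvement.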

Ben Robinson wrote:
It would appear there is interest in this library. Therefore, I am making the full source available on GitHub here:
https://github.com/cppmaven/VerifiedInt
The library consists of three header files:
verified_int.hpp (the integer class)
verified_int_policies.hpp (the overflow handling policies)
detail/verified_int_overflow_detection.hpp (the TMP overflow detection routines)
In addition, you will notice that some of the hundreds of unit tests make use of the metaassert.hpp capability. The VerifiedInt library contains a number of static assertions. MetaAssert allows me to write unit tests, which will pass only if the static assertion fails instead of passing. The trick is to convert what would be a compiler error into a runtime exception, and then detect the exception in the unit test.
Note that Boost testing handles this by detecting failure to compile as a legitimate, expected result. That is, there exists the concept that a test must fail to compile in order to be considered as passing the test.

Robert Ramey

Ben Robinson wrote:
It would appear there is interest in this library. Therefore, I am making the full source available on GitHub here:
https://github.com/cppmaven/VerifiedInt
In addition, you will notice that some of the hundreds of unit tests make use of the metaassert.hpp capability. The VerifiedInt library contains a number of static assertions. MetaAssert allows me to write unit tests, which will pass only if the static assertion fails instead of passing. The trick is to convert what would be a compiler error into a runtime exception, and then detect the exception in the unit test.
It's not immediately apparent to me how MetaAssert works, but it seems like if it's trapping a compiler error then it might be useful for detecting whether or not serialization will compile for a type.

Erik

On Wed, Feb 22, 2012 at 9:40 AM, Robert Ramey <ramey@rrsd.com> wrote:
Ben Robinson wrote:
It would appear there is interest in this library. Therefore, I am making the full source available on GitHub here:
https://github.com/cppmaven/VerifiedInt
The library consists of three header files:
verified_int.hpp (the integer class)
verified_int_policies.hpp (the overflow handling policies)
detail/verified_int_overflow_detection.hpp (the TMP overflow detection routines)
In addition, you will notice that some of the hundreds of unit tests make use of the metaassert.hpp capability. The VerifiedInt library contains a number of static assertions. MetaAssert allows me to write unit tests, which will pass only if the static assertion fails instead of passing. The trick is to convert what would be a compiler error into a runtime exception, and then detect the exception in the unit test.
Note that Boost testing handles this by detecting failure to compile as a legitimate, expected result. That is, there exists the concept that a test must fail to compile in order to be considered as passing the test.
It is always possible to write a build system that inverts the return code from the compiler, to make non-compiling source pass the build.

My solution using MetaAssert does not depend in any way on the build system, nor does it depend on any specific unit testing framework. The solution is implemented fully in portable source code, and allows you to unit test failing static assertions using whatever build system and unit testing framework you desire.

Thank you,
Ben Robinson
Robert Ramey

Le 22/02/12 16:56, Ben Robinson a écrit :
It would appear there is interest in this library. Therefore, I am making the full source available on GitHub here:
https://github.com/cppmaven/VerifiedInt
The library consists of three header files:
verified_int.hpp (the integer class)
verified_int_policies.hpp (the overflow handling policies)
detail/verified_int_overflow_detection.hpp (the TMP overflow detection routines)
In addition, you will notice that some of the hundreds of unit tests make use of the metaassert.hpp capability. The VerifiedInt library contains a number of static assertions. MetaAssert allows me to write unit tests, which will pass only if the static assertion fails instead of passing. The trick is to convert what would be a compiler error into a runtime exception, and then detect the exception in the unit test.
Hi,

I've found this use of METAASSERT that troubles my understanding:

    // Prevent implicit conversion from one verified type to another
    // on assignment via operator T() on the right-hand-side.
    template <class R>
    verified_int(R const prevented) {
        BOOST_METAASSERT_MSG(sizeof(R) == 0,
            CANNOT_CONSTRUCT_FROM_A_DIFFERENTLY_TYPED_VERIFIED_INT, (R));
    }

This doesn't prevent the implicit conversion, but makes it fail. This means that if I have a function f overloaded for verified_int<X> and a type ConvertibleFromR,

    void f(verified_int<X>);
    void f(ConvertibleFromR);

the call

    R r;
    f(r);

will result in ambiguous resolution.

I guess that as verified_int is templated on a policy, the detection mechanism should be public, so it should appear in a public file and not inside the detail directory.

I don't know if the rationale for having almost all the operation logic in each overflow policy is due to performance constraints. Have you considered an overflow policy that contains only the action to take when there is an overflow?

Best,
Vicente

On Thu, Feb 23, 2012 at 1:35 AM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Hi,
I've found this use of METAASSERT that trouble my understanding
    // Prevent implicit conversion from one verified type to another
    // on assignment via operator T() on the right-hand-side.
    template <class R>
    verified_int(R const prevented) {
        BOOST_METAASSERT_MSG(sizeof(R) == 0,
            CANNOT_CONSTRUCT_FROM_A_DIFFERENTLY_TYPED_VERIFIED_INT, (R));
    }
This doesn't prevent the implicit conversion, but makes it fail. This means that if I have a function f overloaded for verified_int<X> and a type ConvertibleFromR,
void f(verified_int<X>); void f(ConvertibleFromR);
the call
R r; f(r);
will result in ambiguous resolution.
I will explain the rationale behind the implementation choices of the binary math operators for VerifiedInt. When performing multiple operations (let's take addition for example), the order of operations is important. Compare:

    verified_uint8_t a = 250;
    verified_uint8_t b = a - 50 + 50;
    verified_uint8_t c = a + 50 - 50;

Clearly computing b will not cause an overflow, but computing c will. The order of operations is left to right. If the policy is to saturate, b will equal 250, and c will equal 205.

VerifiedInts are also implicitly convertible to their underlying data type. This presents a detection problem when the non-verified int is on the LHS of a math operation. Take:

    verified_uint8_t a = 250;
    verified_uint8_t b = 50 + a;

Since a is implicitly convertible to a uint8_t, the compiler will perform this conversion implicitly, and the result will be an undetected overflow. It is for this reason that I have supplied binary math operators where only the RHS is a verified_int, and have statically asserted them to cause a compiler error. If the user wants to write such an expression, they can statically cast 'a' to a uint8_t.
I guess that as verifier_int is templated with a policy, the detection mechanism should be public, and so it should appear in a public file and not inside the detail directory.
I can move the detection header file out of the detail directory, if that seems more logical. You are correct, policy authors will need to include and use it directly. I'll incorporate that feedback.
I don't know if the rationale for having almost all the operation logic in each overflow policy is due to performance constraints. Have you considered an overflow policy that contains only the action to take when there is an overflow?
I chose to allow the policy author to make a single function call to determine if overflow has occurred, and to place no restrictions on implementing the resulting behavior. The policy author not only has access to the type of overflow, but to both values in the case of a math operation, which would permit creating a periodic policy, for example. Or a policy could be created which only checks assignment, and has no run-time cost for the math operations, etc. If more structure is desired, I could provide it, but I felt the trade-off wasn't worth it, considering I simplified the detection logic to a single function call, provided in a single header.

Thank you,
Ben Robinson
Best, Vicente

Le 24/02/12 05:33, Ben Robinson a écrit :
On Thu, Feb 23, 2012 at 1:35 AM, Vicente J. Botet Escriba< vicente.botet@wanadoo.fr> wrote:
Hi,
I've found this use of METAASSERT that trouble my understanding
    // Prevent implicit conversion from one verified type to another
    // on assignment via operator T() on the right-hand-side.
    template <class R>
    verified_int(R const prevented) {
        BOOST_METAASSERT_MSG(sizeof(R) == 0,
            CANNOT_CONSTRUCT_FROM_A_DIFFERENTLY_TYPED_VERIFIED_INT, (R));
    }
This doesn't prevent the implicit conversion, but makes it fail. This mean that if I have a function f overloaded with verified_int<X> and type ConvertibleFromR
void f(verified_int<X>); void f(ConvertibleFromR);
the call
R r; f(r);
will result on ambiguous resolution.
I will explain the rationale behind the implementation choices of the binary math operators for VerifiedInt. When performing multiple operations (let's take addition for example), the order of operations is important. Compare:
verified_uint8_t a = 250;
verified_uint8_t b = a - 50 + 50;
verified_uint8_t c = a + 50 - 50;
Clearly computing b will not cause an overflow, but computing c will. The order of operations is left to right. If the policy is to saturate, b will equal 250, and c will equal 205.

Let's see how the built-ins behave:

    {
        unsigned char a = 250;
        unsigned char b = 50;
        unsigned char c = 50;
        unsigned char r = a - b + c;
        std::cout << "a-50+50 = " << int(r) << std::endl;
    }
    {
        unsigned char a = 250;
        unsigned char b = 50;
        unsigned char c = 50;
        unsigned char r = a + b - c;
        std::cout << "a+50-50 = " << int(r) << std::endl;
    }

The result is

    a-50+50 = 250
    a+50-50 = 250

(the unsigned char operands are promoted to int before the arithmetic, so the intermediate a + b does not wrap).
VerifiedInts are also implicitly convertible to their underlying data type. This presents a detection problem when the non-verified int is on the LHS of a math operation. Take:
verified_uint8_t a = 250;
verified_uint8_t b = 50 + a;
Since a is implicitly convertible to a uint8_t, the compiler will perform this conversion implicitly, and the result will be an undetected overflow.

I don't think so. 50 + a is 300, which will overflow when assigned to b.

It is for this reason that I have supplied binary math operators where only the RHS is a verified_int, and have statically asserted them to cause a compiler error. If the user wants to write such an expression, they can statically cast 'a' to a uint8_t.

I really think there is a design problem here. I, as a possible user, will not accept having to do this kind of cast.
I don't know if the rationale to have on each overflow policy almost all the operation logic is due to performance constraints. Have you considered an overflow policy that contains only the action to do when there is an overflow?
I chose to allow the policy author to make a single function call to determine if overflow has occured, and provide no restrictions on implementing the resulting behavior. The policy author not only has access to the type of overflow, but both values in the case of a math operation, which would permit creating a periodic policy for example. Or a policy could be created which only checks assignment, and has no run-time cost for the math operations, etc... If more structure is desired, I could provide it, but felt the trade-off wasn't worth it, considering I simplified the detection logic to a single function call, provided in a single header.
It is good to isolate the overflow detection logic. However, with your design, a user adding a new policy needs to define the overflow behavior for each one of the operations. Up to now your library manages four checked operations (add, subtract, multiply, divide) plus assignment, but what happens if you add (or worse yet, the user adds) an operation that needs to be checked? Should all the overflow policies be updated with this new checked operation? I was thinking, for example, of scaling (*2^n or *2^(-n)). This design seems not to scale well.

Have you tried to isolate the overflow actions in a policy following this interface?

    struct OverflowPolicy {
        template <typename T, typename U>
        T on_positive_overflow(U value);
        template <typename T, typename U>
        T on_negative_overflow(U value);
    };

The ignore policy will just return the value, the exception policy will throw the corresponding exception, saturate will return integer_traits<T>::const_max or integer_traits<T>::const_min, the assert policy will assert false and return the value, and a modulus policy will return the value modulo the range of T.

I would expect the compiler to optimize the following code

    if (detect_positive_overflow(new_val)) {
        value_ = new_val;
    } else if (detect_negative_overflow(new_val)) {
        value_ = new_val;
    } else {
        value_ = new_val;
    }

as if it were

    value_ = new_val;

but I have not checked it yet. If this optimization is confirmed, this design will scale and should perform as well as the built-in types. If it is not confirmed, that could be a good argument to choose an alternative design (maybe yours) that performs as well as the built-in types for the ignore policy. Maybe the overflow policy is not orthogonal enough, and the library should just work for a predefined set of overflow policies.

BTW, it is surprising the way a nan-verified_int<T> is introduced with the nan_overflow policy, as the policy is doing much more than checking for overflow. I don't see how the user can tell the difference between a nan integer and integer_traits<T>::const_max. It is curious that 0/0 doesn't result in nan. Note that I have never used a nan integer with fixed-size integers :(

Best,
Vicente

Ben Robinson wrote:
Overflow (positive and negative) are not possible with division and modulus.
hmmm - what happens when an unsigned integer is divided by a negative number? Doesn't this overflow or something? Robert Ramey

Ben Robinson wrote:
Overflow (positive and negative) are not possible with division and modulus.
hmmm - what happens when an unsigned integer is divided by a negative number? Doesn't this overflow or something?
A positive whole number divided by a negative whole number is a negative value which is smaller in magnitude than the original numerator. With integers, truncation of the fractional part may be involved, but this is not overflow. Integers can only overflow in the positive and negative direction. That is not the same as underflow, which floating point numbers can do.

Thank you,
Ben Robinson

On 22/02/12 18:40, Ben Robinson wrote:
On Wed, Feb 22, 2012 at 9:38 AM, Robert Ramey <ramey@rrsd.com> wrote:
Ben Robinson wrote:
Overflow (positive and negative) are not possible with division and modulus.
hmmm - what happens when an unsigned integer is divided by a negative number? Doesn't this overflow or something?
A positive whole number divided by a negative whole number is a negative value which is smaller in magnitude than the original numerator. With integers, truncation of the fractional part may be involved, but this is not overflow. Integers can only overflow in the positive and negative direction. That is not the same as underflow, which floating point numbers can do.
What about UINT_MAX / -1 or INT_MIN / -1? I believe they both overflow by your definition.

John Bytheway
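For reference, INT_MIN / -1 overflows because the mathematical result, -INT_MIN, is one greater than INT_MAX and so is not representable in two's complement; it is the one signed division that can overflow. A sketch of a detection routine for this case (hypothetical name, not the library's code):

```cpp
#include <limits>

// Signed division overflows only for the single pair (INT_MIN, -1),
// where the true quotient -INT_MIN exceeds INT_MAX.
inline bool detect_division_overflow(int numerator, int denominator) {
    return numerator == std::numeric_limits<int>::min() && denominator == -1;
}
```

(Division by zero is a separate error condition, not an overflow, and would be handled independently.)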

What about UINT_MAX / -1 or INT_MIN / -1? I believe they both overflow by your definition.
John Bytheway
You are absolutely correct. I am not sure how I missed that corner case, considering that I have the negation operator and multiplication by -1 accounted for. I will write additional unit tests and post a fix shortly. Providing valuable feedback is why I post to the Boost community.

Thank you,
Ben Robinson

Ben Robinson wrote:
On Wed, Feb 22, 2012 at 9:38 AM, Robert Ramey <ramey@rrsd.com> wrote:
Ben Robinson wrote:
Overflow (positive and negative) are not possible with division and modulus.
hmmm - what happens when an unsigned integer is divided by a negative number? Doesn't this overflow or something?
A positive whole number divided by a negative whole number is a negative value which is smaller in magnitude than the original numerator.
I am aware of this, but what is the result of this operation? To be correct it would have to be negative. But the numerator is unsigned. How is this resolved? To take a pathological case:

    unsigned int x = std::numeric_limits<unsigned int>::max();
    x = x / (-1);

Will this throw, fail at compile time, or what?

Robert Ramey

On Wed, Feb 22, 2012 at 11:37 AM, Robert Ramey <ramey@rrsd.com> wrote:
... How is this resolved? To take a pathological case:
    unsigned int x = std::numeric_limits<unsigned int>::max();
    x = x / (-1);
Will this throw? or fail at compile time, or what?
Robert Ramey
You are absolutely correct. I am not sure how I missed that corner case, considering that I have the negation operator and multiplication by -1 accounted for. I will write additional unit tests and post a fix shortly.
Providing valuable feedback is why I post to the Boost community. Thank you, Ben Robinson

A fix for VerifiedInt has been uploaded to the GitHub repository here:

https://github.com/cppmaven/VerifiedInt

In addition, I added separate exception types 'positive_overflow' and 'negative_overflow' to the policy which throws, so unit tests can verify the correct type of overflow. Continued feedback from the community is greatly appreciated.

Thank you,
Ben Robinson

Le 23/02/12 06:11, Ben Robinson a écrit :
On Wed, Feb 22, 2012 at 11:37 AM, Robert Ramey<ramey@rrsd.com> wrote:
... How is this resolved? To take a pathological case:
unsigned int x = std::numeric_limits<unsigned int>::max(); x = x / (-1);
Will this throw? or fail at compile time, or what?
Robert Ramey
You are absolutely correct. I am not sure how I missed that corner case, considering that I have the negation operator and multiplication by -1 accounted for. I will write additional unit tests and post a fix shortly.
I would expect the result type of dividing unsigned verified_int and signed verified_int to be signed verified_int. Couldn't this help to avoid the overflow on operator/()? Of course, if the user assigns a signed to an unsigned, overflow must be checked. I would also prefer that there is no implicit conversion from signed to unsigned verified_int. A specific cast could be used for this purpose.
unary minus operator on unsigned verified_int could also result on a signed verified_int. Just my 2 cts, Vicente

On Thu, Feb 23, 2012 at 5:59 AM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
I would expect the result type of dividing unsigned verified_int and signed verified_int to be signed verified_int. Couldn't this help to avoid the overflow on operator/()?
Of course, if the user assigns a signed to an unsigned, overflow must be checked. I would also prefer that there is no implicit conversion from signed to unsigned verified_int. A specific cast could be used for this purpose.
unary minus operator on unsigned verified_int could also result on a signed verified_int.
I agree with you on preventing implicit conversions. As far as the return types of math operations, the resulting type is always that of the LHS. The leftmost type in a sequence of binary math operations is used for the entire computation, as it is processed left to right.
My philosophy on verified_int is that overflow should be detected, not avoided. If the author needs a larger data type, then this library will catch that overflow, and the data types can be increased to accommodate the necessary ranges. I do mostly embedded development, and many data types are chosen to minimize space. This library was designed to add overflow detection only, and not implicitly convert data types. That way, once overflow was proven to not exist in a code base, the verified_int could be replaced with the underlying types via some typedefs, and the run-time cost is completely eliminated. If verified_int starts doing more than just checking, then changing to the underlying types would change the system's behavior, an undesirable effect. We agree on this point I believe. Thank you, Ben Robinson
Just my 2 cts, Vicente
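The "swap it out via typedefs" idea Ben describes can be sketched as follows. The header name and the verified_uint8_t spelling are hypothetical placeholders, not the library's actual API:

```cpp
#include <cstdint>

// Development builds use the checked type; once overflow is proven
// absent, release builds fall back to the raw integer at zero cost.
// "verified_int.hpp" and verified_uint8_t are hypothetical names.
#ifdef USE_VERIFIED_INTS
#  include "verified_int.hpp"
typedef verified_uint8_t counter_t;   // checks every operation
#else
typedef std::uint8_t counter_t;       // plain built-in, no overhead
#endif
```

Because the checked type behaves exactly like the built-in apart from its overflow handling, flipping the typedef changes only whether checks run, not what the program computes.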

Le 24/02/12 05:42, Ben Robinson a écrit :
On Thu, Feb 23, 2012 at 5:59 AM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
I would expect the result type of dividing unsigned verified_int and signed verified_int to be signed verified_int. Couldn't this help to avoid the overflow on operator/()?
Of course, if the user assigns a signed to an unsigned, overflow must be checked. I would also prefer that there is no implicit conversion from signed to unsigned verified_int. A specific cast could be used for this purpose.
unary minus operator on unsigned verified_int could also result on a signed verified_int.
I agree with you on preventing implicit conversions. As far as the return types of math operations, the resulting type is always that of the LHS. The leftmost type in a sequence of binary math operations is used for the entire computation, as it is processed left to right. Note that process and type are different things. The type of T+U can be common_type<T,U>::type.
My philosophy on verified_int is that overflow should be detected, not avoided. If the author needs a larger data type, then this library will catch that overflow, and the data types can be increased to accommodate the necessary ranges. I do mostly embedded development, and many data types are chosen to minimize space.
In general, the space is taken by the variables; the temporaries do not count so much.
This library was designed to add overflow detection only, and not implicitly convert data types. That way, once overflow was proven to not exist in a code base, the verified_int could be replaced with the underlying types via some typedefs, and the run-time cost is completely eliminated. If verified_int starts doing more than just checking, then changing to the underlying types would change the system's behavior, an undesirable effect. We agree on this point I believe.
IIUC I think we agree on the goal but not on how to achieve it. I think verified_int should behave as the built-in types work (as much as possible). Let's see some examples:

{ short b=3; std::cout <<"-b = "<< -b << std::endl; }
{ unsigned short a=2; short b=-1; std::cout <<"a/b = "<< a/b << std::endl; }
{ unsigned short a=2; short b=-1; std::cout <<"a*b = "<< a*b << std::endl; }
{ unsigned short a=2; short b=-3; std::cout <<"a+b = "<< a+b << std::endl; }
{ unsigned short a=2; short b=3; std::cout <<"a-b = "<< a-b << std::endl; }
{ unsigned char a=240; unsigned char b=240; std::cout <<"a+b = "<< a+b << std::endl; }

The results are:

-b = -3
a/b = -2
a*b = -2
a+b = -1
a-b = -1
a+b = 480

As you can see, the built-in types work as I was suggesting. The overflow problem occurs when you assign a larger (in range) type to a shorter type, or on some operations on the larger (signed/unsigned) integer type. But maybe what verified_int is modeling is an integer type that doesn't convert to other verified_int types and checks for overflow. Is this what your library is designed for? Best, Vicente
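Vicente's examples behave this way because of integral promotion: on a typical platform where int is wider than short, both mixed operands are promoted to int before the operation, so the intermediate cannot wrap. A quick check of that claim (assuming the common case where int can represent all unsigned short values):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

// Mixed unsigned short / short arithmetic is carried out in int on
// platforms where int is wider than short, so no intermediate wraps.
static_assert(
    std::is_same<decltype(std::declval<unsigned short>() / std::declval<short>()),
                 int>::value,
    "mixed unsigned short / short arithmetic is done in int");

inline int mixed_div() {
    unsigned short a = 2;
    short b = -1;
    return a / b;  // both promote to int: 2 / -1 == -2
}
```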

On Fri, Feb 24, 2012 at 12:31 AM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
IIUC I think we agree on the goal but not on how to achieve it. I think verified_int should behave as the built-in types work (as much as possible). Let's see some examples:

{ short b=3; std::cout <<"-b = "<< -b << std::endl; }
{ unsigned short a=2; short b=-1; std::cout <<"a/b = "<< a/b << std::endl; }
{ unsigned short a=2; short b=-1; std::cout <<"a*b = "<< a*b << std::endl; }
{ unsigned short a=2; short b=-3; std::cout <<"a+b = "<< a+b << std::endl; }
{ unsigned short a=2; short b=3; std::cout <<"a-b = "<< a-b << std::endl; }
{ unsigned char a=240; unsigned char b=240; std::cout <<"a+b = "<< a+b << std::endl; }

The results are:

-b = -3
a/b = -2
a*b = -2
a+b = -1
a-b = -1
a+b = 480

As you can see, the built-in types work as I was suggesting. The overflow problem occurs when you assign a larger (in range) type to a shorter type, or on some operations on the larger (signed/unsigned) integer type.
But maybe, what verified_int is modeling is an integer type that doesn't converts to other verified_int types and checks for overflow. Is this what your library is designed for?
Best,
Vicente
Vicente,
Thank you for your thoughtful input. You are correct in that my verified_int class checks for overflow while preventing any implicit conversion of its underlying representation. This has certain advantages. Consider a small code refactor which is not expected to change behavior (assume no compiler optimizations):

// One math operation per line:
verified_uint8_t a = 250;
uint8_t b = 10;
verified_uint8_t temp = a + b; // This will overflow
verified_uint8_t c = temp - b; // We already overflowed above

// Now two operations on the same line:
verified_uint8_t a = 250;
uint8_t b = 10;
verified_uint8_t c = a + b - b; // This will also overflow (as currently implemented)

However, if the result of a+b can return common_type<A,B>::type, which will then subtract b and then be implicitly converted to verified_uint8_t, the overflow will be avoided. But now that I think more about this, the language will cause an overflow on assignment in the first example, and the language will avoid the overflow in the second example due to the type of the temporary being chosen by the compiler. I will definitely put more thought into this. Excellent feedback. Thank you, Ben Robinson

Le 24/02/12 10:21, Ben Robinson a écrit :
On Fri, Feb 24, 2012 at 12:31 AM, Vicente J. Botet Escriba< vicente.botet@wanadoo.fr> wrote:
But maybe, what verified_int is modeling is an integer type that doesn't converts to other verified_int types and checks for overflow. Is this what your library is designed for?
Thank you for your thoughtful input. You are correct in that my verified_int class checks for overflow while preventing any implicit conversion of its underlying representation. This has certain advantages. Consider a small code refactor which is not expected to change behavior (assume no compiler optimizations):
// One math operation per line:
verified_uint8_t a = 250;
uint8_t b = 10;
verified_uint8_t temp = a + b; // This will overflow
verified_uint8_t c = temp - b; // We already overflowed above

Now that we have auto in C++11, I think that intermediates could be stored using auto. So the preceding could be written as:

verified_uint8_t a = 250;
uint8_t b = 10;
auto temp = a + b; // This should NOT overflow
verified_uint8_t c = temp - b; // c is equal to a

// Now two operations on the same line:
verified_uint8_t a = 250;
uint8_t b = 10;
verified_uint8_t c = a + b - b; // This will also overflow (as currently implemented)

However, if the result of a+b can return common_type<A,B>::type, which will then subtract b and then be implicitly converted to verified_uint8_t, the overflow will be avoided.
Well, a+b should in addition use promotion to a larger type to avoid overflow (when possible, of course).
But now that I think more about this, the language will cause an overflow on assignment in the first example, and the language will avoid the overflow in the second example due to the type of the temporary being chosen by the compiler.
Note that the use of specific shorter temporary types is what is causing the overflow. The use of intermediary larger types (when possible) avoids overflow. When the user wants to check for overflow s/he just assigns the expression to a variable with the wanted constraints. Another possibility, and maybe what you are looking for, is to define arithmetic operations with a specific result type, e.g.

template <typename Res, typename T, typename U> Res add(T, U);

I don't know yet if what you are looking for is a closed and verified integer type, which is a different thing, with quite different semantics with respect to the C++ integer built-in types:

closed_and_verified_int<T> operator+(closed_and_verified_int<T>, closed_and_verified_int<T>);
I will definitely put more thought into this. Excellent feedback.
Regards, Vicente
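Vicente's add<Res>(t, u) suggestion could be sketched roughly as follows: compute in the promoted type of the expression t + u, then range-check only when narrowing to the caller-chosen result type. The name checked_add_as and the assert-based check are my illustration; a real version would dispatch to the overflow policy instead, and would need extra care for mixed signedness:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Compute in the promoted type of t + u (e.g. int for uint8_t + uint8_t),
// then verify the sum fits the caller-chosen result type before narrowing.
// A real implementation would invoke the overflow policy, not assert, and
// this sketch assumes the promoted type is signed or Res is unsigned.
template <typename Res, typename T, typename U>
Res checked_add_as(T t, U u) {
    typedef decltype(t + u) wide;  // usual arithmetic conversions applied
    wide sum = t + u;              // cannot wrap for sub-int operands
    assert(sum >= static_cast<wide>(std::numeric_limits<Res>::min()));
    assert(sum <= static_cast<wide>(std::numeric_limits<Res>::max()));
    return static_cast<Res>(sum);
}
```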

On Tue, Feb 21, 2012 at 1:04 AM, Ben Robinson <cppmaven@gmail.com> wrote:
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
How does it relate to Boost.NumericConversion? See http://www.boost.org/doc/libs/1_48_0/libs/numeric/conversion/doc/html/index.... --Beman

Beman Dawes wrote:
On Tue, Feb 21, 2012 at 1:04 AM, Ben Robinson <cppmaven@gmail.com> wrote:
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
How does it relate to Boost.NumericConversion?
See http://www.boost.org/doc/libs/1_48_0/libs/numeric/conversion/doc/html/index....
I took a serious look at Boost Numeric Conversion and found it to be very, very difficult (for me) to understand and use. It could be just an issue with the documentation. Every time I had a question about what it does, I just couldn't find a good answer in the documents. When I went to look into the code, it wasn't easy either. Robert Ramey
--Beman

2012/2/21 Ben Robinson <cppmaven@gmail.com>:
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Great! Overflow detection is widely used in the Boost.Lexical_cast library, but the implementation is not generic. I would very much appreciate a header-only library for detecting overflows. How fast is your library? Does it support unsigned long long int overflow detection? Best regards, Antony Polukhin

On Tue, Feb 21, 2012 at 9:18 AM, Antony Polukhin <antoshkka@gmail.com>wrote:
2012/2/21 Ben Robinson <cppmaven@gmail.com>:
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Great! Overflow detection is widely used in the Boost.Lexical_cast library, but the implementation is not generic. I would very much appreciate a header-only library for detecting overflows.
Well now you have it! :)
How fast is your library? Does it support unsigned long long int overflow detection?
The library is meta-programmed with respect to the data types involved in each overloaded math operation, taking advantage of performance savings, such as when both data types are unsigned (see my more complete performance response above to a previous email). Addition overflow detection, for example, involves a single subtract, along with from one to three compares. The library is currently implemented over the following data types: uint8_t, uint16_t, uint32_t, uint64_t, int8_t, int16_t, int32_t, int64_t. Some of the template specializations for overflow detection were tricky for the 64-bit types, given the issues with negative, maximum range values. These specializations exist and are fully unit tested. Thank you, Ben Robinson
Best regards, Antony Polukhin
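The "single subtract plus one to three compares" scheme Ben describes can be illustrated for signed operands like this (my sketch of the general technique, not the library's actual code):

```cpp
#include <cassert>
#include <limits>

// Detect signed addition overflow without performing the (undefined)
// wrapping addition: compare a against the headroom left by b.
// One subtraction and from one to three comparisons per call.
template <typename T>
bool add_would_overflow(T a, T b) {
    if (b > 0) return a > std::numeric_limits<T>::max() - b;  // would exceed max
    if (b < 0) return a < std::numeric_limits<T>::min() - b;  // would go below min
    return false;                                             // b == 0 never overflows
}
```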

On 22/02/12 02:00, Ben Robinson wrote:
On Tue, Feb 21, 2012 at 9:18 AM, Antony Polukhin <antoshkka@gmail.com>wrote:
How fast is your library? Does it support unsigned long long int overflow detection?
The library is meta-programmed with respect to the data types involved in each overloaded math operation, taking advantage of performance savings, such as when both data types are unsigned (see my more complete performance response above to a previous email).
Addition overflow detection for example involves a single subtract, along with from one to three compares.
Would you be open to providing assembly implementations to supplement the existing portable implementation if there was demand and they were faster? John Bytheway

On Wed, Feb 22, 2012 at 11:04 AM, John Bytheway <jbytheway+boost@gmail.com>wrote:
Would you be open to providing assembly implementations to supplement the existing portable implementation if there was demand and they were faster?
I am certainly open to specializing the implementation for specific platforms, once the portable implementation gains support and is approved for Boost. Adding optimizations for specific platforms does not seem outside the goals of Boost, provided a fully portable implementation exists. Thank you, Ben Robinson
John Bytheway

On Tue, Feb 21, 2012 at 5:01 AM, Beman Dawes <bdawes@acm.org> wrote:
On Tue, Feb 21, 2012 at 1:04 AM, Ben Robinson <cppmaven@gmail.com> wrote:
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
How does it relate to Boost.NumericConversion?
See http://www.boost.org/doc/libs/1_48_0/libs/numeric/conversion/doc/html/index....
--Beman
Boost.NumericConversion has some limitations with respect to overflow handling. The arguments to the handling policy only contain the type of overflow, and not the values, nor the mathematical operation which was attempted. Therefore, as I understand it, it is not possible to write a saturate_overflow or periodic_overflow policy, for example. You can only throw, assert, or ignore. Again, this is my understanding. Boost.NumericConversion has some nice floating point to integer conversion capability. My library aims only at detecting integer overflow (positive and negative). Ben Robinson
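The limitation Ben describes is concrete: a saturating handler must be able to return a clamped value, which requires the handler to participate in the operation's result rather than merely signal an error. A hypothetical sketch of the idea (names are illustrative, not Boost.NumericConversion's or VerifiedInt's actual interface):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// A policy whose handler returns a value can express saturation; a
// handler that can only throw, assert, or ignore cannot.
struct saturate_overflow {
    template <typename T> static T on_positive() { return std::numeric_limits<T>::max(); }
    template <typename T> static T on_negative() { return std::numeric_limits<T>::min(); }
};

// Unsigned addition whose overflow result is chosen by the policy.
template <typename Policy, typename T>
T add_with_policy(T a, T b) {
    if (b > std::numeric_limits<T>::max() - a)       // sum would wrap past max
        return Policy::template on_positive<T>();
    return static_cast<T>(a + b);
}
```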

On Tue, Feb 21, 2012 at 12:04 AM, Ben Robinson <cppmaven@gmail.com> wrote:
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
Yes! I've been using Microsoft's safeint (http://safeint.codeplex.com) for a while, but something more built into boost would be nice. -- Cory Nelson http://int64.org

On Tue, Feb 21, 2012 at 9:29 AM, Cory Nelson <phrosty@gmail.com> wrote:
On Tue, Feb 21, 2012 at 12:04 AM, Ben Robinson <cppmaven@gmail.com> wrote:
The implementation is complete, with 5 policies created (and users can always create and use more). The policies are:
ignore_overflow throw_overflow assert_overflow saturate_overflow nan_overflow
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
Does this pique anyone's interest?
Yes! I've been using Microsoft's safeint (http://safeint.codeplex.com) for a while, but something more built into boost would be nice.
Yea, I wanted to call my library SafeInt, but Microsoft's library already uses that name. The same is true of BoundedInt, a great name but already taken.
I would love for this library to be accepted into Boost, given the portable nature of the implementation, and the attention to performance in the implementation. Thank you, Ben Robinson
-- Cory Nelson http://int64.org
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Hi, 2012/2/21 Ben Robinson <cppmaven@gmail.com>:
The implementation is fully portable, and it provides integer types which can be used like the builtin integers, except they will trigger their policy on an overflow.
That's great that Boost is on the way to have this facility.
Also, this library will integrate seamlessly with the Boost Constrained Value Library (contributed by Robert Kawulak).
It should nicely complement the functionality of Constrained Value. What I would be interested in is seeing some code for constrained_int<verified_int<>> compiled to assembly with optimisation turned on - I guess a good compiler may nicely optimise out unnecessary checks, but I'd like to see that verified. I've only skimmed quickly through the code - it needs polishing, but looks promising. Some small remarks for now: - instead of using macros for testing different types you may use test case templates (http://www.boost.org/doc/libs/1_49_0/libs/test/doc/html/utf/user-guide/test-...), - in verified_int::operator-() you don't check for overflow of -this->value_, - I doubt that virtual inheritance of overflow_detected and std::exception makes sense - can there be several kinds of overflow at once, forcing multiple inheritance of the exceptions? Best regards, Robert
participants (11)
-
Antony Polukhin
-
Beman Dawes
-
Ben Robinson
-
Cory Nelson
-
John Bytheway
-
Nelson, Erik - 2
-
Robert Kawulak
-
Robert Ramey
-
Stewart, Robert
-
Toon Knapen
-
Vicente J. Botet Escriba