[Interval] Conversion from integer types?

Can someone explain what the issue is in converting an integer type to a boost::interval, and in performing mixed integer/floating point arithmetic and comparisons? Is it simply that the integer types may have more bits in their representation than floating point ones, and we may therefore lose precision? If so, can we not convert integers to intervals implicitly, complete with calculation of the error? That would auto-magically make mixed integer/floating point arithmetic work as well. And yes, I'm prepared to invest some time to make this work and do the right thing, if time is the only stumbling block :-) Thanks, John.

On Saturday 22 December 2007 at 16:03 +0000, John Maddock wrote:
Can someone explain what the issue is in converting an integer type to a boost::interval, and in performing mixed integer/floating point arithmetic and comparisons?
Is it simply that the integer types may have more bits in their representation than floating point ones, and we may therefore lose precision?
Yes, that is the main issue I can think of.
If so, can we not convert integers to intervals implicitly, complete with calculation of the error? That would auto-magically make mixed integer/floating point arithmetic work as well.
You are right for arithmetic operations. Note that it has a surprising consequence though. It means that 2*i (with i of type interval<float>) would be much slower than 2.0f*i, since operations involving implicitly-singleton intervals are faster than operations on "wide" intervals. It also does not work that well with comparisons. For example, 2147483600 < i may be true from a mathematical point of view. But when the lower bound of i happens to be 2147483648, the comparison interval<float>(2147483600) < i can evaluate to something else or throw an exception (depending on the comparison kind).
And yes, I'm prepared to invest some time to make this work and do the right thing, if time is the only stumbling block :-)
If you intend to work on it, you may want to take a look at the code stored there: https://gforge.inria.fr/projects/std-interval/ This is a project aiming at transforming the Boost.Interval library so that it matches the specification of N2137 while retaining backward-compatibility. As a consequence, it provides a brand new policy-based system, which should be a lot simpler to use than the previous one, I hope. In particular, it would then be trivial to add a "no integer / small integer / full integer" policy for dealing with the issue at hand. But I am just thinking out loud and it may be too far-fetched. Best regards, Guillaume

Guillaume Melquiond wrote:
On Saturday 22 December 2007 at 16:03 +0000, John Maddock wrote:
Can someone explain what the issue is in converting an integer type to a boost::interval, and in performing mixed integer/floating point arithmetic and comparisons?
Is it simply that the integer types may have more bits in their representation than floating point ones, and we may therefore lose precision?
Yes, that is the main issue I can think of.
If so, can we not convert integers to intervals implicitly, complete with calculation of the error? That would auto-magically make mixed integer/floating point arithmetic work as well.
You are right for arithmetic operations. Note that it has a surprising consequence though. It means that 2*i (with i of type interval<float>) would be much slower than 2.0f*i, since operations involving implicitly-singleton intervals are faster than operations on "wide" intervals.
:-(
It also does not work that well with comparisons. For example, 2147483600 < i may be true from a mathematical point of view. But when the lower bound of i happens to be 2147483648, the comparison interval<float>(2147483600) < i can evaluate to something else or throw an exception (depending on the comparison kind).
Yes, I'm already knee deep in the comparison issues!
And yes, I'm prepared to invest some time to make this work and do the right thing, if time is the only stumbling block :-)
If you intend to work on it, you may want to take a look at the code stored there: https://gforge.inria.fr/projects/std-interval/ This is a project aiming at transforming the Boost.Interval library so that it matches the specification of N2137 while retaining backward-compatibility. As a consequence, it provides a brand new policy-based system, which should be a lot simpler to use than the previous one, I hope. In particular, it would then be trivial to add a "no integer / small integer / full integer" policy for dealing with the issue at hand. But I am just thinking out loud and it may be too far-fetched.
Ah, I was wanting to do the minimum amount of work necessary ;-) I already have some patches for the existing Boost.Interval code that effectively fix the mixed integer/floating point arithmetic issue (I hope). I hadn't realized that there was a new design floating around. I'll take a look but don't hold your breath :-) Does this mean that you're not keen on patches to the existing Boost.Interval code? Thanks, John.

Are you certain that mixing floating-point types and integral types is desirable? Floating-point types are, of course, approximations, unlike integer types. It is dangerous to mix the two, and doing so should not be allowed even by a policy. Mixing floating-point types with integer types implicitly is a poor software engineering practice without merit, in my humble opinion. Implicit type conversion has been a frequently regretted design decision in my experience, despite the initial syntactic appeal.
Neil Groves
On Jan 3, 2008 5:21 PM, John Maddock <john@johnmaddock.co.uk> wrote:
Guillaume Melquiond wrote:
On Saturday 22 December 2007 at 16:03 +0000, John Maddock wrote:
Can someone explain what the issue is in converting an integer type to a boost::interval, and in performing mixed integer/floating point arithmetic and comparisons?
Is it simply that the integer types may have more bits in their representation than floating point ones, and we may therefore lose precision?
Yes, that is the main issue I can think of.
If so, can we not convert integers to intervals implicitly, complete with calculation of the error? That would auto-magically make mixed integer/floating point arithmetic work as well.
You are right for arithmetic operations. Note that it has a surprising consequence though. It means that 2*i (with i of type interval<float>) would be much slower than 2.0f*i, since operations involving implicitly-singleton intervals are faster than operations on "wide" intervals.
:-(
It also does not work that well with comparisons. For example, 2147483600 < i may be true from a mathematical point of view. But when the lower bound of i happens to be 2147483648, the comparison interval<float>(2147483600) < i can evaluate to something else or throw an exception (depending on the comparison kind).
Yes, I'm already knee deep in the comparison issues!
And yes, I'm prepared to invest some time to make this work and do the right thing, if time is the only stumbling block :-)
If you intend to work on it, you may want to take a look at the code stored there: https://gforge.inria.fr/projects/std-interval/ This is a project aiming at transforming the Boost.Interval library so that it matches the specification of N2137 while retaining backward-compatibility. As a consequence, it provides a brand new policy-based system, which should be a lot simpler to use than the previous one, I hope. In particular, it would then be trivial to add a "no integer / small integer / full integer" policy for dealing with the issue at hand. But I am just thinking out loud and it may be too far-fetched.
Ah, I was wanting to do the minimum amount of work necessary ;-)
I already have some patches for the existing Boost.Interval code that effectively fix the mixed integer/floating point arithmetic issue (I hope). I hadn't realized that there was a new design floating around. I'll take a look but don't hold your breath :-)
Does this mean that you're not keen on patches to the existing Boost.Interval code?
Thanks, John.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Neil Groves wrote:
Are you certain that mixing floating-point types and integral types is desirable?
Floating-point types are, of course, approximations, unlike integer types. It is dangerous to mix the two, and doing so should not be allowed even by a policy. Mixing floating-point types with integer types implicitly is a poor software engineering practice without merit, in my humble opinion.
Implicit type conversion has been a frequently regretted design decision in my experience despite the initial syntactic appeal.
What do you consider is wrong with using integer literals to represent constants, where those constants are indeed integers? Either in code such as: my_real -= 1; or in tables of (integer) coefficients to polynomials? In this case my_real is a template type, so the conversion may or may not lose precision depending upon the type; but since the result is always represented as the floating-point type, there is no more accurate way to represent an integer than as an... integer. Converting the other way most certainly is wrong, of course. All IMHO, Regards, John.

I agree that the examples you provide appear safe. While proof by analogy is fraud, this appears to be similar in nature to casts in general: one can provide many examples of the safe use of casts, but generally avoidance leads to better code. I am not familiar enough with the use of boost::interval to take a strong stance, especially since I do not have metrics to back up my statements. I would urge careful analysis in case the interoperability of types introduces new risks.
In the case of template functions where you have template<class T> T foo(T x) { return x -= 1; }, I like to use (when I remember!) boost::numeric_cast since T might be smaller than int. That is, template<class T> T foo(T x) { return x -= numeric_cast<T>(1); } I think I'm probably being pedantic, but as I recall the size of int is not stated in absolute terms in the C++ specification, only relative to other types. Therefore mixing ints with floating-point types is not guaranteed to be lossless, although on most implementations it will be. I would certainly concede to those developers who have more experience with this library, however, especially if they are doing the work!
HTH, Neil Groves
On Jan 4, 2008 2:01 PM, John Maddock <john@johnmaddock.co.uk> wrote:
Neil Groves wrote:
Are you certain that mixing floating-point types and integral types is desirable?
Floating-point types are, of course, approximations, unlike integer types. It is dangerous to mix the two, and doing so should not be allowed even by a policy. Mixing floating-point types with integer types implicitly is a poor software engineering practice without merit, in my humble opinion.
Implicit type conversion has been a frequently regretted design decision in my experience despite the initial syntactic appeal.
What do you consider is wrong with using integer literals to represent constants, where those constants are indeed integers?
Either in code such as:
my_real -= 1;
or in tables of (integer) coefficients to polynomials?
In this case my_real is a template type, so the conversion may or may not lose precision depending upon the type; but since the result is always represented as the floating-point type, there is no more accurate way to represent an integer than as an... integer.
Converting the other way most certainly is wrong, of course.
All IMHO,
Regards, John.

Neil Groves wrote:
In the case of template functions where you have template<class T> T foo(T x) { return x -= 1; }, I like to use (when I remember!) boost::numeric_cast since T might be smaller than int. That is, template<class T> T foo(T x) { return x -= numeric_cast<T>(1); }
Right, but that may throw: IMO a user is going to be very surprised indeed if your code throws when converting a literal :-) However, depending where the integer has come from, a numeric_cast may well be in order.
I think I'm probably being pedantic but as I recall the size of int is not stated as part of the C++ specification, only in relative terms to other types. Therefore mixing ints with floating-point types is not guaranteed to be lossless, although on most implementations it will be.
Absolutely, the obvious one is that a long long converted to double may lose digits. However, in the case of interval arithmetic, at least the converted value (even if the conversion is implicit as part of an operation) is converted to an interval that correctly identifies the uncertainty in the value. But... I'll admit that my use case is exclusively related to the use of constants: in this case there is simply no better way of representing those constants than as integers - and yes, possibly as long longs - I could convert them to floating-point values, but that would simply introduce the "inexactness" at an earlier stage. At least if they are encoded as integers there is a *chance* that they will be used exactly: for example, if you are using extended-precision arithmetic. Regards, John.
participants (3)
- Guillaume Melquiond
- John Maddock
- Neil Groves