[Interval] Bug when constructing from an unsigned integer type

This compiles:
boost::interval<double> in(1u);
but creates a nonsense interval. IMO this should work? Likewise the same problem manifests itself when constructing from an unsigned long long (and in this case you really do want a true interval to be created).

John.

On Wednesday 26 December 2007 at 13:02 +0000, John Maddock wrote:
This compiles:
boost::interval<double> in(1u);
but creates a nonsense interval.
IMO this should work?
It works fine for me. Which testcase and which compiler are causing trouble?

Best regards,
Guillaume

Guillaume Melquiond wrote:
On Wednesday 26 December 2007 at 13:02 +0000, John Maddock wrote:
This compiles:
boost::interval<double> in(1u);
but creates a nonsense interval.
IMO this should work?
It works fine for me. Which testcase and which compiler are causing trouble?
I've since found the issue to be specific to rounded_arith_opp: it negates the value being converted even if it's an unsigned type. The attached patch fixes this, and also suppresses some pesky MSVC warnings.

You can see the full set of patches I've been working on here:

http://svn.boost.org/trac/boost/changeset/42438

Along with some more std lib functions and other goodies here:

http://svn.boost.org/trac/boost/browser/sandbox/interval_math_toolkit/boost/...

The floor/ceil functions are the problematic ones here; these should probably throw in the case that the two bounds don't yield the same result :-)

The most important addition is for pow, which requires modifying the transcendental function traits class support.

I also found it very useful to have a traits class that always returned nearest rounded results for the transcendental functions when doing "approximate" interval arithmetic:

* Using rounded_transc_opp led to situations where the resulting interval was invalid (rounding down returned a larger value than rounding up) with MSVC and std::exp. I would expect to see the same with rounded_transc_std.

* Using rounded_transc_exact as a "dummy" class with rounded_arith_opp produced quite erroneous results with std::exp; I gave up testing it after that. I assume it could be used in conjunction with rounded_transc_std OK, but then normal arithmetic would be slower?

So providing a traits class that resets the rounding mode to nearest before making a std lib call seemed sensible, on the assumption that these calls occur much less frequently than normal arithmetic, and that the cost of the call will swamp the cost of the rounding mode change anyway. I haven't investigated using a correctly rounded solution yet, but that will come :-)

Let me know if you want any more information about these patches. I'm not sure how these fit with the revised std lib design though?

Regards,
John.
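To make the wraparound concrete, here is a minimal sketch (an assumed reconstruction of the opposite-rounding idea, not the actual Boost source):

    #include <iostream>

    // rounded_arith_opp keeps the FPU rounding upward and derives
    // downward-rounded results by negation: down(v) = -up(-v).
    // For an unsigned int v, the inner -v wraps modulo 2^N instead of
    // negating, which is the bug described above.
    template <class U>
    double conv_down_opp(U v)
    {
        return -static_cast<double>(-v); // -(1u) == 4294967295u
    }

    int main()
    {
        // Prints -4.29497e+09 rather than 1: a nonsense lower bound.
        std::cout << conv_down_opp(1u) << '\n';
    }

For a signed argument the same expression behaves correctly, which is why the bug only shows up with unsigned types.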

On Thursday 3 January 2008 at 17:47 +0000, John Maddock wrote:
boost::interval<double> in(1u); but creates a nonsense interval.
I've since found the issue to be specific to rounded_arith_opp: it negates the value being converted even if it's an unsigned type. The attached patch fixes this, and also suppresses some pesky MSVC warnings.
I see, the behavior you experienced makes sense. I should check now why I did not experience it; there may be an optimization bug in GCC that hid it. I'm a bit reluctant to apply your fix though, since it will not work with big unsigned values. I need to think a bit more about it.
You can see the full set of patches I've been working on here: http://svn.boost.org/trac/boost/changeset/42438
Along with some more std lib functions and other goodies here: http://svn.boost.org/trac/boost/browser/sandbox/interval_math_toolkit/boost/...
The floor/ceil functions are the problematic ones here; these should probably throw in the case that the two bounds don't yield the same result :-)
I didn't see anything related to floor and ceil in your patch. Which issues are you referring to?
The most important addition is for pow, which requires modifying the transcendental function traits class support.
That is fine. The support class was also extended for some new functions like expm1 and the like.
I also found it very useful to have a traits class that always returned nearest rounded results for the transcendental functions when doing "approximate" interval arithmetic:
* Using rounded_transc_opp led to situations where the resulting interval was invalid (rounding down returned a larger value than rounding up) with MSVC and std::exp. I would expect to see the same with rounded_transc_std.
These two policies are mainly for use with some "smart" mathematical libraries that properly compute elementary functions according to the current rounding mode. As surprising as it may seem, such libraries do exist. There is one from Sun for instance.
* Using rounded_transc_exact as a "dummy" class with rounded_arith_opp produced quite erroneous results with std::exp; I gave up testing it after that. I assume it could be used in conjunction with rounded_transc_std OK, but then normal arithmetic would be slower?
This policy is meant to be used when the _exact one is used for the arithmetic operations. For example, when performing non-guaranteed computations on wide intervals, or when using an arbitrary precision type for bounds. As a summary, mixing rounding policies probably leads to unexpected results, and calling default elementary functions with a non-exact policy may also lead to unexpected results. I should redo the documentation, since these points may not be that clear.
So providing a traits class that resets the rounding mode to nearest before making a std lib call seemed sensible, on the assumption that these calls occur much less frequently than normal arithmetic, and that the cost of the call will swamp the cost of the rounding mode change anyway. I haven't investigated using a correctly rounded solution yet, but that will come :-)
Actually, there is code in the new version in order to support the CRlibm library for computing correctly-rounded results of elementary functions. For efficiency reasons, this library only works in rounding to nearest (while the result is rounded in the wanted direction), so we are using a scheme similar to the one you are describing here. When using an elementary function from the standard libm, there could be an additional y*(1+eps)+eta in order to ensure that the computed values are rounded outwards. The cost would be negligible with respect to the rounding mode change.
Let me know if you want any more information about these patches. I'm not sure how these fit with the revised std lib design though?
Except for the test_input.hpp part, they would probably apply with almost no changes. But I am still not sure about conversions. In the proposal for standardization, we only support floating-point bounds, conversions are done through the constructors, and they are "explicit" when there is a precision loss (that is, a potential widening). Perhaps this could be a way to do it: only allow conversions in Boost if there is no loss. For example, int and unsigned could not be used with interval<float>, but they could be used with interval<double> on computers where they are 32 bits wide.

Best regards,
Guillaume

On Monday 14 January 2008 at 15:01 +0100, Guillaume Melquiond wrote:
On Thursday 3 January 2008 at 17:47 +0000, John Maddock wrote:
boost::interval<double> in(1u); but creates a nonsense interval.
I've since found the issue to be specific to rounded_arith_opp: it negates the value being converted even if it's an unsigned type. The attached patch fixes this, and also suppresses some pesky MSVC warnings.
I see, the behavior you experienced makes sense. I should check now why I did not experience it; there may be an optimization bug in GCC that hid it.
Replying to myself. I was too eager in accusing GCC: the mistake is actually in the library for C99-enabled compilers. Now that I have fixed it, interval<double>(1u) returns [-4.29497e+09,1] on my computer. So rounded conversions are now performed as they were expected (yet not intended) to perform, but it also means that the bug will become visible with compilers other than MSVC.

Best regards,
Guillaume

Guillaume Melquiond wrote:
I'm a bit reluctant to apply your fix though, since it will not work with big unsigned values. I need to think a bit more about it.
Right: if the unsigned value is greater than the max signed value then bad things happen. The only thing I can think of is to check for that case, and temporarily change the rounding mode rather than negating in that special case.
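A sketch of that workaround (illustrative names, not Boost's actual internals; it assumes integer-to-double conversion honours the current rounding direction, as it does on C99 Annex F implementations):

    #include <cfenv>
    #include <limits>

    // Assumes the caller keeps the FPU rounding upward, as the
    // rounded_arith_opp policy does.
    double conv_down_unsigned(unsigned long long v)
    {
        // Common case: v fits in the signed range, so the usual
        // negation trick is safe.
        if (v <= static_cast<unsigned long long>(
                     std::numeric_limits<long long>::max()))
            return -static_cast<double>(-static_cast<long long>(v));
        // Special case: too big to negate, so round downward explicitly,
        // then restore the mode the opp-style code expects.
        std::fesetround(FE_DOWNWARD);
        double r = static_cast<double>(v);
        std::fesetround(FE_UPWARD);
        return r;
    }

Guillaume's suggestion below, widening small unsigned types to a bigger signed type, removes the branch entirely for anything narrower than unsigned long long.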
The floor/ceil functions are the problematic ones here; these should probably throw in the case that the two bounds don't yield the same result :-)
I didn't see anything related to floor and ceil in your patch. Which issues are you referring to?
I put those in a separate file for now: see

http://svn.boost.org/trac/boost/browser/sandbox/interval_math_toolkit/boost/...

It contains interval versions of:

floor ceil fabs pow ldexp frexp

Of these, fabs, ldexp and frexp are trivial. pow is fairly straightforward I hope (but requires changes to the traits concept as previously mentioned). The open question with floor and ceil is what should happen if the floor/ceil of the two bounds don't match? Currently it just returns the new interval, but I can imagine some situations where this would be an error.

There's also an interval version of boost::math::fpclassify in there (and hence isnan, isinf etc.) which broadly follows the same classification rules that C99 suggests for complex numbers in its appendix.
The most important addition is for pow, which requires modifying the transcendental function traits class support.
That is fine. The support class was also extended for some new functions like expm1 and the like.
OK good.
These two policies are mainly for use with some "smart" mathematical libraries that properly compute elementary functions according to the current rounding mode. As surprising as it may seem, such libraries do exist. There is one from Sun for instance.
Nod, I realise that. BTW it would be nice to have ready-made traits classes for these available :-)
* Using rounded_transc_exact as a "dummy" class with rounded_arith_opp produced quite erroneous results with std::exp; I gave up testing it after that. I assume it could be used in conjunction with rounded_transc_std OK, but then normal arithmetic would be slower?
This policy is meant to be used when the _exact one is used for the arithmetic operations. For example, when performing non-guaranteed computations on wide intervals, or when using an arbitrary precision type for bounds.
As a summary, mixing rounding policies probably leads to unexpected results, and calling default elementary functions with a non-exact policy may also lead to unexpected results. I should redo the documentation, since these points may not be that clear.
OK.
So providing a traits class that resets the rounding mode to nearest before making a std lib call seemed sensible, on the assumption that these calls occur much less frequently than normal arithmetic, and that the cost of the call will swamp the cost of the rounding mode change anyway. I haven't investigated using a correctly rounded solution yet, but that will come :-)
Actually, there is code in the new version in order to support the CRlibm library for computing correctly-rounded results of elementary functions. For efficiency reasons, this library only works in rounding to nearest (while the result is rounded in the wanted direction), so we are using a scheme similar to the one you are describing here.
When using an elementary function from the standard libm, there could be an additional y*(1+eps)+eta in order to ensure that the computed values are rounded outwards. The cost would be negligible with respect to the rounding mode change.
Yep, assuming the std lib is actually accurate to 1 eps of course ;-) But this would be fairly trivial to add with nextbefore/nextafter I guess?
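The one-ulp outward widening is indeed a short sketch with std::nextafter, assuming the libm result really is within one ulp of the true value (which, as noted, is not guaranteed):

    #include <cmath>

    struct bounds { double lo, hi; };

    // Widen a nearest-rounded libm result outward by one ulp in each
    // direction, so the true value is enclosed whenever the libm error
    // is at most one ulp (an assumption, not a guarantee).
    bounds exp_outward(double x)
    {
        double y = std::exp(x);
        return { std::nextafter(y, -HUGE_VAL),
                 std::nextafter(y, +HUGE_VAL) };
    }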
Let me know if you want any more information about these patches. I'm not sure how these fit with the revised std lib design though?
Except for the test_input.hpp part, they would probably apply with almost no changes. But I am still not sure about conversions.
In the proposal for standardization, we only support floating-point bounds, conversions are done through the constructors, and they are "explicit" when there is a precision loss (that is, a potential widening).
Perhaps this could be a way to do it: only allow conversions in Boost if there is no loss. For example, int and unsigned could not be used with interval<float>, but they could be used with interval<double> on computers where they are 32 bits wide.
This still seems overly restrictive for generic code; consider this use case: I have polynomial coefficients that are integers, sometimes small integers (ints), sometimes big ones (long longs). How should I best store those coefficients and evaluate the polynomial?

To me it's a no-brainer that integers should be stored as integers: there is no more accurate way to represent them. Neither is there any more space-efficient way to represent them, IMO.

If the code is called with a type wider than the integer-coefficient type then we have no problem; if it's called with a narrower type then we have rounding loss, but crucially, there is still no better way of performing the arithmetic, assuming it is the user's intent that it be carried out at the precision given. Indeed, for built-in floating point types:

int * T

may well be carried out at double precision internally in the FPU even if T is float, whereas depending on the compiler settings:

static_cast<T>(int) * T

may be evaluated by converting the int to a float first.

I guess an explicit conversion is still better than nothing, but would require me to obfuscate my code with a large number of type-casts :-(

Given that an implicit conversion from an integer type will result in an exact interval representation even if there is precision loss, what do we lose by making this conversion implicit? It's no worse than the existing implicit conversion from long double to interval<double>, is it? Or am I missing something? Just trying to understand the issues.

Yours, John.
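A minimal sketch of that use case (a hypothetical helper, not code from the library): the mixed coefficient/T arithmetic below relies on an implicit conversion from the coefficient type to T.

    #include <cstddef>

    // Evaluate c[0] + c[1]*x + ... + c[N-1]*x^(N-1) in the caller's
    // arithmetic type T (double, float, an interval type, ...), by
    // Horner's scheme.
    template <class T, class Coeff, std::size_t N>
    T eval_poly(const Coeff (&c)[N], T x)
    {
        T result(c[N - 1]);
        for (std::size_t i = N - 1; i > 0; --i)
            result = result * x + c[i - 1]; // mixed Coeff/T arithmetic
        return result;
    }

Called as, say, double y = eval_poly(c, 0.5); with int c[] = { 1, -2, 3 };, there is no cast in sight; with explicit-only conversions, every use of c[i - 1] would need a static_cast<T>(...).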

On Tuesday 15 January 2008 at 10:10 +0000, John Maddock wrote:
Guillaume Melquiond wrote:
I'm a bit reluctant to apply your fix though, since it will not work with big unsigned values. I need to think a bit more about it.
Right: if the unsigned value is greater than the max signed value then bad things happen. The only thing I can think of is to check for that case, and temporarily change the rounding mode rather than negating in that special case.
This seems like a good solution. And in order to avoid this branch, a bigger signed type could be used instead for small unsigned types. I will have to see to the template machinery for selecting the conversion functions.
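That selection machinery might look something like this hypothetical trait (names invented for illustration):

    // Map each small unsigned type to a signed type wide enough to hold
    // all its values, so the negation trick never overflows.
    template <class U> struct signed_for;
    template <> struct signed_for<unsigned char>  { typedef int type; };
    template <> struct signed_for<unsigned short> { typedef int type; };
    template <> struct signed_for<unsigned int>   { typedef long long type; };
    // unsigned long (on 64-bit targets) and unsigned long long have no
    // wider signed type, so they keep the rounding-mode branch.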
I put those in a separate file for now: see http://svn.boost.org/trac/boost/browser/sandbox/interval_math_toolkit/boost/...
It contains interval versions of:
floor ceil fabs pow ldexp frexp
Hmm... floor and ceil are two functions we did not consider for standardization. They would make sense though; I have to take a look at that. All the other functions (and some more of C99), however, are in the proposal and in the new design of the library. Concerning pow, I notice that your function does not handle negative inputs for the lhs argument. Are you mathematically defining pow(x,y) as exp(y*ln(x))?
Of these fabs, ldexp and frexp are trivial. pow is fairly straightforward I hope (but requires changes to the traits concept as previously mentioned). The open question with floor and ceil is what should happen if the floor/ceil of the two bounds don't match? Currently it just returns the new interval, but I can imagine some situations where this would be an error.
From an interval arithmetic point of view, this is never an error, since the functions on real numbers are well defined, and so are their interval extensions.
The only issue lies in the discontinuity when the two bounds are different, which impedes the applicability of Brouwer's theorem (or any of its variants). The proposal solves this issue by providing two interval functions whenever the real function has discontinuities. The second function takes a boolean reference as an additional parameter, and it changes its value to true whenever a discontinuity was encountered. This makes it more flexible than an exception, since the interval result is nonetheless meaningful and can be used in later computations.
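In code, that two-function scheme for floor might look like this sketch (signatures inferred from the description above, not taken from the proposal's text):

    #include <cmath>

    struct ival { double lo, hi; };

    // Plain interval extension: always well defined.
    ival floor(ival x)
    {
        return { std::floor(x.lo), std::floor(x.hi) };
    }

    // Discontinuity-reporting variant: sets the flag to true when floor
    // jumps within x, and never resets it to false, so it can accumulate
    // over a whole computation. The result is meaningful either way.
    ival floor(ival x, bool& discontinuous)
    {
        ival r = { std::floor(x.lo), std::floor(x.hi) };
        if (r.lo != r.hi)
            discontinuous = true;
        return r;
    }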
These two policies are mainly for use with some "smart" mathematical libraries that properly compute elementary functions according to the current rounding mode. As surprising as it may seem, such libraries do exist. There is one from Sun for instance.
Nod, I realise that. BTW it would be nice to have ready made traits classes for these available :-)
This is something I have yet to tackle in the new design. All the other policies have been subsumed by simpler mechanisms, but not this one.
When using an elementary function from the standard libm, there could be an additional y*(1+eps)+eta in order to ensure that the computed values are rounded outwards. The cost would be negligible with respect to the rounding mode change.
Yep, assuming the std lib is actually accurate to 1 eps of course ;-)
Right (which is a dangerous assumption). I was actually thinking of generic eps and eta values that would depend on the implementation.
But this would be fairly trivial to add with nextbefore/nextafter I guess?
Good point. For 1 ulp changes, they could indeed be used. It just happens that I am not fond of these two functions (does nextbefore even exist?), since they take a second parameter which I have a hard time filling. The IEEE-754R defines the nextup and nextdown functions, which are a lot more useful in my opinion.
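For what it's worth, both can be expressed through nextafter by passing an infinity as the second parameter, which sidesteps the question of how to fill it (a sketch, leaving NaN edge cases aside):

    #include <cmath>

    // nextup/nextdown in the IEEE-754R sense, via C99's nextafter:
    double nextup(double x)   { return std::nextafter(x, +INFINITY); }
    double nextdown(double x) { return std::nextafter(x, -INFINITY); }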
In the proposal for standardization, we only support floating-point bounds, conversions are done through the constructors, and they are "explicit" when there is a precision loss (that is, a potential widening).
Perhaps this could be a way to do it: only allow conversions in Boost if there is no loss. For example, int and unsigned could not be used with interval<float>, but they could be used with interval<double> on computers where they are 32 bits wide.
This still seems overly restrictive for generic code; consider this use case:
I have polynomial coefficients that are integers, sometimes small integers (ints), sometimes big ones (long longs). How should I best store those coefficients and evaluate the polynomial?
To me it's a no-brainer that integers should be stored as integers: there is no more accurate way to represent them.
Neither is there any more space-efficient way to represent them, IMO.
This I agree with. Storage should be performed with whatever type fits the data. But that does not mean that computations should directly be performed on storage types, in my opinion. Requiring a conversion to a computable type beforehand does not seem like a bad design.
If the code is called with a type wider than the integer-coefficient type then we have no problem; if it's called with a narrower type then we have rounding loss, but crucially, there is still no better way of performing the arithmetic, assuming it is the user's intent that it be carried out at the precision given. Indeed, for built-in floating point types:
int * T
may well be carried out at double precision internally in the FPU even if T is float, whereas depending on the compiler settings:
static_cast<T>(int) * T
may be evaluated by converting the int to a float first.
I am not sure I get your point, since this uncertainty in the evaluation scheme seems like a very good reason for putting as many hints to the compiler as possible in the code.
I guess an explicit conversion is still better than nothing, but would require me to obfuscate my code with a large number of type-casts :-(
Given that an implicit conversion from an integer type will result in an exact interval representation even if there is precision loss, what do we lose by making this conversion implicit? It's no worse than the existing implicit conversion from long double to interval<double>, is it? Or am I missing something?
No, you are right. But I was referring to the proposal and not to the existing library. The proposal fixes a few design "issues" people encountered in the library; in particular, people felt it was dangerous to implicitly get non-singleton (although correct) intervals from scalar values. So the proposal states:

template<>
class interval<double> {
  interval(double);
  explicit interval(long double);
};

Best regards,
Guillaume
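The practical consequence of those two constructors, sketched with a stub (access specifiers assumed public; the shorthand above elides them):

    template <class T> class interval;
    template <> class interval<double> {
    public:
        interval(double) {}               // implicit: conversion is exact
        explicit interval(long double) {} // explicit: potentially widening
    };

    int main()
    {
        interval<double> a = 1.0;     // OK: implicit, no precision loss
        interval<double> b(1.0L);     // OK: the widening is spelled out
        // interval<double> c = 1.0L; // ill-formed: implicit lossy conversion
    }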