[Math] Rationale Behind Epsilon Values
Hello, I am new to this mailing list, so it's possible the format of my question may be off a little. My team has been working on getting Boost to run on PPC 64-LE. There are many tests in Boost that rely on the machine epsilon value. Why was this value chosen to determine success or failure of a particular test, knowing that it is hardware specific? More specifically, the long double type is very different on the two platforms.
However, by forcing the x86 long double epsilon value on the PPC machine we were able to get many of the previously failing tests to pass. This change was inspired by the fact that a similar fix was already in the code for the Darwin platform, although the value returned for Darwin is not the x86 value. Also, LDBL_MANT_DIG is the same on the Darwin platform and on ours (PPC 64-LE): it is 106 on both. How was the long double value for the Darwin platform determined?
For reference, the actual code lies in boost/math/tools/precision.hpp. We were also wondering whether this is a valid solution to the problem; part of us feels it doesn't address the underlying problem.
Sincerely,
-Axel
The issue is this:
Our tests, and some of the code in the headers as well, assume that:
x + x * eps != x for all x.
It's kind of a fundamental requirement for sensibly reasoning about the
behaviour of floating point types.
But on platforms that use gcc's weird "double double" type as long
double, numeric_limits<long double>::epsilon() is the rather insane
value of 4.9406564584124654e-324. Technically this is correct, since
adding that value to 1.0L does indeed yield a distinct value, but the
broader condition we rely on above holds only for x = 1, not for all x.
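For illustration, a minimal check of that condition against the reported
epsilon might look like the sketch below (assumptions: a C++11 compiler;
the sample values of x are arbitrary and not taken from the Boost tests):

    #include <initializer_list>
    #include <iostream>
    #include <limits>

    int main()
    {
        const long double eps = std::numeric_limits<long double>::epsilon();
        // Test the condition the tests rely on, x + x * eps != x,
        // for a few arbitrary values of x.
        for (long double x : {1.0L, 3.0L, 1e10L})
        {
            std::cout << "x = " << x << ": x + x * eps != x is "
                      << std::boolalpha << (x + x * eps != x) << "\n";
        }
        return 0;
    }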
The workaround is to use 2^(1-D) for epsilon, where D is the number of bits
of precision in the type: in this case 106, as opposed to 64 on x86 or 113
for a "true" 128-bit floating point type. Note that for any "normal" binary
floating point type F, numeric_limits<F>::epsilon() and ldexp(F(1),
1-numeric_limits<F>::digits) yield the same value. It's just these problem
"double double" types that fail this test.
HTH, John.
How do you suggest implementing this particular workaround? We had tried adding a special case for the epsilon function for PPC in precision.hpp, much like the special case found for the Darwin platform in the same file. Or should we implement a solution that uses the workaround you mentioned?
Sincerely,
-Axel
Can you please check and see whether this fix deals with the issue: https://github.com/boostorg/math/commit/7cb3316d0616db01f6c4a56126d4bf8c73e7...
Are there any compilers other than GCC that are relevant here?
Thanks, John.
participants (3)
- Axel Ismirlian
- John Maddock
- JOHN MADDOCK