[interval] unprotect &c.

I find this in the documentation, which sounds self-contradictory:
Unprotected rounding
As explained in this section, a good way to speed up computations when the base type is a basic floating-point type is to unprotect the intervals at the hot spots of the algorithm. This method is safe and really an improvement for interval computations. But please remember that any basic floating-point operation executed inside the unprotection blocks will probably have an undefined behavior (but only for the current thread).
a. That doesn't sound "safe."
   1. there's the potential undefined behavior.
   2. there's the whole notion of "unprotect"-ing the computation. Don't I lose the value of interval computation? That is, will my computed results still reflect the potential error due to floating-point precision limits?
b. How am I going to do any useful computation in an unprotection block without doing any basic floating-point operations?
If it's not self-contradictory, could you explain what it means and, if possible, improve the wording? From reading the docs, it's very unclear what optimization this unprotection mechanism allows, and it's unclear when/how it's mathematically valid to use the results (e.g. why not do all computations that way if it's faster?) I get only a vague sense of the answers to these questions from the docs. Yes, I read the Horner example.
Finally, the use of the term "unprotection block" looks extremely misleading. It looks like you have unprotected datatypes, but "block" implies that there's a lexical scope within which unprotection is in effect. There does seem to be such a notion for rounding mode (by declaring an auto variable of I::traits_type::rounding), but not so for unprotect. Unless I'm gravely confused, which is possible, in which case, again, the docs need to be upgraded.
Thanks, -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Wednesday, 7 December 2005 at 10:59 -0500, David Abrahams wrote:
I find this in the documentation, which sounds self-contradictory:
Unprotected rounding
As explained in this section, a good way to speed up computations when the base type is a basic floating-point type is to unprotect the intervals at the hot spots of the algorithm. This method is safe and really an improvement for interval computations. But please remember that any basic floating-point operation executed inside the unprotection blocks will probably have an undefined behavior (but only for the current thread).
a. That doesn't sound "safe."
Indeed. This is the reason why it is not enabled for the whole program, contrary to what is done in a few other interval libraries. It can be restricted to a scope, and the user has to enable it explicitly.
1. there's the potential undefined behavior.
As soon as you break the assumption other parts of a program make about the rounding mode, you can lead them to invoke undefined behavior. sin(double) can easily return a value that is not between -1 and 1, if it is invoked in a scope where the rounding is not preserved.
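To make this concrete, here is a small self-contained illustration (not taken from the library; the particular numbers are only meant to show the effect, and without the FENV_ACCESS pragma or its vendor equivalent an optimizing compiler is not even required to honor the mode change):

    #include <cfenv>
    #include <cstdio>

    int main()
    {
        volatile double one = 1.0, three = 3.0;   // volatile: prevent constant folding

        double d = one / three;                    // default round-to-nearest
        std::printf("%d\n", d * three <= 1.0);     // prints 1: the usual expectation holds

        std::fesetround(FE_UPWARD);                // roughly what an interval operation
                                                   // does to compute an upper bound
        double u = one / three;                    // now rounded up, so u > 1/3
        std::printf("%d\n", u * three <= 1.0);     // may print 0: ordinary code's
                                                   // assumptions no longer hold
        std::fesetround(FE_TONEAREST);             // restore the default environment
        return 0;
    }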
2. there's the whole notion of "unprotect"-ing the computation. Don't I lose the value of interval computation? That is, will my computed results still reflect the potential error due to floating-point precision limits?
The interval computations are fine; it is the basic floating-point computations that are not.
b. How am I going to do any useful computation in an unprotection block without doing any basic floating-point operations?
Interval computations are useful.
If it's not self-contradictory, could you explain what it means and, if possible, improve the wording?
From reading the docs, it's very unclear what optimization this unprotection mechanism allows, and it's unclear when/how it's mathematically valid to use the results (e.g. why not do all computations that way if it's faster?) I get only a vague sense of the answers to these questions from the docs. Yes, I read the Horner example.
Ideally, compilers should do this optimization themselves. Unfortunately, no compiler that I know of does it. In fact, they are not even able to properly handle the floating-point pragmas, so we are still years away from the time they will handle the optimization. We cannot just tell the users: "in ten years, your compiler will probably be able to optimize the code, you just have to wait till then, before using the library". Please note that this problem plagues all the interval libraries that do not benefit from dedicated compiler support (like the one the Sun compiler provides).

So, in the meantime, we have provided a way for the user to emulate this optimization by manually delimiting program scopes where the rounding mode is not changed and restored at each interval computation. The code can get a few orders of magnitude faster, if it does intensive interval computations.
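Schematically, the usage looks like the Horner example in the documentation: the whole hot spot is wrapped in one rounding scope, and the intervals inside it are unprotected. A minimal sketch, assuming the library's interval_lib::unprotect metafunction from the performance section (the function hot_spot and its body are only illustrative):

    #include <boost/numeric/interval.hpp>

    typedef boost::numeric::interval<double> I;

    I hot_spot(const I& a, const I& b)
    {
        // Switch the rounding mode once for the whole block; it is restored
        // automatically when 'rnd' goes out of scope.
        I::traits_type::rounding rnd;

        // Unprotected flavor of I: its operations assume the rounding mode
        // is already set and do not touch it themselves.
        typedef boost::numeric::interval_lib::unprotect<I>::type R;

        R x(a.lower(), a.upper());
        R y(b.lower(), b.upper());
        R r = (x + y) * x - y;           // no per-operation mode switching here

        return I(r.lower(), r.upper());  // back to a protected interval
        // Note: no ordinary floating-point arithmetic inside this block.
    }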
Finally, the use of the term "unprotection block" looks extremely misleading. It looks like you have unprotected datatypes, but "block" implies that there's a lexical scope within which unprotection is in effect. There does seem to be such a notion for rounding mode (by declaring an auto variable of I::traits_type::rounding), but not so for unprotect. Unless I'm gravely confused, which is possible, in which case, again, the docs need to be upgraded.
I agree the documentation should be clearer. As long as the variable of type I::traits_type::rounding is alive, we are in a scope that is protected. In such a scope, floating-point computations will have strange behaviors, but computations involving unprotected intervals (which run a lot faster than computations involving protected intervals) are able to give correct results. Thanks to your comments, I now understand how speaking of "unprotected" intervals can be misleading. By this term, we intended to express that unprotected intervals lead to incorrect computations, when used outside of a scope protected by a variable of type I::traits_type::rounding. Best regards, Guillaume

Guillaume Melquiond <guillaume.melquiond@ens-lyon.fr> writes:
On Wednesday, 7 December 2005 at 10:59 -0500, David Abrahams wrote:
I find this in the documentation, which sounds self-contradictory:
Unprotected rounding
As explained in this section, a good way to speed up computations when the base type is a basic floating-point type is to unprotect the intervals at the hot spots of the algorithm.
This method is safe
^^^^^^^^^^^^^^^^^^^
and really an improvement for interval computations. But please remember that any basic floating-point operation executed inside the unprotection blocks will probably have an undefined behavior (but only for the current thread).
a. That doesn't sound "safe."
Indeed.
But you just said it was! Make up your mind ;-)
This is the reason why it is not enabled for the whole program, contrary to what is done in a few other interval libraries. It can be restricted to a scope, and the user has to enable it explicitly.
1. there's the potential undefined behavior.
As soon as you break the assumption other parts of a program make about the rounding mode, you can lead them to invoke undefined behavior. sin(double) can easily return a value that is not between -1 and 1, if it is invoked in a scope where the rounding is not preserved.
2. there's the whole notion of "unprotect"-ing the computation. Don't I lose the value of interval computation? That is, will my computed results still reflect the potential error due to floating-point precision limits?
The interval computations are fine; it is the basic floating-point computations that are not.
I figured out that's what you probably meant... eventually. But my whole point is that the answer is very unclear from your docs. I had to scratch my head about it and write a long email to this list before that fact was apparent to me.
b. How am I going to do any useful computation in an unprotection block without doing any basic floating-point operations?
Interval computations are useful.
Only if it's clear to the reader that you can do them without causing UB! ;-) Otherwise, they're just another illegal operation.
If it's not self-contradictory, could you explain what it means and, if possible, improve the wording?
From reading the docs, it's very unclear what optimization this unprotection mechanism allows, and it's unclear when/how it's mathematically valid to use the results (e.g. why not do all computations that way if it's faster?) I get only a vague sense of the answers to these questions from the docs. Yes, I read the Horner example.
Ideally, compilers should do this optimization themselves.
Now I'm really confused. What is the optimization, exactly? It seems to say, "stop tracking computational error altogether," (as though you were using a plain double) but maybe that's not what you mean? [read to the end; I may have figured it out]
Unfortunately, no compiler that I know of does it. In fact, they are not even able to properly handle the floating-point pragmas, so we are still years away from the time they will handle the optimization.
We cannot just tell the users: "in ten years, your compiler will probably be able to optimize the code, you just have to wait till then, before using the library". Please note that this problem plagues all the interval libraries that do not benefit from dedicated compiler support (like the one the Sun compiler provides).
So, in the meantime, we have provided a way for the user to emulate this optimization by manually delimiting program scopes where the rounding mode is not changed and restored at each interval computation.
Do you mean to say that the user delimits a region within which she knows using a single rounding mode for all interval computations will yield correct results (I shouldn't have to guess -- the answer should be obvious from the docs)? If so, why would that invalidate computation with ordinary doubles? The rounding mode isn't normally changed by the compiler for ordinary FP calculation, is it?
The code can get a few orders of magnitude faster, if it does intensive interval computations.
Finally, the use of the term "unprotection block" looks extremely misleading. It looks like you have unprotected datatypes, but "block" implies that there's a lexical scope within which unprotection is in effect. There does seem to be such a notion for rounding mode (by declaring an auto variable of I::traits_type::rounding), but not so for unprotect. Unless I'm gravely confused, which is possible, in which case, again, the docs need to be upgraded.
I agree the documentation should be clearer. As long as the variable of type I::traits_type::rounding is alive, we are in a scope that is protected.
Now you're changing terms again. I thought it was "unprotected!"
In such a scope, floating-point computations will have strange behaviors, but computations involving unprotected intervals (which run a lot faster than computations involving protected intervals) are able to give correct results.
Because rounding for ordinary numbers is supposed to be "round-to-nearest" rather than "round up" or "round down" (at least one of which is needed for interval arithmetic)?
Thanks to your comments, I now understand how speaking of "unprotected" intervals can be misleading. By this term, we intended to express that unprotected intervals lead to incorrect computations, when used outside of a scope protected by a variable of type I::traits_type::rounding.
That's helpful at least. The docs need a lot of help in this area, still. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Thursday, 8 December 2005 at 07:33 -0500, David Abrahams wrote:
Ideally, compilers should do this optimization themselves.
Now I'm really confused. What is the optimization, exactly? It seems to say, "stop tracking computational error altogether," (as though you were using a plain double) but maybe that's not what you mean? [read to the end; I may have figured it out]
The optimization is the elimination of dead reconfigurations of the floating-point unit of the processor. Example:

    old = fpu_mode;
    fpu_mode = 1;
    ...
    fpu_mode = old;   // --
    old = fpu_mode;   //    useless, the compiler should just remove it
    fpu_mode = 1;     // --
    ...
    fpu_mode = old;

Because compilers are unable to do this optimization, we provide a way for the user to enable it manually by unprotecting interval types.
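To spell the same idea out in standalone code, here is a small sketch written with the standard <cfenv> functions (an illustration of the principle only, not the library's actual code, and subject to the usual FENV_ACCESS caveats with current compilers):

    #include <cfenv>

    // What each protected interval operation pays for, schematically:
    // save the rounding mode, set it, compute, restore it.
    double add_up(double a, double b)
    {
        int old = std::fegetround();   // old = fpu_mode;
        std::fesetround(FE_UPWARD);    // fpu_mode = 1;
        double r = a + b;              // ...
        std::fesetround(old);          // fpu_mode = old;
        return r;
    }

    // Hoisting the mode switch around a whole run of operations is the
    // dead-code elimination described above, done by hand: no
    // restore/save/set between consecutive operations.
    double sum_up(double a, double b, double c)
    {
        int old = std::fegetround();
        std::fesetround(FE_UPWARD);
        double r = a + b;
        r = r + c;
        std::fesetround(old);
        return r;
    }

This is, in essence, what unprotecting the interval type lets the user do for a whole block instead of a single operation.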
So, in the meantime, we have provided a way for the user to emulate this optimization by manually delimiting program scopes where the rounding mode is not changed and restored at each interval computation.
Do you mean to say that the user delimits a region within which she knows using a single rounding mode for all interval computations will yield correct results (I shouldn't have to guess -- the answer should be obvious from the docs)?
She doesn't have to know that much; she just has to know that she does only interval computations in that block.
If so, why would that invalidate computation with ordinary doubles? The rounding mode isn't normally changed by the compiler for ordinary FP calculation, is it?
Because the rounding mode isn't changed by the compiler for ordinary FP computations, they will give unexpected results when executed in a block where the interval library has taken over the floating-point unit.
I agree the documentation should be clearer. As long as the variable of type I::traits_type::rounding is alive, we are in a scope that is protected.
Now you're changing terms again. I thought it was "unprotected!"
The interval types are unprotected. So they have to be used in a scope that is protected. Otherwise computations with them would lead to wrong results. Default interval types are protected against an unadapted environment: around each operation, the library modifies the environment and restores it afterwards, so that it computes a correct result. But this protection is costly performance-wise. So you unprotect these types to speed up the computations that involve them. But as a consequence, you can safely do these computations only in a manually protected block.
In such a scope, floating-point computations will have strange behaviors, but computations involving unprotected intervals (which run a lot faster than computations involving protected intervals) are able to give correct results.
Because rounding for ordinary numbers is supposed to be "round-to-nearest" rather than "round up" or "round down" (at least one of which is needed for interval arithmetic)?
Right. The floating-point environment is set up at the start of the program by the C runtime, and compilers generate code that expects this environment to be untouched. Best regards, Guillaume

Guillaume Melquiond <guillaume.melquiond@ens-lyon.fr> writes:
I agree the documentation should be clearer. As long as the variable of type I::traits_type::rounding is alive, we are in a scope that is protected.
Now you're changing terms again. I thought it was "unprotected!"
The interval types are unprotected. So they have to be used in a scope that is protected.
I now understand what you mean, but find the terminology quite confusable. The docs need a better explanation.
Otherwise computations with them would lead to wrong results.
Default interval types are protected against an unadapted environment: around each operation, the library modifies the environment and restores it afterwards, so that it computes a correct result. But this protection is costly performance-wise. So you unprotect these types to speed up the computations that involve them. But as a consequence, you can safely do these computations only in a manually protected block.
In such a scope, floating-point computations will have strange behaviors, but computations involving unprotected intervals (which run a lot faster than computations involving protected intervals) are able to give correct results.
Because rounding for ordinary numbers is supposed to be "round-to-nearest" rather than "round up" or "round down" (at least one of which is needed for interval arithmetic)?
Right. The floating-point environment is set up at the start of the program by the C runtime, and compilers generate code that expects this environment to be untouched.
Okay, I think I understand it all, thanks. Are you planning to do something to the docs in response to all this? -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Thursday, 8 December 2005 at 16:55 -0500, David Abrahams wrote:
Okay, I think I understand it all, thanks. Are you planning to do something to the docs in response to all this?
Of course. As long as somebody who wants to use the interval library does not get it from the documentation, it needs reworking. Best regards, Guillaume