

I recently wrote a proposal: http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?SecureInteger...
Does boost have anything like this?
// Returns whether an operation will go out of bounds
// given the values which will be passed to that operation
template <typename Type>
struct overflow_traits
{
    bool neg(const Type & val);
    bool add(const Type & lhs, const Type & rhs);
    bool sub(const Type & lhs, const Type & rhs);
    bool mul(const Type & lhs, const Type & rhs);
    bool div(const Type & lhs, const Type & rhs);
    bool mod(const Type & lhs, const Type & rhs);
    //...
};
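For a concrete illustration, here is a rough sketch (my own, not from the proposal) of how the add check could be implemented for a signed integral Type using std::numeric_limits; the member is made static here for convenience:

```cpp
#include <limits>

// Sketch: report whether lhs + rhs would go out of bounds for Type,
// without performing the (potentially undefined) addition itself.
template <typename Type>
struct overflow_traits
{
    static bool add(const Type & lhs, const Type & rhs)
    {
        if (rhs > 0)  // could overflow toward max
            return lhs > std::numeric_limits<Type>::max() - rhs;
        if (rhs < 0)  // could overflow toward min
            return lhs < std::numeric_limits<Type>::min() - rhs;
        return false;
    }
};
```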
-- Alexander Nasonov

"Alexander Nasonov" <alnsn@yandex.ru> wrote in message news:45050C11.000002.25346@mfront8.yandex.ru...
I recently wrote a proposal: http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?SecureInteger...
What about throwing an exception?

try {
    size = checked(_1 + _2 etc)(...);
}
catch (std::runtime_error e) {
}

It's a great idea though anyway.

regards
Andy Little

What about throwing an exception?
try {
    size = checked(_1 + _2 etc)(...);
}
catch (std::runtime_error e) {
}
Actually, your suggestion is what I started with. But I rejected it because "size = something" should be checked too. I should have put it on the Wiki, though, as it's not a trivial point.

-- Alexander Nasonov

From: Alexander Nasonov
I recently wrote a proposal:
http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?SecureIntegerOperations

Hi! I'm very glad someone is willing to address this problem, which seems to be neglected. ;-)

You might want to take a look at SafeInt:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp

It takes a different approach, which IMO is more convenient in most situations than the free functions: it defines a wrapper class template which controls all the arithmetic operations of the underlying type. The code is copyrighted and highly platform-dependent, but I think that if Boost is going to have any arithmetic-operations checking mechanism, then it should rather look like this.

Also, I think an interface consisting of a bunch of C-style functions with the arguments' types pushed into the functions' names is a bit inadequate for a modern C++ library - it makes it very difficult to use the library in generic code, and it makes it easy to perform unwanted argument conversions.

Best regards,
Robert

You might want to take a look at SafeInt:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp

It takes a different approach, which IMO is more convenient in most situations than the free functions: it defines a wrapper class template which controls all the arithmetic operations of the underlying type. The code is copyrighted and highly platform-dependent, but I think that if Boost is going to have any arithmetic-operations checking mechanism, then it should rather look like this.
I agree that C++ lacks safe integer types, but I disagree that throwing an exception is a good idea:

- Overflows tend to happen under rare circumstances, and tests usually don't cover all of them (or don't cover them at all)
- It's hard to view an innocent i = i + j as an expression that may throw
- Throwing an exception from a place where it's not expected often breaks invariants (in C++, broken invariants are often subtle and dangerous)
- There is no way to grep for overflow checks
- Unlike ignored return values, compilers don't print any warning on an ignored throw
- I can hardly imagine changing some int members of popular classes in the hope that it would magically work once I resolve hundreds of compiler errors
Also, I think the interface containing a bunch of C-style set of functions with arguments' types pushed into the functions' names is a bit inadequate for a modern C++ library - it makes it very difficult to use your library in a generic code, and it makes it easy to perform unwanted argument conversions.
Don't say "generic" to people who care about security. It's hard to check even ordinary code. I don't know what they would do if you sent them generic code, especially if it looks very much like basic integer operations but may throw ;-) I don't know what they think about Boost.Lambda either. It can be generic to a certain extent if coded carefully, but that's definitely too much for the average Joe.

-- Alexander Nasonov

"Alexander Nasonov" <alnsn@yandex.ru> writes:
You might want to take a look at SafeInt:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp

It takes a different approach, which IMO is more convenient in most situations than the free functions: it defines a wrapper class template which controls all the arithmetic operations of the underlying type. The code is copyrighted and highly platform-dependent, but I think that if Boost is going to have any arithmetic-operations checking mechanism, then it should rather look like this.
I agree that C++ lacks safe integer types, but I disagree that throwing an exception is a good idea:
- Overflows tend to happen under rare circumstances, and tests usually don't cover all of them (or don't cover them at all)
That's all typical of anything that's reported by exceptions.
- It's hard to view an innocent i = i + j as an expression that may throw

A basic lesson one must learn to write exception-safe code: "innocent" code might throw.

- Throwing an exception from a place where it's not expected often breaks invariants
True. You can't simply plug in something that throws where something nonthrowing existed previously. These are not drop-in replacements for unsigned integral types. Signed integrals exhibit undefined behavior on overflow (IIRC), so arguably they can be drop-in replacements for those.
(in C++, broken invariants are often subtle and dangerous)
- No way to grep overflow checks
What does that mean?
- Unlike ignored return values, compilers don't print any warning on an ignored throw
- I can hardly imagine changing some int members of popular classes in the hope that it would magically work once I resolve hundreds of compiler errors
What does that mean?

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
"Alexander Nasonov" <alnsn@yandex.ru> writes:
(in C++, broken invariants are often subtle and dangerous) - No way to grep overflow checks
What does that mean?
The first statement means that, for example, you can leave some member uninitialized when an exception is thrown unexpectedly. The second means that it's harder to identify all the places where overflow checks are made. Compare

    i = i + j

and

    is_mathematically_correct(_1 = _1 + _2)(i, j)

The latter can be found with: grep -rl is_mathematically_correct .
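As a toy illustration of this greppable style (my own sketch - checked_add_assign is a hypothetical name, not the lambda-based syntax above): the check and the assignment travel together, and every call site contains a name grep can find.

```cpp
#include <limits>

// Perform i = i + j only if the addition cannot overflow; report success.
// "grep -r checked_add_assign" then finds every overflow check in a tree.
bool checked_add_assign(int & i, int j)
{
    if ((j > 0 && i > std::numeric_limits<int>::max() - j) ||
        (j < 0 && i < std::numeric_limits<int>::min() - j))
        return false;  // would overflow: leave i untouched
    i = i + j;
    return true;
}
```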
Unlike ignored return values, compilers don't print any warning on an ignored throw - I can hardly imagine changing some int members of popular classes in the hope that it would magically work once I resolve hundreds of compiler errors
What does that mean?
Sentence 1. If your goal is to identify all the places where error checking is ignored, you may try to increase the warning level and see if there are any "return value is ignored" warnings. However, that's not the case for errors that are indicated by throwing an exception.

Sentence 2. You wrote:
True. You can't simply plug in something that throws where something nonthrowing existed previously. These are not drop-in replacements for unsigned integral types. Signed integrals exhibit undefined behavior on overflow (IIRC), so arguably they can be drop-in replacements for those.
I meant exactly this when I wrote about changing a type from int to something close to int yet different because it may throw.

-- Alexander Nasonov

"Alexander Nasonov" <alnsn@yandex.ru> writes:
David Abrahams wrote:
"Alexander Nasonov" <alnsn@yandex.ru> writes:
(in C++, broken invariants are often subtle and dangerous) - No way to grep overflow checks
What does that mean?
The first statement means that, for example, you can leave some member uninitialized when an exception is thrown unexpectedly. The second means that it's harder to identify all the places where overflow checks are made. Compare i = i + j and is_mathematically_correct(_1 = _1 + _2)(i, j). The latter can be found with: grep -rl is_mathematically_correct .
Ah, that's a very interesting interface! I like it.
Unlike ignored return values, compilers don't print any warning on an ignored throw - I can hardly imagine changing some int members of popular classes in the hope that it would magically work once I resolve hundreds of compiler errors
What does that mean?
Sentence 1. If your goal is to identify all the places where error checking is ignored, you may try to increase the warning level and see if there are any "return value is ignored" warnings. However, that's not the case for errors that are indicated by throwing an exception.
Yes, I got that part.
Sentence 2. You wrote:
True. You can't simply plug in something that throws where something nonthrowing existed previously. These are not drop-in replacements for unsigned integral types. Signed integrals exhibit undefined behavior on overflow (IIRC), so arguably they can be drop-in replacements for those.
I meant exactly this when I wrote about changing a type from int to something close to int yet different because it may throw.
Where do "hundreds of compiler errors" come into it?

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com

I think that adding semantics to the actual operations on integers (like allowing addition to throw) is dangerous and could easily cause unexpected behavior. Allowing a check to be made before the operation is performed makes what the code does clearer and more explicit, and doesn't incur the performance penalty in places where performing the check is not critical (for instance, if you know at design time that you are subtracting a positive signed number from another positive signed number, overflow is impossible).

I suggested some sort of overflow_traits because I thought it nicely mirrored numeric_traits, which allows the bounds of a built-in type to be determined. In my case specifically, I only want to test whether an operation will overflow or underflow and do something different as a result.

I am trying to create a rational number class that does not break under overflow conditions, but just loses precision instead. When an overflow is detected, I want to divide both numerator and denominator by 2 and keep trying. Also, I plan to support infinity, negative infinity, and NaN (1/0, -1/0, and 0/0), so overflows which are really overflows for the rational type will result in these values.
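A rough sketch of the halving strategy described above (my own illustration with hypothetical names, not the actual class; num and den assumed non-negative for brevity):

```cpp
#include <limits>

// Would a * b overflow int? (a, b assumed non-negative)
bool mul_overflows(int a, int b)
{
    return b != 0 && a > std::numeric_limits<int>::max() / b;
}

// Multiply the rational num/den by an integer factor; when the numerator
// would overflow, halve num and den, losing precision instead of breaking.
void lossy_scale(int & num, int & den, int factor)
{
    while (mul_overflows(num, factor))
    {
        if (den == 1)
        {
            num = std::numeric_limits<int>::max();  // saturate: "infinity"
            return;
        }
        num /= 2;
        den /= 2;
    }
    num *= factor;
}
```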

FWIW, I think both methods (pre-checking or using a safe_int wrapper class) are useful. Sometimes you don't want to change code but make some assertions; sometimes you want even more security.

Philippe

"Jason Hise" <0xchaos@gmail.com> writes:
I think that adding semantics to the actual operations on integers (like allowing addition to throw) is dangerous and could easilly cause unexpected behavior.
Overflowing addition of signed integers is _currently_ allowed to throw, as one expression of undefined behavior.

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com

On 9/11/06, Jason Hise <0xchaos@gmail.com> wrote:
I am trying to create a rational number class that does not break under overflow conditions, but just loses precision instead. When an overflow is detected, I want to divide both numerator and denominator by 2 and keep trying. Also, I plan to support infinity, negative infinity, and NaN (1/0, -1/0, and 0/0), so overflows which are really overflows for the rational type will result in these values.
You can download something pretty raw, but already doing what you plan to do, from www/ccg.hu/pub/src/rat. Check the comments at the beginning of bench1-op.cpp ... bench4-det.cpp to find out how it compares to double (and boost::rational). There's a m$ project file included; on linux you can compile with

    g++ -Wall -I<boost-path> -O3 -DFXS bench1-op.cpp

br,
andras

"Alexander Nasonov" <alnsn@yandex.ru> wrote in message news:45055A4E.000003.05188@colgate.yandex.ru...
- Overflows tend to happen under rare circumstances and tests usually don't cover all of them (or even don't cover at all)
This seems more like a reason for than against, unless you just prefer things to quietly keep going like nothing happened? As, after all, nothing will seem to have happened if you haven't checked. And that is Real Bad IMO.

regards
Andy Little

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
This seems more like a reason for than against, unless you just prefer things to quietly keep going like nothing happened? As, after all, nothing will seem to have happened if you haven't checked. And that is Real Bad IMO.
I agree that generating a crash dump is better, but when you throw, you should be prepared for your exception to be caught. For example, a hypothetical server processes user requests in a loop with a try/catch inside. If processing a request throws, the server just prints a message to syslog and continues.

-- Alexander Nasonov

Alexander Nasonov wrote :
Don't say "generic" to people who care about security.
I don't see how generics are a security issue. Could you explain that argument?
Compare

    template<class T>
    inline T max(T const& a, T const& b) { return a > b ? a : b; }

and

    inline int max(int a, int b) { return a > b ? a : b; }

When you use the first version, you should check that every T you pass to max() has a greater-than operator with the expected behavior, that copy ctors copy without side effects, etc. (try to continue my list). With flexibility comes danger.

-- Alexander Nasonov

From: Alexander Nasonov

- Overflows tend to happen under rare circumstances, and tests usually don't cover all of them (or don't cover them at all)
You're right that the checks are not needed everywhere, but think of places where a programmer *thought* they were not needed but in fact they were - this is not an uncommon bug, and I treat this class as a method to avoid such bugs. OTOH you seem to have a different goal - to provide checks only when a programmer needs them. Maybe the best solution, as Philippe Vaucher pointed out, is to have both interfaces: one explicit (free functions), and a second, implicit one (a wrapper class, which would make use of the former).
- Throwing an exception from a place where it's not expected often breaks invariants (in C++, broken invariants are often subtle and dangerous)
Not less than UB, which would probably happen in most situations where the exception would be thrown. I think an exception is way better than UB ;-)
- No way to grep overflow checks
Good point.
- I can hardly imagine changing some int members of popular classes in the hope that it would magically work once I resolve hundreds of compiler errors
Also, I think the interface containing a bunch of C-style functions with the arguments' types pushed into the functions' names is a bit inadequate for a modern C++ library - it makes it very difficult to use your library in generic code, and it makes it easy to perform unwanted argument conversions.

Oh, I think you exaggerate a bit ;-)
Don't say "generic" to people who care about security.
I don't see why genericity would make security be sacrificed... What I was thinking of was making the functions statically polymorphic, and this would make them even more secure. This is because the user doesn't have to care about the operands' type (which means fewer errors when he makes a mistake as to what the type is). So, instead of

    int addsi(int lhs, int rhs);
    unsigned int addui(unsigned int lhs, unsigned int rhs);

there could be

    signed int add(signed int, signed int);
    unsigned int add(unsigned int, unsigned int);

This way we have the same functionality the set of C-style functions has, plus:
- there are no unintended implicit argument conversions when the user makes an error and uses the wrong function for a type,
- if the type of the argument changes, there's no need to manually change all the calls (which would be another error-prone activity),
- the functions can be used in generic code.

Is there really a reason why this approach is less secure than yours?

Best regards,
Robert

Robert Kawulak <kawulak <at> student.agh.edu.pl> writes:
You're right that the checks are not needed everywhere, but think of places where a programmer *thought* they were not needed but in fact they were - this is not an uncommon bug, and I treat this class as a method to avoid such bugs.

Now think of a programmer who believes that their code is safe because it throws on overflow, but the code isn't safe because it contains subtle bugs that appear only after the exception is thrown (e.g. broken invariants).
What would you prefer to look for in code: missing checks, or broken invariants?
OTOH you seem to have a different goal - to provide checks only when a programmer needs them. Maybe the best solution, as Philippe Vaucher pointed out, is to have both interfaces - one explicit (free functions), and the second - implicit (a wrapper class, which would make use of the former).
I'm all for two libraries.
- Throwing an exception from a place where it's not expected often breaks invariants (in C++, broken invariants are often subtle and dangerous)
Not less than UB, which would probably happen in most situations where the exception would be thrown. I think an exception is way better than UB
... if not swallowed by try/catch and silently ignored.
popular classes in the hope that it would magically work once I resolve hundreds of compiler errors
Oh, I think you exaggerate a bit
No, I was in similar situations several times.
I don't see why genericity would make security be sacrificed...

It's much harder to check generic code.
What I was thinking of was making the functions statically polymorphic, and this would make them even more secure. This is because the user doesn't have to care about the operands' type (which means fewer errors when he makes a mistake as to what the type is). So, instead of
    int addsi(int lhs, int rhs);
    unsigned int addui(unsigned int lhs, unsigned int rhs);
there could be
    signed int add(signed int, signed int);
    unsigned int add(unsigned int, unsigned int);
    enum { e1 = 1U + UINT_MAX / 2U, /* not shown on purpose */ };
    long i = add(e1, LONG_MAX / 2L);

Try to audit this "simple" code (hints: integral promotion, usual arithmetic conversion, bit representation).
This way we have the same functionality the set of C-style functions has, plus:
- there are no unintended implicit argument conversions when the user makes an error and uses the wrong function for a type,
- if the type of the argument changes, there's no need to manually change all the calls (which would be another error-prone activity),
- the functions can be used in generic code.

Is there really a reason why this approach is less secure than yours?
See code snippet above.

-- Alexander Nasonov
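The flavor of surprise being hinted at can be shown with a much smaller example (my own illustration, not from the thread): the usual arithmetic conversions can silently turn a signed value into a huge unsigned one.

```cpp
// When int meets unsigned int in a comparison, the int operand is
// converted to unsigned, so -1 becomes UINT_MAX and this "obvious"
// comparison evaluates to false.
inline bool minus_one_less_than_one_unsigned()
{
    return -1 < 1u;
}
```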

Alexander Nasonov wrote :
Now think of a programmer who believes that their code is safe because it throws on overflow, but the code isn't safe because it contains subtle bugs that appear only after the exception is thrown (e.g. broken invariants).
What would you prefer to look for in code: missing checks, or broken invariants?
Good code should be exception-safe anyway.

Alexander Nasonov wrote :
What would you prefer to look for in code: missing checks, or broken invariants?
Good code should be exception-safe anyway.
Point one: Get real. People rarely have time to analyze exception safety.

Point two: All C code is exception-safe by definition. It's much easier to audit C code because one doesn't have to think about a rather unusual path of execution.

-- Alexander Nasonov

Alexander Nasonov wrote :
Point one: Get real. People rarely have time to analyze exception safety.
All resources should be released in destructors, thus achieving basic exception safety and often commit-or-rollback semantics. That's RAII, the basics of modern C++ programming. Of course, that means code which would use safe integers throwing exceptions, instead of preliminary checks on unsafe integers, will have to be written in a modern way. And the preliminary checks are also more efficient, since exceptions incur runtime overhead, but the exceptions are easier to use.
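A minimal sketch of that point (my own illustration): with RAII, the cleanup runs on every path out of the scope, including the one taken when a checked integer operation throws.

```cpp
#include <stdexcept>

// Toy RAII guard: the "resource" is released in the destructor, so it is
// released even when an exception unwinds the stack.
struct Guard
{
    bool & released;
    explicit Guard(bool & flag) : released(flag) { released = false; }
    ~Guard() { released = true; }
};

bool demo()
{
    bool released = false;
    try
    {
        Guard g(released);
        throw std::runtime_error("overflow");  // e.g. a checked add throwing
    }
    catch (const std::runtime_error &) {}
    return released;  // true: the destructor ran during stack unwinding
}
```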
Point two: All C code is exception-safe by definition. It's much easier to audit C code because one doesn't have to think about rather unusual path of execution.
I thought Boost did C++ libraries, not C ones. Therefore, the C way is irrelevant in Boost.

Anyway, to me it doesn't seem that C code is exception-safe by definition at all. Feel free to correct me if I'm wrong, though, even if this is going offtopic.

A piece of code is said to be exception-safe if run-time failures within the code will not produce ill effects, such as memory leaks, garbled data or invalid output. Exception-safe code must satisfy invariants placed on the code even if exceptional situations occur.

The fact that C doesn't have an exception system bundled in the language (well, actually it has something similar, but it doesn't provide resource liberation) doesn't make it exception-safe. It is actually much more difficult to achieve exception safety in that language, since you have to do many checks and pass error codes to higher layers through return values when not having enough context. If you were to forget one of those checks, everything could go wrong. Hence, indeed, the particular need to analyze code in C.

"Alexander Nasonov" <alnsn@yandex.ru> wrote
All C code is exception-safe by definition. It's much easier to audit C code because one doesn't have to think about rather unusual path of execution.
I am not sure about the exact definition of exception safety, but to me it also involves the ability to safely _return_ from any point in the function...

Regards,
Arkadiy

"Alexander Nasonov" <alnsn@yandex.ru> writes:
Good code should be exception-safe anyway.
Point one: Get real. People rarely have time to analyze exception safety.
Yes, people rarely have time to write correct code. That said, so what? Shouldn't good code be correct?

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com

"Alexander Nasonov" <alnsn@yandex.ru> writes:
Point two: All C code is exception-safe by definition.
Until you invoke undefined behavior (e.g. by overflowing an int), in which case the code can throw exceptions (or do anything else it likes) and all bets are off.

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com

Point one: Get real. People rarely have time to analyze exception safety.
You can make the same point just about any correct programming approach. Why do it correctly when you can hack your way to something that works?
Point two: All C code is exception-safe by definition. It's much easier to audit C code because one doesn't have to think about rather unusual path of execution.
In C you still have undefined behavior. More to the point, overflowing a signed integer is undefined behavior in C. The compiler is free to generate code which does anything at all in this case, like sending a nasty email to your boss. Or throwing a C++ exception.

--Emil
participants (10)
- Alexander Nasonov
- Andras Erdei
- Andy Little
- Arkadiy Vertleyb
- David Abrahams
- Emil Dotchevski
- Jason Hise
- loufoque
- Philippe Vaucher
- Robert Kawulak