
I think that adding semantics to the actual operations on integers (like allowing addition to throw) is dangerous and could easily cause unexpected behavior. Allowing a check to be made before the operation is performed makes what the code does clearer and more explicit, and it doesn't impose the performance penalty in places where the check is unnecessary (for instance, if you know at design time that you are subtracting a positive signed number from another positive signed number, overflow is impossible).

I suggested some sort of overflow_traits because I thought it nicely mirrored numeric_traits, which allows the bounds of a built-in type to be determined. In my case specifically, I only want to test whether an operation will overflow or underflow and do something different as a result (see the first sketch below).

I am trying to create a rational number class that does not break under overflow conditions, but loses precision instead. When an overflow is detected I want to divide both numerator and denominator by 2 and keep trying (see the second sketch below). I also plan to support infinity, negative infinity, and NaN (represented as 1/0, -1/0, and 0/0), so overflows of the rational value itself, rather than merely of its components, will result in these special values.
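To make the first point concrete, here is a minimal sketch of the kind of interface I have in mind. The name overflow_traits and the members add_overflows and mul_overflows are hypothetical, just illustrations of the idea rather than an existing library API; the checks themselves rely only on std::numeric_limits and never perform the operation being tested.

    #include <limits>

    // Hypothetical interface: test whether an operation *would*
    // overflow, without performing it.  Names are illustrative only.
    template <typename T>
    struct overflow_traits
    {
        // True if a + b would fall outside the representable range of T.
        static bool add_overflows(T a, T b)
        {
            if (b > 0 && a > std::numeric_limits<T>::max() - b)
                return true;   // would exceed max()
            if (b < 0 && a < std::numeric_limits<T>::min() - b)
                return true;   // would fall below min()
            return false;
        }

        // True if a * b would fall outside the representable range of T.
        static bool mul_overflows(T a, T b)
        {
            if (a == 0 || b == 0)
                return false;
            if (a > 0)
                return b > 0 ? a > std::numeric_limits<T>::max() / b
                             : b < std::numeric_limits<T>::min() / a;
            return b > 0 ? a < std::numeric_limits<T>::min() / b
                         : a < std::numeric_limits<T>::max() / b;
        }
    };

A caller who knows from the design that the operands are safe simply skips the test and pays nothing; a caller who needs the guarantee writes something like

    if (!overflow_traits<int>::add_overflows(x, y))
        sum = x + y;
    else
        recover();  // widen, saturate, shed precision, ...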
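And here is roughly how the rational class would use it for multiplication. Again this is only a sketch building on the hypothetical overflow_traits above; the real class would also reduce by the gcd, implement the remaining operators, and handle the special values on input as well as output.

    // Sketch of a precision-losing multiply.  den == 0 encodes
    // infinity (num > 0), negative infinity (num < 0), or NaN
    // (num == 0), as described above.
    template <typename T>
    struct rational
    {
        T num, den;

        rational operator*(rational rhs) const
        {
            rational lhs = *this;
            // Shed one low-order bit at a time until both products fit.
            while (overflow_traits<T>::mul_overflows(lhs.num, rhs.num) ||
                   overflow_traits<T>::mul_overflows(lhs.den, rhs.den))
            {
                lhs.num /= 2;
                lhs.den /= 2;
            }
            // If den has been driven to 0, the result is one of the
            // special values -- exactly the intended outcome for a
            // genuine overflow of the rational value itself.
            return rational{lhs.num * rhs.num, lhs.den * rhs.den};
        }
    };

Halving numerator and denominator together keeps the operand's value approximately unchanged while losing one bit of precision per iteration, and the loop always terminates because mul_overflows is false once either factor reaches 0.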