Hi,

I was going through the Safe Numerics library in the Boost Library Incubator, and I realized I disagree with the basic idea it is built on, so I wanted to raise my concerns here. If I were to summarize in one sentence what this library is, I would say: a drop-in replacement for type int that checks for overflow at run-time and throws an exception if it finds one. Did I get it right? The remainder of this post is based on this interpretation.

If so, I am not sure this idea is a good one and worth promoting in Boost. BTW, this is one of my criteria for letting a library into Boost: whether it promotes worthy ideas.

I agree with the statement that a program should be UB-free. But I do not think that the approach of letting the programmer do what he did before, having the library or some run-time tool check for potential UB, and throwing an exception instead, makes the program any better (or safer). It is just hiding the symptoms rather than curing the disease. The programmer should not plant the UB in the first place - I agree. But this is different from first making the mess and then having the run-time clean it up for you. I know it works for many people, in a number of languages, and it may even be considered a practical solution, but (by inclusion into Boost) I wouldn't like to be sending the message "this is how you are supposed to code".

I tried to recall how I use type int. I do not think I ever use it for anything close to "numeric" as I know the term from math.

Use Case 1 (an index):

    for (size_t i = 0, I = v.size(); i != I; ++i) {
        if (i != 0) str += ",";
        str += v[i];
    }

There doesn't appear to be a good reason to wrap the index into safe<int> here, even though the incrementation could in principle overflow. Plus, it would kill my performance.

Use Case 2 (a reasonably small range): I used an int to represent a square on a chessboard. There are only 64 squares, so I couldn't possibly overflow, on whatever platform.
And even if there existed a platform where 64 doesn't fit into an int, I would not use safe<int> there; I would rather go for something like double_int.

If I were to use some numeric computations on integers and I perceived any risk of overflow, I would not be satisfied with having the computations stop because of an exception. I would rather use a bigger type (a BigInt?). I do not think int is even meant to be used in numerical computations. I believe it is supposed to be a building block for implementing more useful types like BigInt.

One good usage example I can think of is this. After a while of trying to chase a bug, I came up with the hypothesis that my int could be overflowing. I temporarily replace it with safe<int> and put a breakpoint in function overflow() to trap the situation and confirm my hypothesis. I would probably use a configurable typedef then:

    #ifndef NDEBUG
    typedef safe<int> int_t;
    #else
    typedef int int_t;
    #endif

But is this the intent?

But perhaps it is just my narrow perspective. Can you give me a real-life example where substituting safe<int> for int has merit and is not controversial? I do not mean the code, just a story.

Regards,
&rzej