On 18 November 2014 03:15, Andrzej Krzemienski wrote:
> If so, I am not sure if this idea is a good one and worth promoting in Boost. BTW, this is one of my criteria for letting a library into Boost: whether it promotes worthy ideas. I agree with the statement that a program should be UB-free. But I do not think that the approach of letting the programmer do what he did before, having the library or some run-time tool check for potential UB, and throwing an exception instead, makes the program any better (or safer).
While I don't necessarily agree that throwing is the correct response (it basically makes such types unusable within destructors and in functions called from destructors), there are certainly good use cases for this library.
> It is just hiding the symptoms but not curing the disease. The programmer should not plant the UB in the first place - I agree. But this is different than first making the mess and then having the run-time clean it up for you. I know it works for many people, in a number of languages, and it may even be considered a practical solution, but (by inclusion into Boost) I wouldn't like to be sending the message "this is how you are supposed to code".
C arithmetic is deceptively hard. For instance, look at <http://www.slashslash.info/2014/02/undefined-behavior-and-apples-secure-coding-guide/> and the discussion at <https://plus.google.com/+JonKalb/posts/DfMWdBKHHvJ>. All that comes from discussing this possibly incorrect code:

    size_t bytes = m * n;
    if (bytes < n || bytes < m)
        DoSomething(bytes);

In general, you can either (a) prevent UB, or (b) detect what would be UB just before it happens and do something about it. A BigInt library is one way to achieve (a), but it most certainly isn't applicable in all circumstances.

> I try to recall how I use type int. I do not think I ever use it for
> anything that would be close to "numeric" as I know the term from math.
> Use Case 1 (an index):
> for (size_t i = 0, I = v.size(); i != I; ++i) {
>     if (i != 0) str += ",";
>     str += v[i];
> }
> There doesn't appear to be a good reason to wrap it into safe<int> here, even though the incrementation could possibly overflow.
Assuming v.size() returns a size_t, how can the increment possibly overflow? As I said, this stuff is incredibly difficult to reason about, and I'm not seeing the problem in the above code.

> #ifndef NDEBUG
> typedef safe<int> int_t;
> #else
> typedef int int_t;
> #endif
I hope people don't do this. This isn't about finding programming bugs; rather, it's about something slightly different: detecting conditions where the normal assumptions about arithmetic do not hold. Then again, people call vector::at() for the wrong reasons, so I'm not holding my breath...
> But is this the intent?
> But perhaps it is just my narrow perspective. Can you give me a real-life example where substituting safe<int> for int has merit and is not controversial? I do not mean the code, just a story.
Besides security, just about anything financial. Do you, for instance, want any calculations involving UB to be applied to your bank account? Especially when it is unlikely to be an actual issue until you have a lot of money on the line?

It is, as you point out, a tradeoff between safety and performance. When you are able to make that tradeoff, such a library is a life saver.

--
Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> (847) 691-1404