2015-12-12 1:18 GMT+01:00 Robert Ramey
what other parts?
1. safe_unsigned_range -- while I understand the idea behind it, I don't think I would be inclined to use it.
And also, I believe it addresses a
different problem, and could be handled by another library, like: http://rk.hekko.pl/constrained_value/
I don't remember seeing this before. Clearly there is some overlap.
2. safe<unsigned int> -- this looks suspicious. Unlike signed int, unsigned
int does not overflow.
It does overflow. The only difference is that it doesn't result in undefined behavior. The behavior is still arithmetically incorrect though.
It is meant to represent the modulo arithmetic with
well defined result for any input values.
Hmmm - I didn't believe that. But I checked and found that std::numeric_limits<unsigned int>::is_modulo is set to true.
I'm going to argue that many programmers use unsigned as a number which can only be positive and do not explicitly consider the consequences of overflow. In order to make a drop-in replacement which is safe, we need this.
They do. And they get themselves into a lot of trouble, e.g.: unsigned i = -1; They also lose the UB, and therefore lose the chance of a good compiler or static analyzer detecting some bugs.
BTW - the utility of using a built-in unsigned as a modulo integer with a specific number of bits is pretty small. If I want a variable to hold the minutes in an hour or eggs in a dozen, it doesn't help me. I suspect that unsigned is used as a modulo integer in only a few very odd cases - and of course one wouldn't apply "safe" in those cases.
True. The advice I have heard quite often now is: use unsigned only for bitmasks.
If I do not want modulo
arithmetic, I would rather go with safe<int> than safe<unsigned>.
You might not have a choice. If you're re-working some existing program which uses the unsigned range, you'll need this.
Agreed.
In the context of this library the safe_range ... are important for a very special reason. The bounds are carried around with type of expression results. So if I write
safe<int> a, x, b, y; y = a * x + b;
runtime checking will generally have to be performed. But if I happen to know that my variables are limited to a certain range:
safe_integer_range<-100, 100> a, x, b, y; y = a * x + b;
Then it can be known at compile time that y can never overflow so no runtime checking is required. Here we've achieved the holy grail:
a) guaranteed correct arithmetic result
b) no runtime overhead
c) no exception code emitted
d) no special code - we just write algebraic expressions
How can this be possible? If I assign: a = 100; x = 100; b = 100; then (a * x + b) results in 10100, which will overflow upon assignment to y. Or am I missing something? I can imagine that the following works, though:
safe_integer_range<-100, 100> a, x, b;
safe_integer_range<-10100, 10100> y = a * x + b;
Regards,
&rzej