
On 12/14/15 12:31 AM, Andrzej Krzemienski wrote:
While I do agree with your point of view, I do not agree with the choice of the example. I would say that the problem here comes from the fact that an unsigned type is used for anything other than a bit-set. To fix it, and to avoid such problems in the future, one does not necessarily have to use a library, but can simply apply the rule "never use unsigned to represent integer numbers, even the positive ones".
My library does not purport to "fix" any problems - only to detect them. So you might think of my library as a way to detect violations of your rule. I don't see anything in the library that would have to be changed in order to serve your stated purpose.
And it would work for some time, until I start playing with bigger numbers:
#include <cstdint>
#include <iostream>

int main() {
    int8_t x = 100;
    int y = x * x;        // x is promoted to int, so 10000 fits
    std::cout << y << std::endl;

    int z1 = 1000000000;
    int z2 = 1000000000;
    auto y2 = z1 * z2;    // signed int overflow: undefined behavior
    std::cout << y2 << std::endl;
    return 0;
}
And at this point I need safe<int>
Well, feel free to use it! (Or a BigInt - which to me is an entirely different thing for a different purpose.) The main purpose of the safe numerics library is to reconcile the conflict between what people expect when they see an arithmetic expression and what current computer languages actually do. BigInt - or John Maddock's multiprecision numerics - addresses an entirely different concern: C++ doesn't support arbitrarily large numbers, and those libraries do. You could say that those libraries also address the conflict - but they do so in a very different way. So one or the other is a better choice depending on the purpose to which it is to be put. These libraries complement each other.

Robert Ramey
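
To make the contrast concrete, here is a minimal sketch of detecting the overflow above with safe<int>. The header path and namespace follow the Boost.SafeNumerics version later accepted into Boost, and are an assumption here; the snapshot under discussion may spell them differently.

#include <boost/safe_numerics/safe_integer.hpp>
#include <iostream>

int main() {
    using boost::safe_numerics::safe;
    try {
        safe<int> z1 = 1000000000;
        safe<int> z2 = 1000000000;
        safe<int> y2 = z1 * z2;   // result cannot fit in int: throws
        std::cout << y2 << std::endl;
    } catch (const std::exception& e) {
        // the error is detected and reported instead of silently wrapping
        std::cout << e.what() << std::endl;
    }
    return 0;
}

A BigInt, by contrast, simply represents the larger value rather than reporting an error - a sketch of the same computation using Boost.Multiprecision's cpp_int:

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main() {
    boost::multiprecision::cpp_int z1 = 1000000000;
    boost::multiprecision::cpp_int y2 = z1 * z1;   // exact: 10^18
    std::cout << y2 << std::endl;                  // prints 1000000000000000000
    return 0;
}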