Arithmetic operations in C++ are NOT guaranteed to yield a correct mathematical result. This feature is inherited from the early days of C. The behavior of int, unsigned int and others was designed to map closely to the underlying hardware. Computer hardware implements these types as a fixed number of bits. When the result of arithmetic operations exceeds this number of bits, the result will not be arithmetically correct.
I have crafted a library to address this issue once and for all. You can find out more about this by checking out the page for Safe Numerics at the boost library incubator: www.blincubator.com
I hereby request that this library be added to the boost review queue.
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard. You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
Robert Ramey
On 12/10/2015 2:49 AM, Robert Ramey wrote:
Arithmetic operations in C++ are NOT guaranteed to yield a correct mathematical result. This feature is inherited from the early days of C. The behavior of int, unsigned int and others was designed to map closely to the underlying hardware. Computer hardware implements these types as a fixed number of bits. When the result of arithmetic operations exceeds this number of bits, the result will not be arithmetically correct.
I have crafted a library to address this issue once and for all. You can find out more about this by checking out the page for Safe Numerics at the boost library incubator. www.blincubator.com
I hereby request that this library be added to the boost review queue.
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard.
You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
Shouldn't the library be in Boost directory format and also use bjam to be added to the boost review queue ?
On 12/10/15 5:58 AM, Edward Diener wrote:
Shouldn't the library be in Boost directory format and also use bjam to be added to the boost review queue ?
hmmm - maybe. Certainly if it were to get accepted. But I'm not sure that's an official review requirement. In any case, it only entails inserting two directories, boost/safe, which have no other members. I wouldn't think this is a major obstacle. Robert Ramey
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Robert Ramey Sent: 10 December 2015 07:50 To: boost@lists.boost.org Subject: [boost] a safe integer library
Arithmetic operations in C++ are NOT guaranteed to yield a correct mathematical result. This feature is inherited from the early days of C. The behavior of int, unsigned int and others was designed to map closely to the underlying hardware. Computer hardware implements these types as a fixed number of bits. When the result of arithmetic operations exceeds this number of bits, the result will not be arithmetically correct.
I have crafted a library to address this issue once and for all. You can find out more about this by checking out the page for Safe Numerics at the boost library incubator. www.blincubator.com
I hereby request that this library be added to the boost review queue.
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard.
You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
Detecting and handling overflow (and underflow) is certainly a big missing item with C/C++. But I'm not sure that your proposal is radical enough.
I'm sure that your solution will work (though I haven't been able to study it in detail yet), but if you allow definition of a safe minimum and maximum then I fear that you are paying the big cost in speed that comes from not using the built-in carry bit provided by all the processors that we care about. So I fear it is premature until we have solved the problem of detecting overflow and underflow efficiently.
Lawrence Crowl, Overflow-Detecting and Double-Wide Arithmetic Operations http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0103r0.html
This seems something that would allow your solution to be efficient, at least for built-in integral types? (And not everyone wants to throw exceptions - even if perhaps many think that they are mad?)
HTH
Paul
--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
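[For concreteness: the hardware overflow flag Paul mentions is already reachable today through compiler intrinsics. A minimal sketch using the GCC/Clang builtins - independent of either proposal:]

#include <cstdint>
#include <cstdio>

// __builtin_add_overflow (GCC/Clang) typically compiles to a single add
// followed by a branch on the hardware overflow flag.
bool checked_add(std::int32_t a, std::int32_t b, std::int32_t& result) {
    return !__builtin_add_overflow(a, b, &result); // true means no overflow
}

int main() {
    std::int32_t r;
    if (checked_add(INT32_MAX, 1, r))
        std::printf("sum = %d\n", r);
    else
        std::printf("overflow detected\n");
}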
On 12/10/15 9:30 AM, Paul A. Bristow wrote:
Detecting and handling overflow (and underflow) is certainly a big missing item with C/C++.
But I'm not sure that your proposal is radical enough.
This is the first time I remember being accused of this.
I'm sure that your solution will work (though I haven't been able to study it in detail yet),
I much appreciate your vote of confidence.
but if you allow definition of a safe minimum and maximum then I fear that you are paying the big cost in speed
nope, you've got it backwards. Consider the following example. You have a variable stored in an int8_t. You can absolutely know that if you square this variable, it can never produce an arithmetically incorrect result - even in C++. So there is zero overhead in proving that your program can never fail. One of the main features of the library's implementation is that it carries range information around with the type. When a type is used in an expression, compile time range arithmetic is used to determine whether or not it's necessary to do any runtime checking. So such runtime checking is performed only when it is actually necessary.
that comes from not using the built-in carry bit provided by all the processors that we care about.
In cases where runtime checking is necessary, the library implements this in a portable way. But there's no reason one couldn't conditionally specialize the functions to implement this functionality in a manner which takes advantage of any special hardware facilities.
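[A toy model of that compile-time range arithmetic (C++17, illustrative only - this is not the library's actual implementation): each value's type carries its possible range, and the compiler decides per instantiation whether any runtime check is needed at all.]

#include <cstdint>
#include <limits>
#include <stdexcept>

// Toy model: the type carries a closed interval [Min, Max] of possible values.
template <std::int64_t Min, std::int64_t Max>
struct ranged { std::int32_t value; };

template <std::int64_t MinA, std::int64_t MaxA,
          std::int64_t MinB, std::int64_t MaxB>
ranged<MinA + MinB, MaxA + MaxB> add(ranged<MinA, MaxA> a, ranged<MinB, MaxB> b) {
    // Compile-time range arithmetic: a check is emitted only for
    // instantiations whose result range can actually leave int32_t.
    if constexpr (MaxA + MaxB > std::numeric_limits<std::int32_t>::max() ||
                  MinA + MinB < std::numeric_limits<std::int32_t>::min()) {
        const std::int64_t r = std::int64_t(a.value) + b.value;
        if (r > std::numeric_limits<std::int32_t>::max() ||
            r < std::numeric_limits<std::int32_t>::min())
            throw std::range_error("addition would overflow");
    }
    return { static_cast<std::int32_t>(a.value + b.value) };
}

int main() {
    ranged<0, 100> x{42}, y{7};
    auto z = add(x, y); // no runtime check is emitted: [0, 200] fits in int32_t
    return z.value;
}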
So I fear it is premature until we have solved the problem of detecting overflow and underflow efficiently.
Just the opposite. Now is the time to use C++ to make the problem smaller and create a "drop in" interface so that any future "solutions" to the problem can be available without having to recode our programs. That is, we want to augment/enhance C++ through libraries such as this to decouple our programs from specific machine features while maintaining the ability to gain maximal runtime efficiency. So the library has several aspects:
a) creating types which carry their ranges around with them through expressions.
b) using operator overloading to make usage of the library as easy as replacing your integer types with corresponding safe integer types.
c) implementing runtime code to handle error-generating expressions in an efficient manner.
For c) I've provided a portable solution which is pretty efficient. But I'm guessing you're correct that one could do better were he to give up on portability.
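[A sketch of what b) looks like in user code. The safe<> spelling is the one used in this thread; the header name is hypothetical since the library is still pre-review:]

// Code is written against an alias, so turning checking on or off
// is a one-line change:
#ifdef USE_SAFE_INTEGERS
#include <boost/safe_integer.hpp>   // hypothetical header name
using int_t = boost::numeric::safe<int>;
#else
using int_t = int;
#endif

int_t scale(int_t x, int_t factor) {
    return x * factor; // checked only when USE_SAFE_INTEGERS is defined
}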
Lawrence Crowl, Overflow-Detecting and Double-Wide Arithmetic Operations http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0103r0.html
I looked at these. The above discussion should make it clear I'm addressing something else here. The problem for me is not creating efficient runtime checking. The problem is to create an interface which can incorporate such code while maintaining the ability to transparently code numeric algorithms. It's about decoupling the arithmetic from the hardware in a way that preserves maximum efficiency.
This seems something that will allow your solution to be efficient, at least for built-in integral types?
So if you're interested in contributing:
a) consider re-implementing your own version of "checked" operations which exploit features of this or that hardware.
b) create a couple of performance tests so we can measure what the actual performance hit is.
The above should make it clear that this will require a non-trivial amount of effort.
(And not everyone wants to throw exceptions - even if perhaps many think that they are mad?)
Note that the library specifies error behavior via a policy (see www.blincubator.com). Take care to make a distinction between the proposal to the C++ standards committee http://www.rrsd.com/software_development/safe_numerics/proposal.pdf (which is less ambitious, aimed at wimpier programmers) and the proposal for Boost www.blincubator.com (which is for gluttons for punishment such as ourselves). Robert Ramey
Hi Robert, Robert Ramey wrote:
creating types which carry their ranges around with them through expressions.
I'm curious to know how well this works in practice, i.e. how much
of the time do you need run-time checking in real programs. Here
are a few examples that have occurred to me:
1. Arguments to functions
void f(safe<int> i)
{
int8_t j = i; // Needs a run-time check unless we know i<128
}
f(3); // Run-time check happens, even though value is known.
Even though the body of f is visible at the point of the call,
I don't think an approach based on types can avoid the run-time
check unless the function args are templates:
template <typename RANGE_TYPE>
void f(RANGE_TYPE i)
{
int8_t j = i;
}
f(3);
Here you can presumably arrange things so that the type of i inside
f includes the range information and the run-time check can be
avoided. But this does rely on the body of f being visible at the
point of the call. The verbosity of the function declaration should
improve a bit with concepts, but it's not really a drop-in replacement.
Related to this is function return types:
safe<int> incr(safe<int> i)
{
return i+1;
}
int8_t j = incr(42); // Run-time check or not?
Maybe auto return types help a bit but what about
return (i%2==0) ? i/2 : 3*i+1;
2. Merging constraints
Example:
void g(int j);
void f(safe<int> i)
{
for (int a = 0; a < 1000000; ++a) {
g(i+a);
}
}
Does that generate a million run-time checks? If my expectation
is that i will be "small", most likely I would be happy with just
one check at the start of f:
assert(i < maxint - 1000000);
3. Implied constraints:
vector<S> v;
f(v); // fills v
safe<...> bytes = v.size() * sizeof(S);
I know that the multiplication can't overflow, because if v.size() was large enough that it would overflow then I would have run out of memory already.
On 12/12/15 9:11 AM, Phil Endecott wrote:
Hi Robert,
Robert Ramey wrote:
creating types which carry their ranges around with them through expressions.
I'm curious to know how well this works in practice, i.e. how much of the time do you need run-time checking in real programs.
That makes two of us.
Here are a few examples that have occurred to me:
1. Arguments to functions
void f(safe<int> i)
{
    int8_t j = i; // Needs a run-time check unless we know i<128
}
correct.
f(3); // Run-time check happens, even though value is known.
The constructor of safe<int> doesn't do a runtime check when it's being initialized by an int. The assignment to the int8_t would require a runtime check.
Even though the body of f is visible at the point of the call,
In general this is not true. f could well be compiled separately.
I don't think an approach based on types can avoid the run-time check unless the function args are templates:
template <typename RANGE_TYPE>
void f(RANGE_TYPE i)
{
    int8_t j = i;
}
f(3);
here is a simpler way
void f(safe<int> i){
int j = i; // no run-time check needed - int can hold any value of safe<int>
}
also
void f(safe<int> i){
auto j = i; // no run-time check needed - j is another safe<int>
}
also
void f(safe
Here you can presumably arrange things so that the type of i inside f includes the range information and the run-time check can be avoided. But this does rely on the body of f being visible at the point of the call.
nope - it relies upon picking types which are large enough and carry their range information with them. The "automatic" promotion policy can do this automatically.
The verbosity of the function declaration should improve a bit with concepts, but it's not really a drop-in replacement.
This is true - used as a drop in replacement, one will get more runtime checking.
Related to this is function return types:
safe<int> incr(safe<int> i)
{
    return i+1;
}
int8_t j = incr(42); // Run-time check or not?
No runtime check when the safe<int> is constructed on the stack. Yes, a runtime check calculating the result of i + 1. No runtime check when returning the result, since C++ expression rules mean that the result of i + 1 will be an int, and no check is required to construct a safe<int> from an int. In fact, probably zero overhead due to NRVO. Yes, a runtime check on the assignment int8_t j = incr(42). Unless incr is made constexpr, at which time everything moves to compile time: zero runtime checking - actually zero execution for the whole thing!
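[The constexpr point in miniature - plain int for brevity; a constexpr-friendly safe<int> gives the same effect:]

constexpr int incr(int i) { return i + 1; }
static_assert(incr(42) == 43, "the entire call is evaluated at compile time");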
Maybe auto return types help a bit but what about
return (i%2==0) ? i/2 : 3*i+1;
what about it?
2. Merging constraints
Example:
void g(int j);
void f(safe<int> i)
{
    for (int a = 0; a < 1000000; ++a) {
        g(i+a);
    }
}
Does that generate a million run-time checks?
yep (note: a itself will overflow on machines where int is 16 bits)
If my expectation is that i will be "small", most likely I would be happy with just one check at the start of f:
assert(i < maxint - 1000000);
of course. But maybe you'd consider the following equivalent:
void f(safe_integer_range<0, maxint> i){
    safe_integer_range<0, 1000000> a;
    for (a = 0; a < 1000000; ++a) {
        g(i+a);
    }
}
If maxint is less than 2^32 - 1000000 then no runtime checking will occur in the function. Note how this differs from the usage of assert. With assert we check during debug builds. But when we ship we throw away the seat belts. With safe types in this case, we get zero runtime overhead and we're guaranteed not to crash at runtime. At this point we're beyond the "drop-in" replacement stage. But we're also getting safety and performance guarantees we've never been able to get before.
3. Implied constraints:
vector<S> v;
f(v); // fills v
safe<...> bytes = v.size() * sizeof(S);
I know that the multiplication can't overflow, because if v.size() was large enough that it would overflow then I would have run out of memory already.
Hmmm - I don't actually know that for a fact. I've always found the
usage of size type confusing. I don't really know that it's not possible
to allocate a vector whose total space occupies more than 2^32 bytes (or
is it 2^64).
But in any case, since construction safe
So as I say, I'm curious to hear whether you've applied your safe<int> to any real applications and measured how many of the potential run- time checks are avoidable.
Only my tests and examples. I'm hoping someone will create some performance tests.
My fear is that it could be fewer than you hope, and that the checks remain inside loops rather than begin hoisted outside them.
We'll just have to see.
Another reservation about this is the question of what should happen when an overflow is detected. Exceptions are not great, because you have to worry about what you'll do when you get one.
The library has the concept of an exception policy. Throwing an exception is the default but you can select a different one or make your own. It's not hard. But you really have to spend time thinking about what you want to do. One thing you might want to look at is the usage of automatic type promotion (another policy - the default is native). When using this policy, types returned from binary expressions are almost always large enough to hold the expression results - so we're back to zero runtime checking. Another thing you might look into is safe...range, which does the opposite: it attaches range information to your variables at compile time so that runtime checking can be skipped when it's known not to be necessary. Then there is the fact that all this is constexpr friendly, which can have a huge impact on performance.
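[What the "automatic" promotion policy buys can be illustrated with plain built-ins - the principle only, not the library's mechanism:]

#include <cstdint>

// Give the result a home one size bigger than the operands and the addition
// can never overflow: [-128,127] + [-128,127] = [-256,254] fits easily.
std::int16_t add8(std::int8_t a, std::int8_t b) {
    return static_cast<std::int16_t>(a) + static_cast<std::int16_t>(b);
}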
One ambitious idea is to find a way to fall back to arbitrary precision arithmetic:
template <typename INT>
void f(INT i)
{
    INT i2 = i*i;
    INT j = (i2 + 4096) << 2;
    g(j%7);
}

try { f<int>(n); } catch (overflow&) { f<...>(n); }

How much syntactic sugar can we hide that in?
This is too ambitious for me. But you could do it with your own custom
promotion policy. The requirements for such a policy are in the
documentation and you can look how the "automatic" promotion policy does
it. It specifies the return types from binary operations dependent on
the types of the operands. It does it in such a way as to minimize the
need for runtime checking. If you make your own policy and named it
"phil" then using it would be:
template<typename T>
using safe_t = safe
I recently experimented with LLVM's thread safety checking (see http://clang.llvm.org/docs/ThreadSafetyAnalysis.html ); it lets you annotate variables to indicate that locks must be held when they are accessed:
mutex m; int i GUARDED_BY(m);
I've been impressed by that, and I wonder if attributes defining expected ranges and static analysis might be a more practical way to find potential overflows.
using safe_...range(min, max) seems equivalent to me. I'm now trying to make work the pièce de résistance - an error trapping policy which uses static_assert so that for things like embedded systems one could get a compile time error anywhere something could possibly overflow - no runtime checking and no exception handling required. I'm having difficulty with this as we speak.
My expectation is to see the library used in the following manner:
a) some safety critical program has a bug which is suspected to be addressable by safe types
b) in desperation, someone just changes all the int, ... types via an alias to safe equivalents.
c) the program is compiled and run - and a bug is found. Hooray.
d) the alias is redefined to the built-in equivalents
e) but wait, some idiot consults a lawyer. He says - well, now you know how to guarantee that the program has no bugs (remember he's a lawyer) so you better make sure you include this facility in case we get sued. We have to be able to prove we did everything humanly possible to avoid bugs.
f) someone complains that the program is going to be too slow. But under pressure of the authorities they produce a benchmark and it IS slower. But not quite by as much as was feared.
g) so now someone says - what would it take to make it faster? At this point some changes are made so that we're not doing as much shrinking of types. Perhaps the library's "automatic" policy is used, which expands intermediate and auto variables to be big enough to hold the results. Perhaps safe_...range is used to specify real ranges. This eliminates the performance bottleneck. But we're no longer using safe types as "drop-in" replacements. We've created new types selected to (almost) never trap.
It's just a speculative scenario. But it's not unbelievable. I hope it can be seen that I've spent considerable time thinking about this. Now I'm afraid to get on an airplane or get an MRI. It's that bad. Robert Ramey
I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
You have a few policies for what to do on overflow. In particular, I see you have throw_exception and trap_exception. trap_exception provides a compile-time error instead of a run-time error. Could you explain this a little bit more?
In my library, conversions for which the type is definitely in range (for instance, [1, 6] assigned to [0, 10]) are implicit and no check is performed. Conversions for which the type is possibly in range are explicit and the policy is applied (run-time check). Conversions for which the type is definitely out of range are a compile-time error, unless the overflow policy says that overflow is not an error.
Your overflow policies all seem to assume overflow is an error. How difficult would it be to add in the ability to provide 'clamping' or 'saturation' behavior? If you go below the min, set the value to the min, and if you go above the max, set to the max. Similar question for a modulo policy, which gives you the same behavior as unsigned integers.
I will have more thoughts later as I read the rest of the documentation.
On 12/16/15 11:16 PM, David Stone wrote:
I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I'm very aware of this. I attended your excellent presentation of your bounded integer library at C++Now 2014. I've also referenced your library in the documentation for safe_numeric. I was working on the safe numeric library at the time so that's why I was interested.
You have a few policies for what to do on overflow. In particular, I see you have throw_exception and trap_exception. trap_exception provides a compile-time error instead of a run-time error. Could you explain this a little bit more?
As we speak I'm putting the final touches on this. Basically it traps at compile time if an exception might occur. That is, if it cannot be known at compile time that the operation cannot fail, a compile time error is invoked.
In my library, conversions for which the type is definitely in range (for instance, [1, 6] assigned to [0, 10]) are implicit and no check is performed.
this is similar
Conversions for which the type is possibly in range are explicit and the policy is applied (run-time check).
hard to tell, but might be similar.
Conversions for which the type is definitely out of range are a compile-time error,
In my case this is always an error.
unless the overflow policy says that overflow is not an error.
But one could specify his own custom policy which ignored such an error. BUT I'm not sure how useful it would be, as it would apply to the type - not the value - so it would be hard to use as one might intend. I do have a policy to ignore errors. But I haven't tested it. I'm thinking I should eliminate it as it would defeat the whole purpose of the library.
Your overflow policies all seem to assume overflow is an error.
It's not so much the policies - it's more my vision for the library usage. The driving purpose is to be able to know that one's code implements correct arithmetic. So my basic premise is that an overflow is an error.
How difficult would it be to add in the ability to provide 'clamping' or 'saturation' behavior? If you go below the min, set the value to the min, and if you go above the max, set to the max.
I never considered this, and from what I know about the design of the library, I don't think it would be practical to implement on top of what I have now.
Similar question for a modulo policy, which gives you the same behavior as unsigned integers.
I've got a whole different view of this as well. Consider handling some other type - be it a well known arithmetic type such as modulo integers or some special type like money. My approach would be:
a) make your class - say a clock (modulo 60) or money class
b) declare safe<money> and you'll be done!
Right - not quite. But not that far off. I have a couple of "back ends" which handle compile time integer<T> for any T and checked arithmetic which is templated on T. Currently it works only for integer types. So one would have to specialize on checked<money>, which would be defined in terms of checked<T> since money would be defined in terms of some integer. As far as exceptions and promotions - selection of these would be independent of any particular type.
I will have more thoughts later as I read the rest of the documentation.
I'm currently putting the finishing touches on an upgrade of the code and improvement and enhancement of the documentation. The main enhancement will be a treatment of how to guarantee no errors with no runtime overhead. I expect to have a new version by Monday. Robert Ramey
On 17 December 2015 at 19:12, Robert Ramey
On 12/16/15 11:16 PM, David Stone wrote:
I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I'm very aware of this. I attended your excellent presentation of you bounded integer library at C++Now 2014. I've also referenced your library in the documentation for safe_numeric. I was working on the safe numeric library at the time so that's why I was interested.
Great initiatives! I've been thinking about both approaches, especially with a view to translating a safe language for kids. I can see two good use cases: using both a particular integer type [Robert?], and using a minimal type from a set of possible [David?] (e.g. set of: fast; aligned; all/size minimal). Reminds me a little of Pascal's integer ranges with modern machinery.
Indefinite loops and subtracting unsigned values are particular breakages with performance concerns that I can't see being addressed without run-time overhead, but perhaps those casts, at the coupling points, can be debug only on request? My preference for lib/policy would certainly be for zero run-time overhead unless cast, but I can imagine many would be happy with continual run-time checks. Having no exceptions would be a requirement for most of my use cases. One alternate way I've thought of dealing with that is by having a sentinel or NaN for integers, which might be a thought worth considering that I don't think I've seen elsewhere, but it might not help much if it becomes an awkward run-time precondition instead of a type. David's saturation idea might be a +/- INF sentinel too and perhaps helpful for physical units.
It would be nice to see a decimal precision integer type (which I used a lot, e.g. cents or 0.0001 precision) or fixed point extensions or accommodations, as these also repurpose integers, such as Robert's safe<money> suggestion. Those types would also be greatly enhanced by having the same wonderful range benefits. Combined with units, you might just save a few red faces, satellites, or lives with non-stupid financial and physical ranges :-)
$0.02, --Matt.
On 12/17/15 1:43 AM, Matt Hurd wrote:
On 17 December 2015 at 19:12, Robert Ramey
wrote: On 12/16/15 11:16 PM, David Stone wrote:
I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I'm very aware of this. I attended your excellent presentation of you bounded integer library at C++Now 2014. I've also referenced your library in the documentation for safe_numeric. I was working on the safe numeric library at the time so that's why I was interested.
Great initiatives!
Here's an interesting fact. I was on the program committee for CppCon 2015. David and I each submitted proposals on our respective libraries. Neither one excited any interest at all among the reviewers. None. I'm quite astounded that there seems to be no concern at all among working programmers nor academics about addressing this problem in a practical manner. I've learned the hard way that the documentation for such a library has to spend significant effort in making the case for its use. This is in contrast to the serialization library, which had no such problem. So I appreciate efforts among the community to promote these ideas - it's going to be a long slog. Robert Ramey
Matt: Matheus Izvekov has created a nice fixed-point library that lets you specify exact precision. It also interoperates with my bounded::integer library as one possible back-end.
==Tangent==
For a money type, we often think of money as being any number of dollar digits followed by exactly two cent digits. I work in finance, however, and this is not always true. Certain instruments are traded in 1/8 of a cent, meaning that they would actually require five decimal digits past the point to represent. Currency conversions (how many US dollars can I buy with one Euro?) are just as bad. They were typically traded in pips, which is 4 decimal places, but now they are traded in fractional pips, which is 5. There is no reason to believe that this will remain at 5 forever.
The typical representation in the wire protocol for all of these prices is to effectively send two integers and treat it as a rational (the wire representation for those two integers varies, but that is the ultimate idea).
I don't know how much a "standard" money class should deal with these 'side issues', as most people don't care about pips and pipettes. However, many of the people who deal with money the most do, and would be unable to use fixed_point as their underlying money type.
Matheus Izvekov has created a nice fixed-point library that lets you specify exact precision. It also interoperates with my bounded::integer library as one possible back-end.
Thanks David. I'll have a look as I suspect it may be a good replacement for my home-brew especially as it has traits.
==Tangent==
For a money type, we often think of money as being any number of dollar digits followed by exactly two cent digits. I work in finance, however, and this is not always true. Certain instruments are traded in 1/8 of a cent, meaning that they would actually require five decimal digits past the point to represent. Currency conversions (how many US dollars can I buy with one Euro?) are just as bad. They were typically traded in pips, which is 4 decimal places, but now they are traded in fractional pips, which is 5. There is no reason to believe that this will remain at 5 forever.
The typical representation in the wire protocol for all of these prices is to effectively send two integers and treat it as a rational (the wire representation for those two integers varies, but that is the ultimate idea).
I don't know how much a "standard" money class should deal with these 'side issues', as most people don't care about pips and pipettes. However, many of the people who deal with money the most do, and would be unable to use fixed_point as their underlying money type.
Finance is a funny beast. Thanks again and kind regards, --Matt.
2015-12-17 9:12 GMT+01:00 Robert Ramey
thinking I should eliminate it as it would defeat the whole purpose of the library.
I can see value in such a policy. In the documentation, in section "Eliminate Runtime Cost" (https://htmlpreview.github.io/?https://raw.githubusercontent.com/robertramey...) you show a great application of the library. If I am interested in improving performance still, I would change the policy from "throw upon overflow" to "ignore overflow", in the hope that the latter skips the check altogether.
Then, using type
safe_signed_range
On 17/12/2015 21:12, Robert Ramey wrote:
On 12/16/15 11:16 PM, David Stone wrote:
How difficult would it be to add in the ability to provide 'clamping' or 'saturation' behavior? If you go below the min, set the value to the min, and if you go above the max, set to the max.
I never considered this, and from what I know about the design of the library, I don't think it would be practical to implement on top of what I have now.
FWIW, in most cases where I have range concerns (mostly squeezing ints into smaller data types for protocol reasons), clamping is the behaviour that I want.
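[For what it's worth, the clamping behaviour Gavin describes is easy to state standalone. A C++17 sketch, independent of either library; it assumes To's range is representable in From, e.g. squeezing a wider signed integer into a narrower one:]

#include <algorithm>
#include <cstdint>
#include <limits>

template <typename To, typename From>
constexpr To saturate_cast(From v) {
    // clamp v into To's range, then the narrowing cast is always safe
    constexpr From lo = static_cast<From>(std::numeric_limits<To>::min());
    constexpr From hi = static_cast<From>(std::numeric_limits<To>::max());
    return static_cast<To>(std::clamp(v, lo, hi));
}

static_assert(saturate_cast<std::int8_t>(300)  == 127);
static_assert(saturate_cast<std::int8_t>(-300) == -128);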
On Dec 17, 2015, at 12:16 AM, David Stone
<david@doublewise.net> wrote: I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I am curious whether either of you (Robert and David) have handled the need to evaluate the condition "is X (an object of type with some _potentially_ wide range) within bounds of Y (an object of type with some _potentially_ narrow range)?" I have use cases in which I would like to use bounded types, but there are situations in which I need to capture the result of something like in_range<Y>(x). I am not seeing this in your libraries but perhaps I am missing something. It does seem quite feasible to include and would seem to incur no runtime cost under the same conditions that the construction Y y = {x} would not. If I am missing this in your libraries, please let me know; otherwise, I throw this out as a suggested addition. Cheers, Brook
On Dec 17, 2015, at 12:16 AM, David Stone
<david@doublewise.net> wrote: I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
On 1/12/16 8:46 AM, Brook Milligan wrote:
I am curious whether either of you (Robert and David) have handled the need to evaluate the condition "is X (an object of type with some _potentially_ wide range) within bounds of Y (an object of type with some _potentially_ narrow range)?" I have use cases in which I would like to use bounded types, but there are situations in which I need to capture the result of something like in_range<Y>(x). I am not seeing this in your libraries but perhaps I am missing something. It does seem quite feasible to include and would seem to incur no runtime cost under the same conditions that the construction Y y = {x} would not.
If I am missing this in your libraries, please let me know; otherwise, I throw this out as a suggested addition.
I'm not 100% sure I understand what you mean, but that's not going to stop me from commenting. The following applies to the safe numerics library. All safe types include a closed interval [min, max] as part of the type. At compile time, arithmetic operations take this information into account. So for example, for safe types x and y, the assignment x = y is checked at compile time by comparing the range of x to the range of y:
- if the range of x includes that of y - there is no runtime checking, as it is unnecessary.
- if the ranges of x and y do not intersect - there's a static assert.
- if the range of x includes the range of y, the operation is performed with no runtime checking.
The above occurs with all binary operations and compositions thereof. So it seems that what you want is already incorporated. The public API doesn't expose the underlying range information. It's not private though. You should look into the boost - math - interval arithmetic library. This does the job at runtime and provides more features and more access to the information about the type. It also addresses issues related to floating point types. Robert Ramey
On 1/12/16 11:05 AM, Robert Ramey wrote: Oops
x = y
is checked at compile time that the range of x is compared to the range of y.
if the range of x includes that of y - there is no runtime checking, as it is unnecessary.
if the ranges of x and y do not intersect - there's a static assert.
otherwise, there is a check at runtime which guarantees that y is in the specified legal range for x Robert Ramey
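[Those three cases are easy to restate as compile-time logic. A sketch of the decision only, not the library's code:]

// Given x with compile-time range [MinX, MaxX] and y with [MinY, MaxY]:
template <long long MinX, long long MaxX, long long MinY, long long MaxY>
constexpr bool assignment_needs_runtime_check() {
    // disjoint ranges: the assignment can never succeed - reject the program
    static_assert(!(MaxY < MinX || MinY > MaxX), "ranges do not intersect");
    // y's range contained in x's: provably safe, no check needed
    return !(MinY >= MinX && MaxY <= MaxX);
}

static_assert(!assignment_needs_runtime_check<0, 255, 0, 100>(),
              "contained: no runtime check");
static_assert(assignment_needs_runtime_check<0, 255, -5, 100>(),
              "partial overlap: runtime check required");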
On Jan 12, 2016, at 12:05 PM, Robert Ramey
On 1/12/16 8:46 AM, Brook Milligan wrote:
On Dec 17, 2015, at 12:16 AM, David Stone
<david@doublewise.net> wrote: I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I am curious whether either of you (Robert and David) have handled the need to evaluate the condition "is X (an object of type with some _potentially_ wide range) within bounds of Y (an object of type with some _potentially_ narrow range)?" I have use cases in which I would like to use bounded types, but there are situations in which I need to capture the result of something like in_range<Y>(x). I am not seeing this in your libraries but perhaps I am missing something. It does seem quite feasible to include and would seem to incur no runtime cost under the same conditions that the construction Y y = {x} would not.
If I am missing this in your libraries, please let me know; otherwise, I throw this out as a suggested addition.
I'm not 100% sure I understand what you mean, but that's not going to stop me from commenting. The following applies to the safe numerics library.
Perhaps I did not ask my question as clearly as I should have. Sorry.
My understanding, which seems to be reinforced by your response, is that all the normal arithmetic and comparison operations involving safe types either complete with mathematically correct results or fail in some way that depends on the policy. Further, this may or may not involve runtime checks depending on the nature of the specific arguments with respect to ranges, etc.
What I am not seeing in the design is a means of inquiring in advance whether or not an operation will succeed without actually performing the operation. This might be relevant for other cases, but I am most interested in making use of the type system, which by design already includes the relevant range information, to facilitate inquiry regarding assignment or construction. Consider the following code:
int x = { /* something possibly large */ };
safe_int
On 1/12/16 11:31 AM, Brook Milligan wrote:
On Jan 12, 2016, at 12:05 PM, Robert Ramey
wrote:
int x = { /* something possibly large */ };
safe_int y = x; // clearly this requires a run-time check and may fail
Instead, I would like to be able to, for example, branch; something like this:
int x = { /* something possibly large */ };
if (is_convertible<...>(x)) {
    do_something_with_small_values();
} else {
    do_something_with_large_values();
}
It's going to be a little trickier than that. One basic problem is that there are more than two cases: yes, no, and have-to-check-at-runtime.
Turns out that a similar facility is used by the library and is available to users. The downside is there is no example/tutorial, so one would actually have to read the documentation. The general procedure would be:
// create an interval from the safe type
using namespace boost::numeric;
using interval_t = safe_integer<std::int8_t>;
interval<std::int8_t> i = {
std::numeric_limits
In some cases, e.g., if the positions of the int and safe_int<> were reversed above, no runtime checking is required but the appropriate branch is executed. In others, such as actually illustrated, runtime checking is required just as it would be in an assignment.
In either case, however, the result leverages the type system you have developed to inquire about the relative ranges and values in (I feel) an expressive and compact way.
Is this possible currently? Can it be incorporated given the design of the library? Don’t you already have the internals in place to do something equivalent that could be exposed in fashion like this?
I hope that is clearer. Thanks again.
Cheers, Brook
On Jan 12, 2016, at 1:50 PM, Robert Ramey
It's going to be a little trickier than that. One basic problem is that there are more than two cases: yes, no, and have-to-check-at-runtime.
Yes, but I expected this to boil down to one of two cases depending on policy:
- yes, no, or fail to compile, or
- yes, no (will compile but may or may not invoke a runtime check)
I thought those were basically the options for any of the operations and that the difference was based upon the policy choice. Do I misunderstand?
The above returns a "checked_result" - which is described in another part of the documentation. Basically it's similar to "optional", so it either returns a valid new interval or an error condition which you can test for.
It seems that the best correspondence might be something like the following:
typedef safe_int
On 1/12/16 1:32 PM, Brook Milligan wrote:
On Jan 12, 2016, at 1:50 PM, Robert Ramey
wrote:
It seems that the best correspondence might be something like the following:
typedef safe_int<...> narrow_type;
int x = { /* potentially something large */ };
if (checked::cast<narrow_type>(x).no_exception()) {
    do_something_with_small_values();
} else {
    do_something_with_large_values();
}
Is that the correct semantics?
Looks OK.
Should I be worried about the following comment from your docs: "Note that this type is an internal feature of the library and shouldn't be exposed to library users because it has some unsafe behavior."? That seems worrisome and was a reason this did not register earlier.
I don't remember what I was thinking when I wrote that. But it might be that I didn't want users to use the function without checking the return value.
This solution suggests that wrapping this particular construct in something that is unambiguously safe would be a good idea:
template <typename R, typename T>
bool is_convertible(T const& t) { return checked_cast<R>(t).no_exception(); }
Does that make sense?
I'd have to think about it, but it looks OK. BTW there is a function in mpl, "is_convertible", which implements this behavior at compile time - not runtime as here. So you might want to change the name.
Cheers, Brook
On Jan 12, 2016, at 7:30 PM, Robert Ramey
I don't remember what I was thinking when I wrote that. But it might be that I didn't want users to use the function without checking the return value.
Perhaps that is a more useful statement for the documentation. It is certainly more informative.
This solution suggests that wrapping this particular construct in something that is unambiguously safe would be a good idea:
template <typename R, typename T>
bool is_convertible(T const& t) { return checked_cast<R>(t).no_exception(); }
Does that make sense?
I'd have to think about it. but it looks OK.
BTW there is a function in mpl, "is_convertible", which implements this behavior at compile time - not runtime as here. So you might want to change the name.
Oops. I forgot the constexpr part. That makes it work at compile time just fine. It seems to me that the appropriate place for a function like this is in your safe numeric library, not elsewhere. It is closely allied to the semantics of what a safe numeric type means. Additionally, if nothing else, it serves as an example of the proper use of the checked_result type. Cheers, Brook
On 1/12/16 6:57 PM, Brook Milligan wrote:
On Jan 12, 2016, at 7:30 PM, Robert Ramey
wrote: I don't remember what I was thinking when I wrote that. But it might be that I didn't want users to use the function without checking the return value.
Perhaps that is a more useful statement for the documentation. It is certainly more informative.
This solution suggests that wrapping this particular construct in something that is unambiguously safe would be a good idea:
template <typename R, typename T>
bool is_convertible(T const& t) { return checked_cast<R>(t).no_exception(); }
Within the design intention of the library one would simply write:
safe_... x;
int y;
try {
    y = x; // or x = y;
} catch(const std::exception & e) {
    // cast failed ...
}
On Jan 12, 2016, at 10:36 PM, Robert Ramey
wrote:
Within the design intention of the library one would simply write:
safe_... x;
int y;
try {
    y = x; // or x = y;
} catch(const std::exception & e) {
    // cast failed …
}
Fair enough. However, my understanding of best practice is that using exceptions for non-exceptional behavior, e.g., branching on a condition, is an indication of incomplete design and runs counter to the goal of making code clear and expressive, something this library strives to do and otherwise achieves very well in my opinion. Is there some reason that this use case - inquiring about the potential safety of a value - should not be an inherent element of the design? Is it not a valid use case to have a value in a wide type for which one needs to know whether it can be converted to a particular safe type? Is this not required for some aspects of interoperability with "non-safe" types? I am left wondering why something as basic as this is so controversial, especially given that there seem to be no runtime costs in exactly the same set of cases for which the above code has no runtime costs. Why sacrifice code clarity by not supporting this? Cheers, Brook
On 1/13/16 6:58 AM, Brook Milligan wrote:
On Jan 12, 2016, at 10:36 PM, Robert Ramey
wrote:
Within the design intention of the library one would simply write:
safe_... x;
int y;
try {
    y = x; // or x = y;
} catch(const std::exception & e) {
    // cast failed …
}
Is there some reason that this use case--inquiring about the potential safety of a value—should not be an inherent element of the design? Is it not a valid use case to have a value in a wide type for which one needs to know whether it can be converted to a particular safe type? Is this not required for some aspects of interoperability with “non-safe” types?
I am left wondering why something as basic as this is so controversial, especially given that there seem to be no runtime costs in exactly the same set of cases for which the above code has no runtime costs. Why sacrifice code clarity by not supporting this?
As you've noticed, the functionality you're referring to has to be part of such a library implementation. In the course of making the library, I decided that this functionality was interesting in its own right and important for future evolution of the library (safe_money anyone?). So I documented these implementation features - checked_result, checked and interval. And each of these "sub-libraries" has its own set of tests and documentation pages. Maybe they should be three separate libraries. But this opens up a huge amount of new territory. I have to limit the scope in order to actually deliver something. So it's safe integer.
So to me, your question would be - why aren't these "libraries" exposed to the end user? The short answer is: it's a lot of work to make a library useful to an end user. One needs:
a) correctly factored code
b) correctly documented code
c) tutorial examples and use cases - which requires going back to a)
Which doesn't sound all that bad. But the problem is that by the time you get c) done, you can't help but discover that a) and b) need to be updated - which turns out to be a lot more time consuming than first meets the eye. If I were to continue on this, I'd make the example/use case for extending the library beyond integers to user-defined integer-like types such as "money". But that would feed back to ... - I'm not sure. But surely this would be a lot more work. And I've worked on this waaaaay past my original intention. Then there is the question of demand for such a thing. The demand and interest for a library such as safe integer is much, much, much less than I anticipated. So I'm not very excited about spending more time on it.
As an aside, there has been work similar to parts of the library:
a) Interval library.
1) Boost Interval Library. Speaking from memory, this has more emphasis on floating point issues such as rounding and such. I don't remember if it placed so much emphasis on operator overloading to make it easily plug-in compatible. I don't think it specifically addressed integer problems such as converting between integer types, overflows and the like. It didn't work at compile time.
2) Bruno Laland (?) had a compile time interval library. It was undocumented and didn't handle the corner cases of overflow etc. So this argued for making Interval a library.
b) checked operations. This is sort of a no-brainer. It almost has to be the way it is. There are lots of libraries like this. But I couldn't find one which was factored right, supported constexpr and had compile time handling to avoid unnecessary operations. For wider use, this would have to be modified to define the template types for other than integers. This could turn out to be stickier than meets the eye.
c) checked_result. This seems obvious in retrospect - but it wasn't until I spent significant time iterating the loop above. This is basically boost optional with more than one failure condition. In my recollection there was a huge discussion about such a component as part of the review for AFIO. I never understood what the big deal was. But whenever the term "monad" gets injected into the discussion, things tend to start circling the drain. It was certainly tricky. I wanted to use Boost variant (or some other variant) but these are not literal types so can't be used with constexpr. And there is the problem of implicit conversions which I had a lot of difficulty getting correct - assuming they are even now.
So there you have it.
Three libraries which are mostly implemented and documented - missing examples and use cases. If you want to make a name for yourself, you could split out any of these and propose it as a boost library - or just use it as is, or make the extra documentation and tests and I'll consider adding it to the safe integer library.
I don't think I'm sacrificing code clarity. I hope the above gives some insight. Robert Ramey
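[For readers who haven't seen it, the flavor of a constexpr-friendly checked_result - "optional with more than one failure condition" - can be sketched in a few lines. Illustrative only; the library's real type differs in detail:]

#include <stdexcept>

enum class safe_error { overflow, underflow, domain };

// A literal type: either a value or an error code, usable in constexpr code.
template <typename T>
class checked_result {
    bool valid_;
    union { T value_; safe_error error_; };
public:
    constexpr checked_result(T v) : valid_(true), value_(v) {}
    constexpr checked_result(safe_error e) : valid_(false), error_(e) {}
    constexpr bool no_exception() const { return valid_; }
    constexpr T value() const {
        return valid_ ? value_ : throw std::logic_error("no value");
    }
};

constexpr checked_result<int> divide(int a, int b) {
    return b == 0 ? checked_result<int>(safe_error::domain)
                  : checked_result<int>(a / b);
}
static_assert(divide(10, 2).value() == 5, "evaluated at compile time");
static_assert(!divide(1, 0).no_exception(), "error detected at compile time");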
On Jan 13, 2016, at 10:07 AM, Robert Ramey
wrote: As you've noticed, the functionality you're referring to has to be part of such a library implementation. In the course of making the library, I decided that this functionality was interesting in its own right and important for future evolution of the library (safe_money anyone?). So I documented these implementation features - checked_result, checked and interval. And each of these "sub-libraries" has its own set of tests and documentation pages. Maybe they should be three separate libraries.
[ snip ]
If I were to continue on this, I'd make the example/use case for extending the library beyond integers to user-defined integer-like types such as "money". But that would feed back to ... - I'm not sure. But surely this would be a lot more work. And I've worked on this waaaaay past my original intention. Then there is the question of demand for such a thing. The demand and interest for a library such as safe integer is much, much, much less than I anticipated. So I'm not very excited about spending more time on it.
Perhaps this (perceived lack of interest) is the fundamental issue. I have been following the discussion on your safe numerics with much interest, because for my applications (scientific computing) we need much _better_ support for this sort of thing, not less; the status quo is definitely not suitable. I continually run up against the challenge of wanting numeric types that I can just use with confidence that they perform reasonably from a mathematical point of view. Your library does exactly this for integers, with the tiny exception that it is awkward to query the validity of a value with respect to whether it matches the constraints of a narrower type. To my mind, the whole concept of a safe numeric library is essential, so I am sorry that you perceive little interest.
I realize your focus to date has been on native integers, so the following clearly represents an extension to your intent for the current implementation, but not to your motivation. I will mention it, because I feel it indicates that your library already has almost everything needed and that very little extra would be needed for generality. I say this being fully aware of the iteration loop that you have been involved with and have described earlier. In the course of this discussion I have looked through the code quite a bit and it seems that there is relatively little that limits your library to integers. Here is where I see the dependence on integer types:
- Reliance on std::numeric_limits for the bounds. A trivial change to numeric/convertible/bounds.hpp from the Numeric.Conversion library will make it constexpr-capable and allow a completely generic definition of bounds (see the sketch after this message). Doing this requires addition of one file for the time being (the constexpr equivalent of bounds.hpp, which needlessly isn't constexpr as far as I know; maybe that has been fixed by now?) and switching the references to std::numeric_limits<T>::min() and std::numeric_limits<T>::max() to bounds<T>::lowest() and bounds<T>::highest(). The bounds<T> class can be specialized for any type, so even user-defined types will work here with the right specializations, and the specializations for native types are trivial in terms of std::numeric_limits. Except for the appropriate namespace for specializations, this is an implementation detail for your library and so should affect nothing visible externally, including documentation; I'll send you the bounds code if you like.
- Reliance on non-type template parameters for things like safe_int. Those, however, are implemented in terms of the underlying basic_... types, which presumably are not obliged to use non-type template parameters; integral_constants for example would seem to work fine for every case you are actually using. Again, that is an implementation detail as far as I can tell. From there, a clear definition of the appropriate concept for types specifying the bounds can be defined so that this is a customization point for other types. This opens the way for a generic safe_numeric type, but has no impact on the utility of your current safe_int types.
- Ignoring issues like denormalization. Clearly this is irrelevant for integral types, but not for floating point. Likewise, a full set of policies for floating point would need to be richer. However, I feel that a significant collection of use cases are covered by just considering the range and ignoring denormalization. It would be unfortunate to block useful advances for want of complete generic policies for floating point.
Otherwise, I am not seeing (or at least remembering) anything that is really dependent on integral types. A great step forward would be to act on these observations and take your great foundation one more step so that it is much more generic. Then we will have safe numerics as they should be done, something none of the other libraries you have mentioned really accomplishes. Thanks very much for the lengthy discussion and for working on this foundational library. I strongly share your vision that C++ needs safe numeric types and hope that your library will grow to support that need. Cheers, Brook
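[A sketch of the constexpr bounds<T> customization point Brook describes above. The bounds/lowest/highest names are taken from his message; the money example is hypothetical:]

#include <limits>

template <typename T>
struct bounds {
    static constexpr T lowest()  { return std::numeric_limits<T>::lowest(); }
    static constexpr T highest() { return std::numeric_limits<T>::max(); }
};

// A user-defined type opts in by specializing the template:
struct money { long long thousandths; }; // hypothetical fixed-point amount
template <>
struct bounds<money> {
    static constexpr money lowest()  { return { -1000000000LL }; }
    static constexpr money highest() { return {  1000000000LL }; }
};

static_assert(bounds<int>::highest() == std::numeric_limits<int>::max(),
              "native types fall through to numeric_limits");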
On 1/13/16 10:07 AM, Brook Milligan wrote:
Perhaps this (perceived lack of interest) is the fundamental issue.
It is a big issue.
we need much _better_ support for this sort of thing, ... so I am sorry that you perceive little interest...
I realize your focus to date has been on native integers, so the following clearly represents an extension to your intent for the current implementation, but not to your motivation. I will mention it, because I feel it indicates that your library already has almost everything needed and that very little extra would be needed for generality. I say this being fully aware of the iteration loop that you have been involved with and have described earlier.
I think you're too optimistic. Damian Vicino has been working on safe float. The first thing we've discovered is that it has an entirely separate set of issues that come to the fore. Our view is that this is not going to be just a question of extending the safe integer library.
In the course of this discussion I have looked through the code quite a bit and it seems that there is relatively little that limits your library to integers. Here is where I see the dependence on integer types:
- Reliance on std::numeric_limits for the bounds. A trivial change to numeric/convertible/bounds.hpp from the Numeric.Conversion library will make it constexpr-capable and allow a completely generic definition of bounds.
I spent a lot of time looking at the Boost Convert documentation and failed to understand what it does and how to use it. I may or may not have looked at the code - I don't remember. I leveraged std::numeric_limits because:
a) it had what I needed.
b) it's delivered with every C++ compiler in a way which is adjusted to the target environment. That is, I don't have to figure out for every platform what the size of an int is.
c) it is extendable. That is, all safe types have their own numeric_limits class - even those which are created on the stack during the composition of binary operators into more complex arithmetic expressions.
d) it is extendable to other integer-like types. I would expect to see a numeric_limits class for money, and by extension for safe<money>.
e) it's there - using it diminishes the code I have to write and maintain.
f) it's public - so if someone makes another integer-like type that I don't anticipate, there's a chance the safe integer library might work with it - or at least the effort to make that new type a safe one is diminished.
g) So it presents a generic interface to metadata about numeric types. It's a good basis for such a library.
switching the references to std::numeric_limits<T>::min() and std::numeric_limits<T>::max() to bounds<T>::lowest() and bounds<T>::highest().
OK, maybe I misread your comment. I thought you were referring to Boost Convert. I don't remember anything about boost numeric bounds. I'm sure I looked at it, as I searched all of boost - and a bunch more stuff - in order to diminish my task. As I said, std::numeric_limits does everything that's needed, works, is maintained and is free.
- Ignoring issues like denormalization. Clearly this is irrelevant for integral types, but not for floating point. Likewise, a full set of policies for floating point would need to be richer. However, I feel that a significant collection of use cases are covered by just considering the range and ignoring denormalization. It would be unfortunate to block useful advances for want of complete generic policies for floating point.
Useful - but there's more to it than meets the eye. See above.
Otherwise, I am not seeing (or at least remembering) anything that is really dependent on integral types.
LOL - look harder
A great step forward would be to act on these observations and take your great foundation one more step so that it is much more generic.
Feel free to take your best shot !
Hi Brook,
On Tue, Jan 12, 2016 at 9:46 AM, Brook Milligan
On Dec 17, 2015, at 12:16 AM, David Stone
> wrote: I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I am curious whether either of you (Robert and David) have handled the need to evaluate the condition “is X (an object of type with some _potentially_ wide range) within bounds of Y (an object of type with some _potentially_ narrow range)?”
In my bounded::integer library, I have a few functions in the detail namespace:

value_fits_in_type<type>(value);
type_overlaps_range<type>(min, max);
type_fits_in_range<type>(min, max);
types_overlap
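A freestanding sketch of the same kind of test (my illustration, assuming both ranges fit in long long; not code from either library):

#include <limits>

template<typename Y, typename X>
constexpr bool in_range(X x) {
    // Compare x against Y's bounds in one common wide type. This
    // glosses over values that don't fit in long long.
    using wide = long long;
    return static_cast<wide>(x) >= static_cast<wide>(std::numeric_limits<Y>::min())
        && static_cast<wide>(x) <= static_cast<wide>(std::numeric_limits<Y>::max());
}

// e.g. in_range<signed char>(1000) == false, in_range<signed char>(100) == true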
On Dec 17, 2015, at 12:16 AM, David Stone
wrote: I have written a similar library (http://doublewise.net/c++/bounded/), and I have a few questions about your implementation.
I am curious whether either of you (Robert and David) have handled the need to evaluate the condition “is X (an object of type with some _potentially_ wide range) within bounds of Y (an object of type with some _potentially_ narrow range)?” I have use cases in which I would like to use bounded types, but there are situations in which I need to capture the result of something like in_range<Y>(x). I am not seeing this in your libraries but perhaps I am missing something. It does seem quite feasible to include and would seem to incur no runtime cost under the same conditions that the construction Y y = {x} would not. If I am missing this in your libraries, please let me know; otherwise, I throw this out as a suggested addition. Cheers, Brook
Hi Robert, Robert Ramey wrote:
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard.
You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
I've had a quick look at this PDF. Some random thoughts: - The "safe integer" solution that I've heard most about is Microsoft's, which I don't think is one of those that you cite in your references. - At the top of page 3 of your PDF there's an example where you square an int8_t and assign the result to an int8_t, and say this can't overflow. Either I'm missing something (which is quite possible!) or you meant to assign to a wider result type. - I suspect that in my code, the consequences of an exception that I hadn't considered could be just as bad as an overflow that I'd not considered! Regards, Phil.
On 12/10/15 10:44 AM, Phil Endecott wrote:
Hi Robert,
Robert Ramey wrote:
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard.
You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
I've had a quick look at this PDF. Some random thoughts:
- The "safe integer" solution that I've heard most about is Miscrosoft's, which I don't think is one of those that you cite in your references.
- At the top of page 3 of your PDF there's an example where you square an int8_t and assign the result to an int8_t, and say this can't overflow. Either I'm missing something (which is quite possible!) or you meant to assign to a wider result type.
damn - you're right. The multiplication can't overflow, but the subsequent assignment can. I'll change this.
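Spelled out in plain C++ (independent of the library), the trap is:

#include <cstdint>
#include <iostream>

int main() {
    std::int8_t x = 100;
    // Integral promotion: x * x is computed as int, so the product
    // 10000 cannot overflow the multiplication itself...
    int wide = x * x;                                    // 10000
    // ...but converting it back to int8_t loses information; the result
    // is implementation-defined (typically it wraps to 16).
    std::int8_t narrow = static_cast<std::int8_t>(x * x);
    std::cout << wide << ' ' << static_cast<int>(narrow) << '\n';
}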
- I suspect that in my code, the consequences of an exception that I hadn't considered could be just as bad as an overflow that I'd not considered!
LOL - Then this library is not for you! Actually, I'm guessing you have a lot of company here. I proposed a talk at CppCon on this and there was no interest among the reviewers. It's hard to tell, but it seemed that it was just not seen as a problem. Another sentiment I've seen expressed is that this is only appropriate for less skilled programmers who don't really know how to write code. In the documentation at www.blincubator.com I've tried to dispel the notion that this can be addressed in an ad hoc manner. To me this is a 30 year festering carbuncle on the face of C++/C. For the language to permit the writing of an arithmetical expression and to permit it to fail silently is a recipe for disaster which we are suffering from on a daily basis. The amazing thing to me is that all languages have this problem - even those which are interpreted!!! How have computer engineers been able to ignore/forget what the fundamental purpose is - to provide correct answers? We're using C++ to write code for self-driving cars - and no one cares about this. I can't express how disheartening this is to me. BUT now we have a realistic solution!!! I believe this is a practical, correct, elegant alternative which we can add on to C++ via a library such as this. Then C++ can stand alone not only as the way to create the most efficient programs but the most correct ones as well. There will be no serious competitor. And this is testament to the foresight, vision and genius of our community leaders. This library depends upon constexpr, operator overloading and other (recent) C++ features. I believe that C++/14 is going to usher in a whole new err for computation. Now if I could only get a utf8 codecvt facet which works. Robert Ramey
On Dec 10, 2015, at 2:04 PM, Robert Ramey
To me this is a 30 year festering carbuncle on the face of C++/C. For the language to permit the writing of an arithmetical expression and to permit it to fail silently is a recipe for disaster which we are suffering from on a daily basis. The amazing thing to me is that all languages have this problem - even those which are interpreted!!!
Python has bigints by default, though it has other issues that make it unsafe.
BUT now we have a realistic solution!!! I believe this is a practical, correct, elegant alternative which we can add on to C++ via a library such as this. Then C++ can stand alone not only as the way to create the most efficient programs but the most correct ones as well. There will be no serious competitor.
I too have been concerned about this danger of C++, as well as others. My solution is to develop a new programming language that combines C++ staples (such as regular types and convenient control of memory layout) with typical scripting language features (e.g. enforced memory safety), as well as its own quirks (aggressive compile-time computation, for one). For starters it runs interpreted, but it's intended to be translated into other source languages, including C++. There's more info at http://www.vcode.org/, if anyone's curious.
I believe that C++/14 is going to usher in a whole new err for computation.
Freudian typo? :-) Josh
On 12/10/15 1:30 PM, Josh Juran wrote:
There's more info at http://www.vcode.org/, if anyone's curious.
looks like there's still some work to be done here.
I believe that C++/14 is going to usher in a whole new err for computation.
Freudian typo? :-)
LOL - maybe - or maybe not Robert Ramey
On 11 December 2015 10:19 Alex Perry wrote:
On 10 December 2015 19:05 Robert Ramey [mailto:ramey@rrsd.com] wrote:
...
I believe that C++/14 is going to usher in a whole new err for computation.
...
How cynical but how true on many levels ... (or did you mean era?)
On 10 December 2015 21:31 Josh Juran [mailto:jjuran@gmail.com] wrote:
Freudian typo? :-)
Should have read the next message in the digest before trying to be clever and funny. It was much less so after seeing someone else had already picked this up... Alex
On 10/12/2015 07:49, Robert Ramey wrote:
Arithmetic operations in C++ are NOT guaranteed to yield a correct mathematical result. This feature is inherited from the early days of C. The behavior of int, unsigned int and others was designed to map closely to the underlying hardware. Computer hardware implements these types as a fixed number of bits. When the result of arithmetic operations exceeds this number of bits, the result will not be arithmetically correct.
I have crafted a library to address this issue once and for all. You can find out more about this by checking out the page for Safe Numerics at the boost library incubator. www.blincubator.com
I hereby request that this library be added to the boost review queue.
In case you haven't seen it, Multiprecision's cpp_int type has a very similar "checked" mode that adds similar safety.
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard.
You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
In:
"Performance will depend on the implementation and subject to the
constraints above. This design will permit the usage of template
meta-programming to eliminate runtime performance penalties in some
cases. In the following example, there is no runtime penalty required to
guarantee that incorrect results will never be generated."
The example that follows surely can overflow:

void f(safe<int> i){
    safe<int> j;
    j = i * i; // C++ promotion rules make overflow impossible!
    std::cout << j;
}
On 12/10/15 10:47 AM, John Maddock wrote:
In:
"Performance will depend on the implementation and subject to the constraints above. This design will permit the usage of template meta-programming to eliminate runtime performance penalties in some cases. In the following example, there is no runtime penalty required to guarantee that incorrect results will never be generated."
The example that follows surely can overflow:

void f(safe<int> i){
    safe<int> j;
    j = i * i; // C++ promotion rules make overflow impossible!
    std::cout << j;
}

OK so the temporary returned by i*i *may* not overflow (depending on the size of type int), but the assignment to j most certainly can
Correct - I'll fix this example. I would expect that to throw, and it will.
and if not it drives a coach and horses right through the proposal IMO.
agreed. All binary operators are overloaded - including assignment.
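For illustration, a checked assignment in that spirit might look roughly like this - invented names, not the library's actual code, and glossing over mixed signed/unsigned comparisons:

#include <limits>
#include <stdexcept>

template<typename T>
class safe_sketch {
    T value_;
public:
    template<typename U>
    safe_sketch& operator=(U u) {
        // Trap the out-of-range case before storing the value.
        if (u < std::numeric_limits<T>::min() || u > std::numeric_limits<T>::max())
            throw std::range_error("assignment out of range");
        value_ = static_cast<T>(u);
        return *this;
    }
};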
What happens with floating point types? Can values overflow to infinity or is this trapped? How about divide by zero or indeed zero/zero (NaN)? What about underflow?
currently only integer types are addressed. Damian Vicino has taken on the task of extending the library to floating point types. He is a postdoctoral student at INRIA and I believe he is well suited to the task. It turns out that implementing "safe" functionality for floating point numbers has a completely different set of issues and considerations. I'm sure this is not news to you. It's almost a separate library. He worked on this as a Google Summer of Code project. If you want to lend him a hand with this, I'm sure it would be appreciated - and very, very valuable. BTW - since I have a (fatal) tendency to think big, I named the library Safe Numerics rather than Safe Integer. But I think the safe integer is very useful in its own right.
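Back to floating point: to give a flavor of why it's almost a separate library, here is a sketch of mine (not Damian's code) trapping two of the cases John lists, for a single operation:

#include <cmath>
#include <stdexcept>

inline double checked_div(double a, double b) {
    double r = a / b;
    // In raw C++ these cases pass silently; a safe float has to
    // test for them after the fact.
    if (std::isinf(r)) throw std::overflow_error("overflow to infinity");
    if (std::isnan(r)) throw std::domain_error("NaN result (e.g. 0/0)");
    return r;
}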
And finally, I think there needs to be good constexpr support - in fact that could be the killer feature that sets the library apart.
The implementation contains a module which does compile time range arithmetic which is all constexpr. Before exploiting this, I was stuck - now I'm free. BTW - the library (www.blincubator.com version) requires gcc 6.0+ in order to compile, due to constexpr problems in previous gcc compilers. I don't know about msvc.
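To give the flavor of that module - an illustrative sketch with invented names, not the library's actual interface - the bounds of a product can be computed as constexpr values from the operand bounds, so some checks are discharged entirely at compile time:

struct interval { long long lo, hi; };

constexpr long long min4(long long a, long long b, long long c, long long d) {
    long long m = a;
    if (b < m) m = b;
    if (c < m) m = c;
    if (d < m) m = d;
    return m;
}

constexpr long long max4(long long a, long long b, long long c, long long d) {
    long long m = a;
    if (b > m) m = b;
    if (c > m) m = c;
    if (d > m) m = d;
    return m;
}

// The product range is bounded by the four corner products
// (assuming the corner products themselves fit in long long).
constexpr interval operator*(interval x, interval y) {
    return interval{
        min4(x.lo * y.lo, x.lo * y.hi, x.hi * y.lo, x.hi * y.hi),
        max4(x.lo * y.lo, x.lo * y.hi, x.hi * y.lo, x.hi * y.hi)};
}

constexpr interval r = interval{-3, 4} * interval{2, 5};
static_assert(r.lo == -15 && r.hi == 20, "bounds known at compile time");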
Finally, your document's title is "Java Printing", which you may wish to correct ;)
What does it take to get a break here? Robert Ramey
What happens with floating point types? Can values overflow to infinity or is this trapped? How about divide by zero or indeed zero/zero (NaN)? What about underflow?
Hi John, I started working on a library for a "safe-float" implementation during the last GSOC. It is still WIP, if you want to take a look at it. I coded two different approaches. The first one, which I think is more naive:
— Documentation: http://sdavtaker.github.io/safefloat/doc/html/index.html
— Code: https://github.com/sdavtaker/safefloat
And then a second one, which I believe is the better approach:
— Code: https://github.com/sdavtaker/safe_numerics/tree/safe_float/
— Doc included in the tree.
Any feedback on it will help a lot in deciding how to keep going. If you're interested in more details, I can expand further; I also have some documentation that is not available on the web yet. Best regards, Damian
Le 10/12/2015 08:49, Robert Ramey a écrit :
Arithmetic operations in C++ are NOT guaranteed to yield a correct mathematical result. This feature is inherited from the early days of C. The behavior of int, unsigned int and others was designed to map closely to the underlying hardware. Computer hardware implements these types as a fixed number of bits. When the result of arithmetic operations exceeds this number of bits, the result will not be arithmetically correct.
I have crafted a library to address this issue once and for all. You can find out more about this by checking out the page for Safe Numerics at the boost library incubator. www.blincubator.com
I hereby request that this library be added to the boost review queue.
I've also made a proposal for the C++ Standards committee to include a simplified version of this library as part of the C++ standard.
You can see the proposal at http://www.rrsd.com/software_development/safe_numerics/proposal.pdf
Hi Robert,
these links are broken
Library Implementation
https://htmlpreview.github.io/?https://raw.githubusercontent.com/robertramey...
Checked_Result<R>
https://htmlpreview.github.io/?https://raw.githubusercontent.com/robertramey...
Checked Integer Arithmetic
https://htmlpreview.github.io/?https://raw.githubusercontent.com/robertramey...
Interval Arithmetic
https://htmlpreview.github.io/?https://raw.githubusercontent.com/robertramey...
Performance Tests
https://htmlpreview.github.io/?https://raw.githubusercontent.com/robertramey...
minor comments:
* I believe the name safe<T> is too broad for the kind of types this class can wrap.
* Remove any adherence to boost in the standard proposal (include
these links are broken
<snip> I'm still working on these sections - I'll tweak things so that's clear.
minor comments:
* I believe the name safe<T> is too broad for the kind of types this class can wrap.
Hmmmm - The documentation of the library, the concepts etc. are themselves pretty general. Take the concept Integer. If someone comes up with a modular integer type or a money type, and these types fulfill the stated concepts in the documentation, I would expect the "safe" functionality to work. Most of the library is implemented not in terms of particular types but rather the "numeric traits" as specified in std::numeric_limits. So right now it's only implemented for integer-like types - but that's an implementation feature, not a design choice. We're thinking big here! To me, the implications of a safe<money> type are huge and I'm willing to bet at least one Wall Street institution might have gone bust for lack of this data type. Another consideration is that there are other libraries aimed at the same problem which are called "safe.." or SafeInt or ???.
* Remove any adherence to boost in the standard proposal (include
Hmmm - I'm not seeing this. The only reference to boost I find in the standard proposal is my affiliation - which I'm very proud of!
* I see that boost::numeric::safe and std::safe differ on the template parameters. I see why the standard version is not configurable and just throws.
However, I don't see why this doesn't apply to the proposal for Boost. What is not good for the standard can be good for Boost?
The short answer is: The standard proposal is for wimpy programmers who don't use Boost. The boost proposal is for programmers with huevos. The longer answer is: The current standard libraries don't use policies. They all throw standard exceptions, so I thought it best to follow this example. The policies - particularly "automatic" - add a large amount of complexity to the library. This makes it more complex to learn how to use, and much more effort for vendors to implement. I want to see a library in the standard which is a no-brainer to use so that everyone will use it to trap stupid bugs which occur all the time. I expect to see some programmers who want to tweak the operation using the policies to get the absolute best performance or a compile-time guarantee of no errors (like life-critical software). So I see two different "markets". And then there is the trying-to-please-everyone issue. If I choose one path above, I'll get a howl from the other camp. Rather than trying to please everyone by concocting a mishmash, I'll just define two different "levels", the standard one being a simple-to-use subset of the boost one. Robert Ramey
Le 11/12/2015 00:48, Robert Ramey a écrit :
these links are broken
<snip>
I'm still working on these sections - I'll tweak things so that's clear. There are other links that are broken: native, ExceptionPolicy.
minor comments:
* I believe the name safe<T> is too broad for the kind of types this class can wrap.
Hmmmm - The documentation of the library, the concepts etc. are themselves pretty general. Take the concept Integer. If someone comes up with a modular integer type or a money type, and these types fulfill the stated concepts in the documentation, I would expect the "safe" functionality to work. Most of the library is implemented not in terms of particular types but rather the "numeric traits" as specified in std::numeric_limits. So right now it's only implemented for integer-like types - but that's an implementation feature, not a design choice.
We're thinking big here! To me, the implications of a safe<money> type are huge and I'm willing to bet at least one Wall Street institution might have gone bust for lack of this data type.
Another consideration is that there are other libraries aimed at the same problem which are called "safe.." or SafeInt or ???. I would add numeric to the name. std::safe<T> doesn't say that this is a numeric type.
* Remove any adherence to boost in the standard proposal (include
Hmmm - I'm not seeing this. The only reference to boost I find in the standard proposal is my affiliation - which I'm very proud of!
It seems the only one is
6.1.6 Header
#include
* I see that boost::numeric::safe and std::safe differ on the template parameters. I see why the standard version is not configurable and just throws.
However, I don't see why this doesn't apply to the proposal for Boost. What is not good for the standard can be good for Boost?
The short answer is:
The standard proposal is for wimpy programmers who don't use Boost. The boost proposal is for programmers with huevos.
The longer answer is:
The current standard libraries don't use policies. They all throw standard exceptions so I thought it best to follow this example.
The policies - particularly "automatic" - add a large amount of complexity to the library. This makes it more complex to learn how to use, and much more effort for vendors to implement. I want to see a library in the standard which is a no-brainer to use so that everyone will use it to trap stupid bugs which occur all the time. I expect to see some programmers who want to tweak the operation using the policies to get the absolute best performance or a compile-time guarantee of no errors (like life-critical software). So I see two different "markets".
And then there is the trying-to-please-everyone issue. If I choose one path above, I'll get a howl from the other camp. Rather than trying to please everyone by concocting a mishmash, I'll just define two different "levels", the standard one being a simple-to-use subset of the boost one.
You have almost convinced me.
Hmm, but why is the simple approach that is valid for the standard not good enough for Boost?
I believe the right thing to do is to provide in Boost both safe<T> and basic_safe
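Presumably something along these lines - a sketch of mine reconstructing the shape of the suggestion, with invented policy names, mirroring the basic_string/string relationship:

struct native {};           // promotion policy: follow the C++ rules
struct throw_on_error {};   // exception policy: throw on a range error

// The configurable, policy-based type...
template<
    typename T,
    typename PromotionPolicy = native,
    typename ExceptionPolicy = throw_on_error>
class basic_safe { /* ... */ };

// ...and the simple fixed-behavior type as an alias of it.
template<typename T>
using safe = basic_safe<T>;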
On 12/10/15 3:07 PM, Vicente J. Botet Escriba wrote:
minor comments:
* I see that boost::numeric::safe and std::safe differ on the template parameters.
I'm not seeing this. The boost type signature includes two optional type parameters. So the template signature (excepting namespace) is identical between the std and boost versions.

I see why the standard version is not configurable and just throws. However, I don't see why this doesn't apply to the proposal for Boost.
What is not good for the standard can be good for Boost?
I think I answered this. The standard version is a subset of the Boost version. They are compatible - but the boost version has additional functionality which is more cumbersome to implement and requires more investment of effort on the part of the user to understand. The std one is a subset of the boost one. Robert Ramey
participants (14)
- Alex Perry
- Andrzej Krzemienski
- Brook Milligan
- Damian Vicino
- David Stone
- Edward Diener
- Gavin Lambert
- John Maddock
- Josh Juran
- Matt Hurd
- Paul A. Bristow
- Phil Endecott
- Robert Ramey
- Vicente J. Botet Escriba