Re: [boost] Checked Integral Class

From: "Matt Doyle" <mdoyle@a-m-c.com>

> From: Rob Stewart
> Sent: Friday, September 16, 2005 2:06 PM
> Subject: Re: [boost] Checked Integral Class
>
> > > > add two numbers in the 1-10 range, and you invariably end up with a number in the 2-20 range
> > >
> > > Yep. However, when considering constrained ranges, I find it surprising that the result would be *permitted* to exceed the constraints of the inputs.
> > >
> > > IOW, I'd have expected the result to be constrained to 2-10. Yes, that means that combinations exist that violate that constraint, but at least it wouldn't surprise me.
> >
> > YES, that's exactly the point I was trying to make earlier - you just did a much better job of explaining it :)
>
> Thanks, if in fact I was addressing your concern.
>
> > IMO, the constraints and/or policies set forth when the type was defined must apply for the lifetime of the type.
>
> To be certain we're talking about the same thing, Michael and I have been discussing the result of a computation. Built-in types regularly undergo promotions when participating in such expressions, so the question is whether something similar should happen for a checked integral class and what form it should take.
>
> Michael was positing that the result type should account for the full range of values possible from the computation. I argued against that.
>
> I should also point out that built-in types don't behave that way; they can overflow silently, but they don't magically gain range. (The analogy isn't strong, but has some bearing, I think.)

I think we are talking about the same thing but from different angles; consider this simple case:

    typedef constrained_value<0, 10, policy> MyCV;
    MyCV a = 6;
    MyCV b = 8;
    MyCV c = a + b; // Should fail

If the user wants "c" to be able to hold the result of "a + b" he should define a new type that would accommodate it.

We are on the same page, aren't we?

From: "Matt Doyle" <mdoyle@a-m-c.com>
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Rob Stewart From: "Matt Doyle" <mdoyle@a-m-c.com>
From: "michael toksvig" <michaeltoksvig@yahoo.com>
Please don't overquote.
> > To be certain we're talking about the same thing, Michael and I have been discussing the result of a computation. Built-in types regularly undergo promotions when participating in such expressions, so the question is whether something similar should happen for a checked integral class and what form it should take.
> >
> > Michael was positing that the result type should account for the full range of values possible from the computation. I argued against that.
> >
> > I should also point out that built-in types don't behave that way; they can overflow silently, but they don't magically gain range. (The analogy isn't strong, but has some bearing, I think.)
> I think we are talking about the same thing but from different angles; consider this simple case:
>     typedef constrained_value<0, 10, policy> MyCV;
>     MyCV a = 6;
>     MyCV b = 8;
>     MyCV c = a + b; // Should fail
> If the user wants "c" to be able to hold the result of "a + b" he should define a new type that would accommodate it.
If MyCV::operator+ returns MyCV, then a + b won't work, but the overflow is only detected at runtime. If MyCV::operator+ returns a new type that computes the return type as having a different range (0-20, in this case), then any two MyCVs can be added without error. The question is what happens to the result. In your example, the result is assigned to a MyCV, so the range of the result is constrained, at runtime, to 0-10, regardless of the result's type. That's MyCV's job.

Either Michael's notion of the return type or mine will give you the same result, at least if there is a converting constructor to account for Michael's approach. That is, MyCV would need a constructor template that accepts other checked types so long as the ranges aren't completely incompatible. For example, you wouldn't want the compiler to allow construction of a MyCV from a checked type with range 20-100.

Getting back to your example, that converting constructor could accept the result because the computed range, 0-20, has values that will fit in MyCV's range. Whether the actual value assigned at runtime satisfies the range check is a separate matter. For your example, the result, 14, exceeds MyCV's maximum, so you'd get a runtime error.

The real difference, then, arises with how you use the result of a computation. If you pass it to a function template, for example, our approaches would result in different instantiations. My approach would retain the original range checking (0-10), whereas Michael's approach would have a new range (0-20). That's where I question the validity of his approach. If the result were instead passed to a function expecting a MyCV, it would play out the same as your example.
> We are on the same page, aren't we?
Are we?

-- 
Rob Stewart                                stewart@sig.com
Software Engineer                          http://www.sig.com
Susquehanna International Group, LLP       using std::disclaimer;
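Neither design is spelled out in code anywhere in this thread, so the following is only a sketch of the two pieces under discussion (written in present-day C++ for brevity; the checking policy is omitted and all names are hypothetical). The operator+ widens the result range as Michael proposes, and the converting constructor re-checks the value at runtime as Rob describes; under Rob's preferred scheme, operator+ would instead return the operands' own type and perform the range check inside the addition itself:

    #include <stdexcept>

    template<int Min, int Max>
    class constrained_value {
        int v_;
    public:
        constrained_value(int v) : v_(check(v)) {}

        // Converting constructor: accept any other constrained type whose
        // range at least overlaps ours; the value is re-checked at runtime.
        template<int OMin, int OMax>
        constrained_value(constrained_value<OMin, OMax> const& o) : v_(check(o.get())) {
            static_assert(OMin <= Max && OMax >= Min,
                          "ranges are completely incompatible");
        }

        int get() const { return v_; }

    private:
        static int check(int v) {
            if (v < Min || v > Max) throw std::range_error("out of range");
            return v;
        }
    };

    // Michael's approach: the sum's type gains range.
    template<int AMin, int AMax, int BMin, int BMax>
    constrained_value<AMin + BMin, AMax + BMax>
    operator+(constrained_value<AMin, AMax> const& a,
              constrained_value<BMin, BMax> const& b)
    {
        return a.get() + b.get();  // always within the computed range
    }

With that in place, Matt's example behaves the same under either view of the return type:

    typedef constrained_value<0, 10> MyCV;
    MyCV a = 6;
    MyCV b = 8;
    MyCV c = a + b;  // a + b is a constrained_value<0, 20> holding 14;
                     // converting it to MyCV throws at runtime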

From: Andrey Semashev

Rob Stewart wrote:
> > I think we are talking about the same thing but from different angles; consider this simple case:
> >     typedef constrained_value<0, 10, policy> MyCV;
> >     MyCV a = 6;
> >     MyCV b = 8;
> >     MyCV c = a + b; // Should fail
> > If the user wants "c" to be able to hold the result of "a + b" he should define a new type that would accommodate it.
> The real difference, then, arises with how you use the result of a computation. If you pass it to a function template, for example, our approaches would result in different instantiations. My approach would retain the original range checking (0-10), whereas Michael's approach would have a new range (0-20). That's where I question the validity of his approach.
From my point of view, passing the result of the operation to a template function is the only difference (please correct me if I'm forgetting something). And since we are talking about runtime operations (no metaprogramming at this point), the issue can easily be solved with an additional template function, "constrain" for example, that would ensure that the result is in the specified range:
    template< typename T >
    void foo(T const& cv);

    typedef constrained_value<0, 10, policy> MyCV;
    MyCV a = 6;
    MyCV b = 8;
    MyCV c = a + b;   // Should fail

    typedef constrained_value<0, 20, policy> MyCV2;
    MyCV2 d = a + b;  // Should be ok

    foo(a + b);       // Shall instantiate
                      // foo< constrained_value<0, 20, policy> >()

    foo(constrain< 0, 10 >(a + b)); // Should fail; otherwise it would
                                    // instantiate foo< constrained_value<0, 10, policy> >()

To my mind such an approach is quite sufficient, since (IMHO) in most cases letting the range extend would be more natural.
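The thread never shows constrain itself, but given a widened-range operator+ like the one sketched earlier, it could be as small as this (hypothetical, building on that sketch): it merely forces a conversion back to the narrower type, so the runtime range check fires at the call site instead of inside foo:

    // Narrow any constrained result back to the range <Min, Max>; the
    // converting constructor re-runs the runtime range check and throws
    // if the actual value does not fit.
    template<int Min, int Max, int OMin, int OMax>
    constrained_value<Min, Max> constrain(constrained_value<OMin, OMax> const& v)
    {
        return v;
    }

so that foo(constrain< 0, 10 >(a + b)) throws for a == 6, b == 8 before foo is ever entered.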

From: "michael toksvig" <michaeltoksvig@yahoo.com>

matt, i don't think your code example disambiguates what rob and i are disagreeing on. when you say:

> MyCV c = a + b; // Should fail

you don't say why you think it should fail. there are (at least) 3 possibilities:

a) at compile time (because it is an error to try to assign a 0-20 number to a 0-10 number)
b) at runtime, during the assignment (because it is an error to assign a 0-20 number to a 0-10 number if it is outside the 0-10 range)
c) at runtime, during the addition (because it is an error to add two 0-10 numbers if they add up to more than 10)

i think a) and b) are both useful, and i think rob is advocating c)

regards,

/michael toksvig

From: "michael toksvig" <michaeltoksvig@yahoo.com>
> matt, i don't think your code example disambiguates what rob and i are disagreeing on
> when you say:
> > MyCV c = a + b; // Should fail
> you don't say why you think it should fail. there are (at least) 3 possibilities:
>
> a) at compile time (because it is an error to try to assign a 0-20 number to a 0-10 number)
> b) at runtime, during the assignment (because it is an error to assign a 0-20 number to a 0-10 number if it is outside the 0-10 range)
> c) at runtime, during the addition (because it is an error to add two 0-10 numbers if they add up to more than 10)
> i think a) and b) are both useful, and i think rob is advocating c)
Yes, but using a checking policy, you can have all three, though I'm not sure how to express the difference between a) and c). "compile_time_check" and "run_time_check" are insufficient. You might need separate policies to dictate how conversions and operations are checked.

-- 
Rob Stewart                                stewart@sig.com
Software Engineer                          http://www.sig.com
Susquehanna International Group, LLP       using std::disclaimer;
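One way to picture that separation (a sketch only; these policy names are invented, not from any proposed interface) is to give the class two independent policy slots, one consulted on conversions (michael's cases a and b) and one consulted on arithmetic (his case c):

    #include <stdexcept>

    struct throw_on_conversion {        // b) fail at runtime during assignment
        static int apply(int v, int min, int max) {
            if (v < min || v > max) throw std::range_error("conversion");
            return v;
        }
    };

    struct throw_on_operation {         // c) fail at runtime during the addition
        static int apply(int v, int min, int max) {
            if (v < min || v > max) throw std::range_error("operation");
            return v;
        }
    };

    struct unchecked {                  // opt out of a given check entirely
        static int apply(int v, int, int) { return v; }
    };

    template<int Min, int Max, class ConvPolicy, class OpPolicy>
    class constrained_value {
        int v_;
        struct raw {};
        constrained_value(int v, raw) : v_(v) {}  // bypass ConvPolicy internally
    public:
        constrained_value(int v) : v_(ConvPolicy::apply(v, Min, Max)) {}

        friend constrained_value operator+(constrained_value a, constrained_value b) {
            // The sum keeps the operands' type, so OpPolicy decides here:
            // with throw_on_operation, 6 + 8 fails at the '+'.
            return constrained_value(OpPolicy::apply(a.v_ + b.v_, Min, Max), raw());
        }
    };

    typedef constrained_value<0, 10, throw_on_conversion, throw_on_operation> MyCV;

Case a), the compile-time rejection, only becomes expressible when the result carries its wider range in its type, as in the earlier sketch; a conversion policy could then static_assert on the ranges instead of testing the value at runtime.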
participants (4)

- Andrey Semashev
- Matt Doyle
- michael toksvig
- Rob Stewart