Checked Integral Class

I've written a template class to hold a range of valid integral values. My intent was to mimic Ada's ability to define a type like this:

type SmallInt is range -10 .. 10;

One can then declare objects of this type, and any subsequent assignment that violated this range constraint would throw an exception. I have built a C++ template class that does the same thing:

template<typename T, T min, T max> struct CheckedIntegralValue

To define a type that can hold the same range as the example above:

typedef CheckedIntegralValue<int, -10, 10> SmallIntType;
SmallIntType i = -10; // OK
SmallIntType i2 = -100; // Will throw an exception at run-time for value out-of-range

I won't include the whole thing here, but I can do so if there is enough interest. I have defined most of the operators one needs to use this type just as one would use a 'normal' integer.

Would anyone be interested in something like this in the Boost libraries?

Regards,
Dan McLeran
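A minimal sketch of what such a class might look like (this is not Dan's posted implementation; the member names and the choice of std::out_of_range are illustrative only):

=== CODE ===
#include <stdexcept>

// Sketch only: the stored value is validated on construction and on
// assignment, and an exception is thrown if it leaves [min, max].
template<typename T, T min, T max>
struct CheckedIntegralValue
{
    CheckedIntegralValue(T v = min) : value_(check(v)) {}

    CheckedIntegralValue& operator=(T v)
    {
        value_ = check(v);
        return *this;
    }

    operator T() const { return value_; } // read access as the underlying type

private:
    static T check(T v)
    {
        if (v < min || v > max)
            throw std::out_of_range("CheckedIntegralValue: value out of range");
        return v;
    }

    T value_;
};

typedef CheckedIntegralValue<int, -10, 10> SmallIntType;
// SmallIntType i  = -10;  // OK
// SmallIntType i2 = -100; // throws at run time
===========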

I've written a template class to hold a range of valid integral values. My intent was to mimic Ada's ability to define a type like this:
type SmallInt is range -10 .. 10;
One can then declare objects of this type and any subsequent assignment that violated this range constraint would throw an exception.
I have built a C++ template class that does the same thing:
template<typename T, T min, T max> struct CheckedIntegralValue
To define a type that can hold the same range as the example above:
typedef CheckedIntegralValue<int, -10, 10> SmallIntType;
SmallIntType i = -10; // OK
SmallIntType i2 = -100; // Will throw an exception at run-time for value out-of-range
I won't include the whole thing here, but I can do so if there is enough interest. I have defined most of the operators one needs to use this type just as one would use a 'normal' integer.
Would anyone be interested in something like this in the Boost libraries?
Since the assignment check is done at runtime (as it has to be), why make the range restriction at compile time? Isn't it more useful to have a class that works more like this?

=== CODE ===
CheckedIntegralValue x;
x.setLowerLimit(-5);
x.setUpperLimit(1000);
x = 10;
x = 9000; // Fails at runtime
===========

Does this make sense?

--Steve
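Read literally, Steve's interface could be sketched like this (the default bounds and the choice to throw are assumptions; the post leaves the failure behavior unspecified):

=== CODE ===
#include <climits>
#include <stdexcept>

// Bounds are ordinary data members, set (and changeable) at run time.
class CheckedIntegralValue
{
public:
    CheckedIntegralValue() : lower_(INT_MIN), upper_(INT_MAX), value_(0) {}

    void setLowerLimit(int lo) { lower_ = lo; }
    void setUpperLimit(int hi) { upper_ = hi; }

    CheckedIntegralValue& operator=(int v)
    {
        if (v < lower_ || v > upper_)
            throw std::out_of_range("value out of range"); // "Fails at runtime"
        value_ = v;
        return *this;
    }

    operator int() const { return value_; }

private:
    int lower_, upper_, value_;
};
===========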

Stephen Gross wrote:
Since the assignment check is done at runtime (as it has to be), why make the range restriction at compile time? Isn't it more useful to have a class that works more like this:
=== CODE ===
CheckedIntegralValue x;
x.setLowerLimit(-5);
x.setUpperLimit(1000);
x = 10;
x = 9000; // Fails at runtime
===========
Does this make sense?
Yes, it does, except:

- it means that the limits are run-time properties of each object, which adds to their size (likely 3 times!)
- it means that you cannot use the bounds for static deductions, which can actually ensure that if some operations are always safe, then no run-time checks are necessary (and therefore there is no overhead), or, on the contrary, that if an operation is always wrong then there is no sense in even allowing it to compile.

The only advantage of the run-time approach is that you can set the limits based on information that is not yet available at compile time (like bounds read from a database).

In my opinion both approaches have their uses, just like std::vector and boost::array.

--
Maciej Sobczak
http://www.msobczak.com

Maciej Sobczak wrote:
In my opinion both approaches have their uses, just like std::vector and boost::array.
Right. We could support both ways by making the type of range the value is checked against configurable. I am thinking of a static_range<> for compile-time checked values and a dynamic_range to be set up at runtime.

// compile time checked value
typedef static_range<-10,+10> my_static_constraint;
typedef constraint_value< my_static_constraint, some_more_policies > ct_value;
ct_value ctv = 303; // asserts or throws or ...

// runtime checked value
typedef dynamic_range<int> my_dynamic_constraint;
typedef constraint_value< my_dynamic_constraint, some_more_policies > rt_value;
rt_value rtv;
rtv.lower_bound( 0 );
rtv.upper_bound( 40 );
rtv = 20; // good
rtv = -1; // bad

Do you think that's possible?
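A rough sketch of how the two kinds of range could plug into one value class (the names static_range, dynamic_range and constrained_value follow the post above; the throwing behaviour and the omission of further policies are assumptions made only for this example):

=== CODE ===
#include <stdexcept>

// Compile-time range: the bounds are part of the type.
template<int Min, int Max>
struct static_range
{
    static bool contains(int v) { return v >= Min && v <= Max; }
};

// Run-time range: the bounds are per-object state.
class dynamic_range
{
public:
    dynamic_range(int lo, int hi) : lo_(lo), hi_(hi) {}
    bool contains(int v) const { return v >= lo_ && v <= hi_; }
private:
    int lo_, hi_;
};

// The value class only asks its range "does this value fit?"; what to do
// on a violation would be further policies. Note: the initial value is
// not validated in this sketch.
template<typename Range>
class constrained_value
{
public:
    explicit constrained_value(Range r = Range(), int v = 0) : range_(r), value_(v) {}

    constrained_value& operator=(int v)
    {
        if (!range_.contains(v))
            throw std::out_of_range("constrained_value: out of range");
        value_ = v;
        return *this;
    }

    operator int() const { return value_; }

private:
    Range range_;
    int   value_;
};

// constrained_value< static_range<-10, 10> > ctv;               // bounds fixed at compile time
// constrained_value< dynamic_range > rtv(dynamic_range(0, 40)); // bounds chosen at run time
// ctv = 3;   rtv = 20;   // OK
// ctv = 303; rtv = -1;   // throws
===========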

I actually have a run-time version of this as well, but I prefer to use the compile-time version wherever I can, for the reasons posted by Maciej Sobczak.

"Stephen Gross" <sgross@darwin.epbi.cwru.edu> wrote in message news:dg9m12$p92$1@sea.gmane.org...
[snip original post]
Since the assignment check is done at runtime (as it has to be), why make the range restriction at compile time? Isn't it more useful to have a class that works more like this:
=== CODE ===
CheckedIntegralValue x;
x.setLowerLimit(-5);
x.setUpperLimit(1000);
x = 10;
x = 9000; // Fails at runtime
===========
Does this make sense?
--Steve

Hi,

Dan McLeran wrote:
I've written a template class to hold a range of valid integral values. My intent was to mimic Ada's ability to define a type like this:
type SmallInt is range -10 .. 10;
One can then declare objects of this type and any subsequent assignment that violated this range constraint would throw an exception.
Careful with that. This approach, in my humble opinion, is very limited. Why do you assume that an exception is The Right Thing when the out-of-bound condition is detected?

These are things that I would like to have, *depending on situation*:

1. throw some exception (this is what you have)
2. abort immediately without doing *anything* (note that an exception causes some code to be executed, like destructors of automatic objects - I might want to avoid this and just kill the program immediately)
3. wrap the result (a kind of modulo arithmetic, but more general because it allows the lower bound to be different from 0 - say, if the range is 10..20 then 23 should be wrapped to 13)
4. saturate the result (if you have the range 0..100 and the result of some computation is 150, cut it to 100 and continue with it) - actually, saturated arithmetic is very convenient, for example in signal processing
5. actually allow the out-of-range value, but log that fact somewhere or send me an e-mail, etc.
6. ...

I leave the list open, because it is clear that the library itself should be customizable in this regard.

More to this - the bare checked class is not really emulating what Ada supports. What about this (Ada):

type SmallInt is range -10 .. 10;
subtype EvenSmallerInt is SmallInt range -5 .. 5;

Now, you can use EvenSmallerInt wherever SmallInt is expected, because one is a subtype of the other. Ada does not allow multiple supertypes, although that is a logical extension of the idea, quite useful in my opinion and worth having.

You will find a very sketchy and incomplete implementation of all these ideas at:

<http://www.msobczak.com/prog/bin/safetypes.tar.gz> (3kB)

--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
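To make the policy idea concrete, here is a sketch of a few such policies (the policy names and the checked_int wrapper are invented for this example and are not taken from either poster's code; the wrapping convention treats the range as hi - lo + 1 distinct values, which differs slightly from the 10..20 example above):

=== CODE ===
#include <cstdlib>
#include <stdexcept>

// Each policy decides what happens to a value v that must end up in [lo, hi].
struct throw_policy
{
    static int handle(int v, int lo, int hi)
    {
        if (v < lo || v > hi) throw std::out_of_range("out of range");
        return v;
    }
};

struct abort_policy
{
    static int handle(int v, int lo, int hi)
    {
        if (v < lo || v > hi) std::abort(); // no unwinding, no destructors run
        return v;
    }
};

struct saturate_policy
{
    static int handle(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v); // clamp into the range
    }
};

struct wrap_policy
{
    static int handle(int v, int lo, int hi)
    {
        const int span = hi - lo + 1; // number of representable values
        int r = (v - lo) % span;      // offset from the lower bound
        if (r < 0) r += span;
        return lo + r;
    }
};

// The checked value simply delegates to its policy on every assignment.
template<int Lo, int Hi, typename Policy>
class checked_int
{
public:
    checked_int(int v = Lo) : value_(Policy::handle(v, Lo, Hi)) {}
    checked_int& operator=(int v) { value_ = Policy::handle(v, Lo, Hi); return *this; }
    operator int() const { return value_; }
private:
    int value_;
};

// checked_int<0, 100, saturate_policy> x; x = 150; // x becomes 100
===========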

I like the idea of having the response to the over/under range condition configurable. Maybe a policy class could handle what to do in that situation? I guess I got a little tunnel vision trying to emulate what Ada does in response to an invalid assignment, which is to raise Constraint_Error.

I agree that this does not emulate exactly what is possible in Ada, but I liked the idea of being able to specify that a type can only hold a specific range of values.

"Maciej Sobczak" <prog@msobczak.com> wrote in message news:43289177.9030904@msobczak.com...
[snip]

Also, the type of a binary operator's return value should combine the intervals of the operands.

E.g. given:

CheckedIntegralValue<int, -10, 100> a;
CheckedIntegralValue<int, -100, 20> b;

then the type of a+b should be CheckedIntegralValue<int, -110, 120>.

And given:

CheckedIntegralValue<int, -110, 120> sum;

then sum = a+b, sum = a, and sum = b should all compile just fine; conversely, a = sum, b = sum, and a = b should not.

This may have been implied, just wanted to make sure.

Regards,

/michael toksvig

"Dan McLeran" <dan.mcleran@seagate.com> wrote in message news:dga6uu$c6j$1@sea.gmane.org...
[snip]

Actually, I hadn't thought about that possibility. How would one implement this?

CheckedIntegralValue<int, -10, 100> a = -10;
CheckedIntegralValue<int, -100, 20> b = -100;
???????????????????????????????? c = a + b;

One could have a template to figure this out, but I would leave it up to the developer to specify the type of c. You could do something like this:

template<typename T1, typename T2>
struct AdditionTypeResolver
{
    typedef typename CheckedIntegralValue<T1::min + T2::min, T1::max + T2::max> type;
};

typedef CheckedIntegralValue<int, -10, 100> Type1;
typedef CheckedIntegralValue<int, -100, 20> Type2;
Type1 a = -10;
Type2 b = -100;
AdditionTypeResolver<Type1, Type2 >::type c = a + b;

"michael toksvig" <michaeltoksvig@yahoo.com> wrote in message news:dgcini$ble$1@sea.gmane.org...
[snip]
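As posted, AdditionTypeResolver omits CheckedIntegralValue's underlying type parameter and has a stray typename. A compilable variant of the same idea might look like this (it assumes the checked class exposes value_type, min and max as members, which the posted code does not necessarily do):

=== CODE ===
// Skeleton of the checked type, exposing its parameters as members so
// that metafunctions can read them back (value, constructors and
// operators omitted).
template<typename T, T Min, T Max>
struct CheckedIntegralValue
{
    typedef T value_type;
    static const T min = Min;
    static const T max = Max;
};

// The result type of a + b is obtained by interval arithmetic on the bounds.
template<typename T1, typename T2>
struct AdditionTypeResolver
{
    typedef CheckedIntegralValue<
        typename T1::value_type,  // assumes both operands share the same underlying type
        T1::min + T2::min,
        T1::max + T2::max
    > type;
};

typedef CheckedIntegralValue<int, -10, 100> Type1;
typedef CheckedIntegralValue<int, -100, 20> Type2;
// AdditionTypeResolver<Type1, Type2>::type is CheckedIntegralValue<int, -110, 120>
===========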

From: "Dan McLeran" <dan.mcleran@seagate.com>
template<typename T1, typename T2>
struct AdditionTypeResolver
{
    typedef typename CheckedIntegralValue<T1::min + T2::min, T1::max + T2::max> type;
};

typedef CheckedIntegralValue<int, -10, 100> Type1;
typedef CheckedIntegralValue<int, -100, 20> Type2;
Type1 a = -10;
Type2 b = -100;
AdditionTypeResolver<Type1, Type2 >::type c = a + b;
What if T1::min and T2::min, for example, have different signs?
"michael toksvig" <michaeltoksvig@yahoo.com> wrote in message news:dgcini$ble$1@sea.gmane.org... [big snip]
Please have a look at http://www.boost.org/more/discussion_policy.htm#effective, particularly the parts on overquoting and quotation style.

-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

This could be checked at compile time. I personally don't want to pursue this path; I'd rather leave it up to the developer to specify the type of the result value of this addition.

"Rob Stewart" <stewart@sig.com> wrote in message news:200509152136.j8FLa52H011209@shannonhoon.balstatdev.susq.com...
From: "Dan McLeran" <dan.mcleran@seagate.com>
template<typename T1, typename T2> struct AdditionTypeResolver { typedef typename CheckedIntegralValue<T1::min + T2::min, T1::max + T2::max> type; };
typedef CheckedIntegralValue<int, -10, 100> Type1; typedef CheckedIntegralValue<int, -100, 20> Type2; Type1 a = -10; Type2 b = -100; AdditionTypeResolver<Type1, Type2 >::type c = a + b;
What if T1::min and T2::min, for example, have different signs?
"michael toksvig" <michaeltoksvig@yahoo.com> wrote in message news:dgcini$ble$1@sea.gmane.org... [big snip]
Please have a look at http://www.boost.org/more/discussion_policy.htm#effective, particularly the parts on overquoting and quotation style.
-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer; _______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

From: "michael toksvig" <michaeltoksvig@yahoo.com>
What if T1::min and T2::min, for example, have different signs?
Please keep attributions in your replies when you quote a previous poster. See http://www.boost.org/more/discussion_policy.htm#effective. It helps keep track of things better.
i don't see why that would make a difference
If you add types with T1::min == -10 and T2::min == 20, the computed minimum would be 10, which is less restrictive than T2::min. It's surprising for the result type to have a less restrictive range than the inputs.

-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

"Rob Stewart" <stewart@sig.com> wrote in message news:200509161700.j8GH0nHo017101@shannonhoon.balstatdev.susq.com... It's surprising for the result type to have a less restrictive range than the inputs.
Add two numbers in the 1-10 range, and you invariably end up with a number in the 2-20 range, which is both more (in the low end) and less (in the high end) restrictive than the inputs.

I don't find that surprising at all; indeed, I wouldn't consider using a checked integral class that did not reflect that. But that may just be me, since I can't afford to test at runtime something that could have been tested at compile time. Other people may have other considerations, which is just fine; a good library doesn't try to be everything to everybody.

/michael toksvig

From: "michael toksvig" <michaeltoksvig@yahoo.com>
"Rob Stewart" <stewart@sig.com> wrote in message news:200509161700.j8GH0nHo017101@shannonhoon.balstatdev.susq.com... It's surprising for the result type to have a less restrictive range than the inputs.
add two numbers in the 1-10 range, and you invariably end up with a number in the 2-20 range
Yep. However, when considering constrained ranges, I find it surprising that the result would be *permitted* to exceed the constraints of the inputs. IOW, I'd have expected the result to be constrained to be 2-10. Yes, that means that combinations exist that violate that constraint, but at least it wouldn't surprise me.
that is both more (in the low end) and less (in the high end) restrictive than the inputs
Yes, but I find that surprising.
i don't find that surprising at all; indeed, i wouldn't consider using a checked integral class that did not reflect that
Consider that the class is used to model, say, permitted voltages in an electronic circuit. Say that an input on the circuit is constrained by only being able to support voltages of 2-5V. (If you connect two batteries in series, the voltages add.) If each battery's voltage is expressed with a checked integer of the 2-5V range, and their voltages are summed, your result would be a checked integer with a 4-10V range. That exceeds the permitted input voltage range, and two batteries of 4V each would blow the input. How would you handle that in your model?

In my approach, the result would be a value with a 4-5V range. That fits within the 2-5V range, and so a successful sum of battery voltages doesn't exceed the range of the input.

Unless I've missed something, your approach requires a runtime check to see whether the actual result of the addition exceeds the input voltage range during some later computation or comparison. My approach catches the range error during the addition.

-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

"Rob Stewart" <stewart@sig.com> wrote in message news:200509162032.j8GKWh51025673@shannonhoon.balstatdev.susq.com... Consider that the class is used to model, say, permitted voltages in an electonic circuit. Say that an input on the circuit is constrained by only being able to support voltages of 2-5V. (If you connect two batteries in series, the voltages add.) If each battery's voltage is expressed with a checked integer of the 2-5V range, and their voltages are summed, your result would be a checked integer with a 4-10V range. That exceeds the permitted input voltage range and two batteries of 4V each would blow the input. How would you handle that in your model?
Given a constrained value type, CV<min, max, policy>, and the following:

CV<2, 5, some_policy> batteryA, batteryB;

you could decide either to do:

CV<2, 5, compiletime_policy> input = batteryA+batteryB; // compiler will kindly alert you to the fact that there is a potential problem here

or, if you are confident the potential problem doesn't occur:

CV<2, 5, assert_policy> input = batteryA+batteryB; // at runtime, if something bad happens, it will assert

From: "michael toksvig" <michaeltoksvig@yahoo.com>
"Rob Stewart" <stewart@sig.com> wrote in message news:200509162032.j8GKWh51025673@shannonhoon.balstatdev.susq.com... Consider that the class is used to model, say, permitted voltages in an electonic circuit. Say that an input on the circuit is constrained by only being able to support voltages of 2-5V. (If you connect two batteries in series, the voltages add.) If each battery's voltage is expressed with a checked integer of the 2-5V range, and their voltages are summed, your result would be a checked integer with a 4-10V range. That exceeds the permitted input voltage range and two batteries of 4V each would blow the input. How would you handle that in your model?
given a constrained value type, CV<min, max, policy> and the following:
CV<2, 5, some_policy> batteryA, batteryB;
you could decide either to do:
CV<2, 5, compiletime_policy> input = batteryA+batteryB; // compiler will kindly alert you to the fact that there is a potential problem here
IOW, it won't compile, right? That still doesn't address what the return type of the addition expression is. I think you're saying that the return type would have the computed range and that the construction of input would fail because the computed and specified ranges differ.
or, if you are confident the potential problem doesn't occur:
CV<2, 5, assert_policy> input = batteryA+batteryB; // at runtime, if something bad happens, it will assert
That means the result is validated when input is constructed, but it still requires a template constructor which can accept a CV<X,Y,P> where X is not necessarily the same as the new object's minimum, Y is not necessarily the same as the new object's maximum, and P is not necessarily the same as the new object's checking policy. If the result of the addition is saved with the expanded range, then the point at which you detect the overflow is removed from the point at which you caused it, which can be a problem.

Since you've introduced the idea of compile_time_check and run_time_check (better names, I think), you leave it to the library user to decide which approach is most appropriate in the context. That's probably the only way to settle the issue.

In my example, I'm thinking that the correct behavior is to disallow combining two 2-5V batteries if using the run_time_check policy, because it is a design-time problem I was trying to present. I can see, however, that such batteries don't necessarily have a full charge, so a specific combination of such batteries can actually work; using run_time_check could therefore be acceptable if you were modeling real circuits rather than designing them.

[snip: Don't overquote, please]

-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

A counter-question, Rob: how would you handle the addition of 2 numbers between 90 and 100, in your model?

There are no possible outputs (180-200) within the input range (90-100). If, as I believe you suggest, the output range must match the input range, you would have to cast the inputs to match the output range. But the input and output ranges are disjoint.

One solution would be to cast the inputs to the union of the output and input ranges (90-200), then perform the addition. Of course, you would then want to cast the result back to the real output range of 180-200. That seems cumbersome to me, but more importantly, you lose the opportunity for letting the compiler provide valuable validation of your design.

In my field (digital electronics), at least, there is little use for operators that are unable to produce results outside of the input range. Examples of trivial operations that do not adhere to that limitation include:

- adding two 5-bit integers (result must be stored in a 6-bit integer)
- multiplying an 11-bit integer by a 16-bit integer (result must be stored in a 27-bit integer)
- shifting an 11-bit integer up by 5 positions (result must be stored in a 16-bit integer)

Granted, my field is somewhat specialized, but I don't think that the case where the minimum and maximum voltage of both your batteries happens to match the minimum and maximum voltage of your input is all that common. Is it really that practical to hook two 2V-5V batteries in series up to a 2V-5V input? More common than a 4V-10V input? Or even a 2V-7V input? I mean: the latter would at least allow you to fully charge one of the batteries without destroying the system.

Regards,

/michael toksvig

From: "michael toksvig" <michaeltoksvig@yahoo.com>
a counter-question, rob: how would you handle the addition of 2 numbers between 90 and 100, in your model?
there are no possible outputs (180-200) within the input range (90-100). if, as i believe you suggest, the output range must match the input range, you would have to cast the inputs to match the output range. but the input and output ranges are disjoint
I concede. That case is compelling. If the library user tries to put the result in a 90-100 range object, it will fail to compile because there's no way for that to work. Note that if the destination type can possibly hold the result, then the check has to wait for run time.

-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

From: "michael toksvig" <michaeltoksvig@yahoo.com>
also, the type of a binary operator's return value should combine the intervals of the operands
e.g. given:
CheckedIntegralValue<int, -10, 100> a;
CheckedIntegralValue<int, -100, 20> b;
then the type of a+b should be CheckedIntegralValue<int, -110, 120>
and given:
CheckedIntegralValue<int, -110, 120> sum;
then sum = a+b, sum = a, and sum = b should all compile just fine
conversely, a = sum, b = sum, and a = b should not
If you think that's appropriate, then there needs to be a range-checked, converting constructor from values with differing minima and maxima.

-- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;
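One possible shape for such a converting constructor, as a sketch (the class name and members are invented here; this variant checks narrowing conversions at run time, whereas michael's preference is to reject them at compile time):

=== CODE ===
#include <stdexcept>

template<typename T, T Min, T Max>
class checked
{
public:
    checked(T v = Min) : value_(validate(v)) {}

    // Converting constructor: accepts a checked value with different bounds.
    // The condition on the bounds is a compile-time constant, so when the
    // source range is contained in the destination range the run-time check
    // is dead code that an optimizer can remove.
    template<T OtherMin, T OtherMax>
    checked(const checked<T, OtherMin, OtherMax>& other)
        : value_(OtherMin >= Min && OtherMax <= Max
                     ? static_cast<T>(other)            // widening: always safe
                     : validate(static_cast<T>(other))) // narrowing: checked at run time
    {}

    operator T() const { return value_; }

private:
    static T validate(T v)
    {
        if (v < Min || v > Max)
            throw std::out_of_range("checked: out of range");
        return v;
    }

    T value_;
};

// checked<int, -110, 120> sum = checked<int, -10, 100>(50); // widening, no check needed
// checked<int, -10, 10>   a   = checked<int, -110, 120>(5); // narrowing, checked at run time
===========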

Dan McLeran wrote:
I've written a template class to hold a range of valid integral values. My intent was to mimic Ada's ability to define a type like this: type SmallInt is range -10 .. 10;
One can then declare objects of this type and any subsequent assignment that violated this range constraint would throw an exception. I have built a C++ template class that does the same thing:
template<typename T, T min, T max> struct CheckedIntegralValue
Just a small proposal. It would be nice if CheckedIntegralValue would choose the actual type of the value itself. For example, for range -10 .. 10 it would use signed char, for -1000 .. 1000 signed int, and for 0 .. 300 unsigned int. Of course, its type-detecting logic should be overridable. I think it would look something like:

template<
    signed long long min,
    signed long long max,
    typename T = typename auto_detected< min, max >::type,
    typename BoundingPolicyT = throw_on_overflow
>
struct CheckedIntegralValue;
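A sketch of how auto_detected might pick a type (the select helper is an ad-hoc stand-in for a compile-time if, only a few signed cases are handled, and it assumes long long non-type template parameters are available, which is questioned later in the thread):

=== CODE ===
#include <climits>

// Ad-hoc compile-time "if": yields First when Cond is true, Second otherwise.
template<bool Cond, typename First, typename Second>
struct select { typedef First type; };

template<typename First, typename Second>
struct select<false, First, Second> { typedef Second type; };

// Picks a built-in type whose range covers [Min, Max]. Only signed types
// and a handful of cases are handled; a real implementation would cover
// all integer types and could prefer unsigned types for ranges starting at 0.
template<signed long long Min, signed long long Max>
struct auto_detected
{
    typedef typename select<
        (Min >= SCHAR_MIN && Max <= SCHAR_MAX), signed char,
        typename select<
            (Min >= SHRT_MIN && Max <= SHRT_MAX), short,
            typename select<
                (Min >= INT_MIN && Max <= INT_MAX), int,
                signed long long
            >::type
        >::type
    >::type type;
};

// auto_detected<-10, 10>::type       is signed char
// auto_detected<-40000, 40000>::type is int (assuming 16-bit short, 32-bit int)
===========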

I like this idea. We should keep this in mind while moving forward.

"Andrey Semashev" <andysem@mail.ru> wrote in message news:dgcgp3$52u$1@sea.gmane.org...
[snip]

On 9/15/05 3:05 PM, "Andrey Semashev" <andysem@mail.ru> wrote: [SNIP previous post]
Just a small proposal. It would be nice if CheckedIntegralValue would choose the actual type of the value itself. For example, for range -10 .. 10 it would use signed char, for -1000 .. 1000 signed int, and for 0 .. 300 unsigned int. Of course, its type-detecting logic should be overridable. I think it would look something like:
template<
    signed long long min,
    signed long long max,
    typename T = typename auto_detected< min, max >::type,
    typename BoundingPolicyT = throw_on_overflow
>
struct CheckedIntegralValue;
I don't think the double-long types (long long) are guaranteed to be usable as value-based template parameters, since they're not officially in C++ (yet). That's why I didn't use them in Boost.CRC. (That was several years ago, so maybe that changed. Also, someone else told me that; I never checked it myself.)

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com
participants (8)
- Andrey Semashev
- Dan McLeran
- Daryle Walker
- Maciej Sobczak
- michael toksvig
- Rob Stewart
- Sascha Seewald
- Stephen Gross