rational with finite precision

i think the documentation on boost::rational is misleading, and it cannot be used with finite precision (e.g. the built-in) types at all: whenever the result of an operation cannot be represented exactly, it returns something totally bogus

for example with rational<short> let's try to add 191/181 (=1.055...) and 197/193 (=1.021...): the exact result is 72520/34933 (=2.076...), and we get 6984/-30603 (=-0.229...)!

i don't think the intended audience can "code in anticipation of such issues" (the task of determining when these issues come up has the same complexity as doing the actual arithmetic)

imho this also means that the pre-calculated gcd() trick in op+= doesn't buy much: finite precision calculations will still fail randomly, and my guess would be that it only makes unlimited precision calculations slower (2 gcd() calls + 4 op/()s instead of 1 gcd() call)

fixing rational<> to work with finite precision integers so that for calculations like the above we get the best representable approximation (in this case 3197/1540 = 2.076, precise to 7 digits) is pretty straightforward -- the gcd() function implicitly calculates all representable approximations -- but would require rewriting the whole class

br, andras
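[Editorial note: a minimal program reproducing the failure described above; it assumes a Boost installation and 16-bit shorts, and the exact garbage printed may differ by platform.]

#include <boost/rational.hpp>
#include <iostream>

int main()
{
    boost::rational<short> a(191, 181);   // ~1.055
    boost::rational<short> b(197, 193);   // ~1.021
    // The exact sum is 72520/34933 (~2.076), but neither the numerator nor
    // the denominator fits in a 16-bit short, so the stored result is
    // meaningless (the poster observed 6984/-30603).
    std::cout << a + b << '\n';
    return 0;
}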

Andras Erdei <aerdei <at> gmail.com> writes:
i think the documentation on boost::rational is misleading, and it cannot be used with finite precision (e.g. the built-in) types at all: whenever the result of an operation cannot be represented exactly, it returns something totally bogus
This is true for *any* operations on the built-in types. If you over/underflow, the results are meaningless.
for example with rational<short> let's try to add 191/181 (=1.055...) and 197/193 (=1.021...) the exact result is 72520/34933 (=2.076...), and we get 6984/-30603 (=-0.229...)!
You didn't state what platform you were running on, but I figure it's likely that shorts are 16 bits. Therefore both the numerator and denominator of the result are out of range (greater than 32768), so there is no possible way that rational<short> can represent this number. What can you expect it to do in such case?
i don't think the intended audience can "code in anticipation of such issues" (the task of determining when these issues come up has the same complexity as doing the actual arithmetic)
I will agree that it's much less obvious when you will get into trouble, since the numbers were of modest magnitude -- though still greater than sqrt(32768) which is a (very rough) indication of potential trouble. (And you did choose a rare case, where all the numerators and denominators are prime numbers.) In this case the obvious solution is rational<long>, but similar examples can of course be created for it. In general, you can never use finite precision and expect accurate results without knowing something about the numbers you are using. That's true for all finite precision types.
fixing rational<> to work with finite precision integers so that for calculations like the above we get the best representable approximation (in this case 3197/1540 = 2.076, precise to 7 digits) is pretty straightforward -- the gcd() function implicitly calculates all representable approximations -- but would require rewriting the whole class
If someone wants the best approximation rather than the exact answer, they would probably just be using floating point. rational is used when people want an exact answer, not just an approximation. I think it would be more useful to add a mechanism (probably optional) for reporting when such overflows occurred (or some equivalent to floating point NANs and INFs, perhaps).

-- Mickey Moore

Mickey Moore wrote:
If someone wants the best approximation rather than the exact answer, they would probably just be using floating point. rational is used when people want an exact answer, not just an approximation.
Agreed.
I think it would be more useful to add a mechanism (probably optional) for reporting when such overflows occurred (or some equivalent to floating point NANs and INFs, perhaps).
I'm now the maintainer of Boost.Rational, although I haven't made any changes yet (the only change I've been planning to make is to make int_type public). It might be useful to introduce a checked_rational template, interconvertible with rational, which throws upon overflow. When I get a little less far behind with my current projects, I'm planning to post here asking for feature requests. Jonathan
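[Editorial note: a rough sketch of the kind of check such a checked_rational might perform on each numerator/denominator multiplication; checked_mul is an illustrative name, not existing Boost code, and it assumes long long is wider than the value type.]

#include <limits>
#include <stdexcept>

// Hypothetical helper: multiply two integers, throwing if the product does
// not fit in Int. A checked_rational could route its numerator/denominator
// arithmetic through checks like this one.
// Assumes long long is wider than Int (true for short and int on common platforms).
template <typename Int>
Int checked_mul(Int a, Int b)
{
    long long p = static_cast<long long>(a) * static_cast<long long>(b);
    if (p > std::numeric_limits<Int>::max() || p < std::numeric_limits<Int>::min())
        throw std::overflow_error("checked_rational: overflow");
    return static_cast<Int>(p);
}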

On Sun, 12 Dec 2004 17:53:08 +0000 (UTC), Mickey Moore <mgmoore@austin.rr.com> wrote:
This is true for *any* operations on the built-in types. If you over/underflow, the results are meaningless.
no, it is more like 1.0/3.0 giving -0.1415 because 1/3 cannot be represented exactly as a floating point value (or 1/3 giving -20; imho not what a user expects)
You didn't state what platform you were running on, but I figure it's likely that shorts are 16 bits.
yes, i have chosen (a 16 bit) short to make the example small
Therefore both the numerator and denominator of the result are out of range (greater than 32768), so there is no possible way that rational<short> can represent this number. What can you expect it to do in such case?
return the (unique) best approximation, in this case 3197/1540, or (an easier way out) to state in the docs that boost::rational<> only works with unlimited precision types
In general, you can never use finite precision and expect accurate results
no, but imho i can expect reasonable results (the rationale for using boost::rational instead of the built-in floating point types would be to get slower, but useable results even when floats return garbage)
If someone wants the best approximation rather than the exact answer, they would probably just be using floating point. rational is used when people want an exact answer, not just an approximation.
if i understand you correctly, what you say is that rational<> will never be instantiated with finite precision integers anyway (no exact answer then), so there is no reason not to change the docs

afaik rationals (and interval arithmetic and numerous other workarounds) are used in applications where floating point arithmetic gives you unusable results (CAD, electronic circuit design and so on), because finite rational arithmetic has much nicer properties than finite floating-point
I think it would be more useful to add a mechanism (probably optional) for reporting when such overflows occurred (or some equivalent to floating point NANs and INFs, perhaps).
i do not think overflow is the correct term in this case: the result (2.076...) is well inside the range of representable values [you do not say that 1.0/3.0 overflows (or underflows), so you also cannot say it about the example i have given]

br, andras

Andras Erdei wrote:
Mickey Moore wrote:
... What can you expect it to do in such case?
return the (unique) best approximation, in this case 3197/1540, or (an easier way out) to state in the docs that boost::rational<> only works with unlimited precision types
This is essentially what the docs currently state: "The rational number class is designed for use in conjunction with an unlimited precision integer class." It would be easy to prevent rational from being instantiated with built-in integral types, but this would just make the library less useful. Perhaps the disclaimer should be featured more prominently.
If someone wants the best approximation rather than the exact answer, they would probably just be using floating point. rational is used when people want an exact answer, not just an approximation.
if i understand you correctly, what you say is that rational<> will never be instantiated with finite precision integers anyway (no exact answer then), so there is no reason not to change the docs
afaik rationals (and interval arithmetic and numerous other workarounds) are used in applications where floating point arithmetic gives you unusable results (CAD, electronic circuit design and so on), because finite rational arithmetic has much nicer properties than finite floating-point
It sounds like you are describing a numeric type which could coexist with boost::rational. Perhaps you should implement it and post it. Jonathan

On Sun, 12 Dec 2004 12:35:29 -0700, "Jonathan Turkanis" <technews@kangaroologic.com> wrote:
This is essentially what the docs currently state: "The rational number class is designed for use in conjunction with an unlimited precision integer class." It would be easy to prevent rational from being instantiated with built-in integral types, but this would just make the library less useful. Perhaps the disclaimer should be featured more prominently.
[disclaimer: i'm not native english]

rational.html in boost 1.30.2 says:

"Any of the built-in integer types provided by the C++ implementation are supported as values for I. User-defined types may also be used, but..."

this gives the impression of non-builtins being the exception

"It is therefore likely that the rational number class will in many cases be used with limited-precision integer types, such as the built-in int type."

this one, too

"When used with a limited precision integer type, the rational class suffers from many of the precision issues which cause difficulty with floating point types. While it is likely that precision issues will not affect simple uses of the rational class, users should be aware that such issues exist."

imho with a built-in type rational<> has more issues than float, and not just precision, but also garbage instead of rounding, and it is likely to affect even simple uses
It sounds like you are describing a numeric type which could coexist with boost::rational. Perhaps you should implement it and post it.
it does not necessarily have to be separate from rational<>

if there is interest in using it, but no one is interested in implementing it, and it is not a problem that it will take months for me (end of semester, exams time), then yes, i can do it

br, andras

Andras Erdei wrote:
On Sun, 12 Dec 2004 12:35:29 -0700, "Jonathan Turkanis" <technews@kangaroologic.com> wrote:
This is essentially what the docs currently state: "The rational number class is designed for use in conjunction with an unlimited precision integer class." It would be easy to prevent rational from being instantiated with built-in integral types, but this would just make the library less useful. Perhaps the disclaimer should be featured more prominently.
[disclaimer: i'm not native english]
rational.html in boost 1.30.2 says:
"Any of the built-in integer types provided by the C++ implementation are supported as values for I. User-defined types may also be used, but..."
this gives the impression of non-builtins being the exception
I think you are right that this is misleading. The qualifications in "Design Notes" --> "Limited-range integer types" need to be stated in the requirements section. Still, it may be true that use of built-in integral types is the most common use, and I believe this is okay as long as users are aware of the issues.
"It is therefore likely that the rational number class will in many cases be used with limited-precision integer types, such as the built-in int type."
this one, too
"When used with a limited precision integer type, the rational class suffers from many of the precision issues which cause difficulty with floating point types. While it is likely that precision issues will not affect simple uses of the rational class, users should be aware that such issues exist."
I think these passages are consistent with the statement that boost::rational is "designed for use in conjunction with an unlimited precision integer class."
imho with a built-in type rational<> has more issues than float, and not just precision, but also garbage instead of rounding, and it is likely to affect even simple uses
It sounds like you are describing a numeric type which could coexist with boost::rational. Perhaps you should implement it and post it.
it does not necessarily have to be separate from rational<>
Yes, I got that part ;-)
if there is interest in using it, but no one is interested in implementing it, and it is not a problem that it will take months for me (end of semester, exams time), then yes, i can do it
Okay, let's see if there's interest.
br, andras
Jonathan

On Sun, 12 Dec 2004 13:47:54 -0700, Jonathan Turkanis wrote
imho with a built-in type rational<> has more issues than float, and not just precision, but also garbage instead of rounding, and it is likely to affect even simple uses
It sounds like you are describing a numeric type which could coexist with boost::rational. Perhaps you should implement it and post it.
it does not necessarily have to be separate from rational<>
Yes, I got that part ;-)
But it would be nice if it was extracted from rational<> to make it more general purpose.
if there is interest in using it, but no one is interested in implementing it, and it is not a problem that it will take months for me (end of semester, exams time), then yes, i can do it
Okay, let's see if there's interest.
Well, I have some interest in the general problems of number type representations / problems -- so I'm interested. And let me see if I can expand the scope and motivate others ;-)

In date-time there are a number of little number utilities -- some should probably come out as first class libraries with docs instead of being buried as implementations. For example:

- date-time/int_adapter.hpp -- a template that adapts an integer to allow for special values (infinities and not a number).
- constrained_value (fancier version written up in last month's CUJ) -- effectively prevents assignment, etc to an integer outside a compile-time defined range (eg: seconds are always 0-60) -- has policy-based error handling.
- wrapping_int -- creates an integer type that 'wraps around' on adding or subtracting. So, for example, if the range is 1-12 and the value is 11 and you add 2, you get 1 (sketched below). This might be handy in creating good behavior for an angle class -- values always 0-360.

The issues you raise w.r.t. rounding etc are things that have general application.

Finally, it would be nice if someone would actually submit infinite precision number types and resurrect the fixed decimal proposal (it died after it was rejected in review). The rounding and accuracy functions contained in these types are nasty to get right, and it's sad that, once again, C++ programmers are left on their own to solve one of these issues. Java, for example, has BigInteger and BigDecimal types available -- lots of scripting languages have these types. (Oh, and yes I know there are some of these classes out there as open source, but let's get them into Boost!)

Jeff
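[Editorial note: the wrap-around behaviour described for wrapping_int might look roughly like this from-scratch sketch; it is an illustration of the idea only, not the actual date_time/wrapping_int.hpp code.]

#include <iostream>

// Illustrative sketch: an integer confined to [Low, High] that wraps around
// on addition (11 + 2 in a 1-12 range yields 1).
template <int Low, int High>
class wrapping_int
{
public:
    explicit wrapping_int(int v) : value_(wrap(v)) {}
    wrapping_int& operator+=(int n) { value_ = wrap(static_cast<long>(value_) + n); return *this; }
    int value() const { return value_; }
private:
    static int wrap(long v)
    {
        const long span = High - Low + 1;
        long offset = (v - Low) % span;
        if (offset < 0) offset += span;      // keep the offset non-negative
        return static_cast<int>(Low + offset);
    }
    int value_;
};

int main()
{
    wrapping_int<1, 12> month(11);
    month += 2;
    std::cout << month.value() << '\n';      // prints 1
}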

At 04:48 PM 12/12/2004, Jeff Garland wrote:
Finally, it would be nice if someone would actually submit infinite precision number types
I agree strongly. It has always seemed to me to be a real hole in Boost not to have an infinite precision number type.
and resurrect the fixed decimal proposal (it died after it was rejected in review).
A decimal number proposal is starting to move through the C and C++ committees. It is based on work done at IBM, and is designed to work as either library software, or with a hardware accelerator, IIRC. Thus anyone interested might want to become familiar with that proposal before doing any new decimal work. --Beman

On Sun, 12 Dec 2004 21:18:30 -0500, Beman Dawes wrote
and resurrect the fixed decimal proposal (it died after it was rejected in review).
A decimal number proposal is starting to move through the C and C++ committees. It is based on work done at IBM, and is designed to work as either library software, or with a hardware accelerator, IIRC. Thus anyone interested might want to become familiar with that proposal before doing any new decimal work.
I assume this is the work from here?

http://www2.hursley.ibm.com/decimal/
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1016.htm

Problem I see is that IBM wants a $3000 license fee to use this. I thought things that went into the standard had to be available on an 'any-use' license like boost -- or are they planning on changing the license if this is accepted?

Jeff

At 10:06 PM 12/12/2004, Jeff Garland wrote:
On Sun, 12 Dec 2004 21:18:30 -0500, Beman Dawes wrote
and resurrect the fixed decimal proposal (it died after it was rejected in review).
A decimal number proposal is starting to move through the C and C++ committees. It is based on work done at IBM, and is designed to work as either library software, or with a hardware accelerator, IIRC. Thus anyone interested might want to become familiar with that proposal before doing any new decimal work.
I assume this is the work from here?
http://www2.hursley.ibm.com/decimal/ http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1016.htm
Yes, but also be sure to read Bill Plauger's critique of N1016: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1567.htm

Problem I see is that IBM wants a $3000 license fee to use this. I thought things that went into the standard had to be available on an 'any-use' license like boost -- or are they planning on changing the license if this is accepted?
I don't know but can find out. Do you have a reference that discusses the license? --Beman

On Mon, 13 Dec 2004 16:09:21 -0500, Beman Dawes wrote
Problem I see is that IBM wants a $3000 license fee to use this. I thought things that went into the standard had to be available on an 'any-use' license like boost -- or are they planning on changing the license if this is accepted?
I don't know but can find out. Do you have a reference that discusses the license?
If you follow the links on the IBM website to the download page you will see a license link which then asks for a $3000 payment :-( It's a sure way to limit adoption to only the most highly motivated.

BTW, what I saw on the IBM website was strictly a C library which would need a C++ wrapper -- there is a link to one of those that a non-IBM person has written. The C++ wrapper didn't look like it was up to boost standards, although it might be a nice starting point if all the licensing could get worked out...

Jeff

At 04:29 PM 12/13/2004, Jeff Garland wrote:
On Mon, 13 Dec 2004 16:09:21 -0500, Beman Dawes wrote
Problem I see is that IBM wants a $3000 license fee to use this. I thought things that went into the standard had to be available on an 'any-use' license like boost -- or are they planning on changing the license if this is accepted?
I don't know but can find out. Do you have a reference that discusses the license?
If you follow the links on the IBM website to the download page you will see a license link which then asks for a $3000 payment :-( It's a sure way to limit adoption to only the most highly motivated.
OK, I see it and have queried the committee to see whether the fee applies just to the IBM implementation, or would also apply to decimal TR implementors or users.
BTW, what I saw on the IBM website was strictly a C library which would need a C++ wrapper -- there is a link to one of those that a non-IBM person has written. The C++ wrapper didn't look like it was up to boost standards, although it might be a nice starting point if all the licensing could get worked out...
I gather from a quick reading of PJP's critique that there may be a lot of change before any proposal is actually accepted. IIRC, PJP is the TR project manager, so his views will presumably carry a lot of weight. --Beman

At 05:38 PM 12/13/2004, Beman Dawes wrote:
At 04:29 PM 12/13/2004, Jeff Garland wrote:
On Mon, 13 Dec 2004 16:09:21 -0500, Beman Dawes wrote
Problem I see is that IBM wants a $3000 license fee to use this. I thought things that went into the standard had to be available on an 'any-use' license like boost -- or are they planning on changing the license if this is accepted?
I don't know but can find out. Do you have a reference that discusses the license?
If you follow the links on the IBM website to the download page you will see a license link which then asks for a $3000 payment :-( It's a sure way to limit adoption to only the most highly motivated.
OK, I see it and have queried the committee to see whether the fee applies just to the IBM implementation, or would also apply to decimal TR implementors or users.
Apparently the IBM license fee applies to commercial users (whatever that means) of their decNumber C library. Thus Boost wouldn't want to provide a C++ wrapper that required use of the IBM decNumber C library. PJP points out that the IBM decNumber library "is not the subject of the decimal TR." --Beman

On Sun, 12 Dec 2004 13:47:54 -0700, "Jonathan Turkanis" <technews@kangaroologic.com> wrote:
"When used with a limited precision integer type, the rational class suffers from many of the precision issues which cause difficulty with floating point types. While it is likely that precision issues will not affect simple uses of the rational class, users should be aware that such issues exist."
maybe i'm not describing the problem clearly (or misunderstand something): every other finite precision type rounds or truncates the result, while rational<> returns unrelated garbage when it is not exactly representable

it is important that we are not talking about overflow (results out of the representable range), but "sideflow", when the result falls between two representable numbers

to make sure this doesn't happen, you have to emulate the behaviour of rational<> (check during runtime whether the intermediate dens and nums are relatively prime etc); this checking has the same complexity as actually doing the arithmetic = to use the class you have to reimplement it

to me this spells: boost::rational in its current form cannot be used with any finite precision type, built-in or UDT, ever, for any purpose, and people trying to use it might get scared away from boost itself

br, andras

Andras Erdei wrote:
On Sun, 12 Dec 2004 13:47:54 -0700, "Jonathan Turkanis" <technews@kangaroologic.com> wrote:
"When used with a limited precision integer type, the rational class suffers from many of the precision issues which cause difficulty with floating point types. While it is likely that precision issues will not affect simple uses of the rational class, users should be aware that such issues exist."
maybe i'm not describing the problem clearly (or misunderstand something):
every other finite precision type rounds or truncates the result, while rational<> returns unrelated garbage when it is not exactly representable
Depending on your requirements, the rounded or truncated results could be considered "unrelated garbage."
it is important that we are not talking about overflow (results out of the representable range), but "sideflow", when the result falls between two representable numbers
I understand the distinction you are making. But I don't yet see why it's necessarily worse than overflow.
to make sure this doesn't happen, you have to emulate the behaviour of rational<> (check during runtime whether the intermediate dens and nums are relatively prime etc); this checking has the same complexity as actually doing the arithmetic = to use the class you have to reimplement it
This sounds like an argument for a checked_rational type, not for automatic rounding.
to me this spells: boost::rational in its current form cannot be used with any finite precision type, built-in or UDT, ever, for any purpose, and people trying to use it might get scared away from boost itself
Boost.Rational has been around for almost 5 years now, and I don't believe it's had this effect.
br, andras
Jonathan

On Sun, 12 Dec 2004 18:10:58 -0700, Jonathan Turkanis <technews@kangaroologic.com> wrote:
This sounds like an argument for a checked_rational type, not for automatic rounding.
actually adding a "rounded" flag (e.g. stored in the sign bit of the denominator) is trivial and does not increase the complexity of the rounding code

the thing is: rounding is a natural thing to rationals, not something superimposed on the arithmetic -- the gcd() function actually computes the sequence of more and more precise approximations until it reaches the exact number -- and i feel that with throwing this away we get rid of the very reason to use finite precision rationals

(finite precision, fixed or floating slash) rational numbers are in many ways superior to (finite precision, fixed or) floating point systems, for example:

- rationals are radix-independent, while floats reflect the interaction of the radix and the number rather than the properties of the represented number itself (thus adding another layer of anomalies on top of the limitations inherent in finite precision)

- of any two rational systems one is a refinement of the other (portability, here we come!), while different radix-based systems are simply different (and you are always using two of them, even on your PC: decimal for i/o and binary for computations)

- rounding prefers simple fractions (there is a mathematical definition that nicely matches the intuitive concept) -- so even when your intermediate values are approximations, you will still automagically get the exact result if it was 1/3
Depending on your requirements, the rounded or truncated results could be considered "unrelated garbage."
that is true: but then i should not be using any kind of finite precision arithmetic in the first place

afaik cryptography and banking are the only such fields; everywhere else the goal is not to get exact results, but to get usable results

usually your inputs aren't exact in the first place (no point in calculating the average salary of the people in your survey to 20 decimal places, as the individual salaries given are already rounded estimates), and even in cases where they are, it is pointless to calculate the length of the walls in the house you designed down to the atomic level

imho the usual problem is that your intermediate values are inconsistent: the corner points of the surfaces of your wall are not coplanar, so the roof becomes self-intersecting with a negative surface value
I understand the distinction you are making. But I don't yet see why it's necessarily worse than overflow.
overflow is the most natural consequence of using finite precision arithmetic. it is the easiest to understand, predict and prevent (in most applications you know in advance the magnitude of the numbers)

garbage instead of rounding is counter-intuitive, unpredictable and practically impossible to prevent; if 1.0/3.0 resulted in a negative number instead of 0.333..., no-one would be happy with floats
Boost.Rational has been around for almost 5 years now, and I don't believe it's had this effect.
do you know anyone who has been using it with built-ins?

(this problem surfaced the second day i started using it, and i have spent a lot of time looking for the bug in my code before checking the boost::rational implementation :O)

br, andras

Andras Erdei wrote:
On Sun, 12 Dec 2004 18:10:58 -0700, Jonathan Turkanis <technews@kangaroologic.com> wrote:
This sounds like an argument for a checked_rational type, not for automatic rounding.
actually adding a "rounded" flag (e.g. stored in the sign bit of the denominator) is trivial and does not increase the complexity of the rounding code
<snip discussion>

I agree with many of your points, except the main one: that Boost.Rational with built-in integral types is useless without rounding. One way to make the class more flexible would be to introduce a rounding policy as the second template parameter. Various rounding policies could include:

- no rounding, and no notification that the result of an operation is inexact
- no rounding, with an assert when the result of an operation is inexact
- no rounding, with an exception when the result of an operation is inexact
- round to the nearest representable value
- round towards zero

I think you are arguing that the second to the last is easy to compute, and should be the default. Right? From my point of view, it definitely can't be the default because this would change the behavior of existing code. However, I'm open to adding a rounding policy or adopting some other solution which would allow users to choose the other behaviors. Please feel free to offer suggestions or contribute code. The only thing I'm against is changing the default behavior, unless there is wide agreement that the current version is broken.
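[Editorial note: a hypothetical sketch of the rounding-policy idea; none of these names exist in Boost.Rational, and the round-to-nearest policy is left as a stub since the convergent-based rounding scheme is only described later in the thread.]

#include <stdexcept>

// The policy is invoked whenever an operation's exact result cannot be
// represented in the underlying integer type.
struct inexact_throw
{
    template <typename Int>
    static void on_inexact(Int& /*num*/, Int& /*den*/)
    {
        throw std::range_error("rational: result not exactly representable");
    }
};

struct inexact_round_nearest
{
    template <typename Int>
    static void on_inexact(Int& num, Int& den)
    {
        // would replace num/den with the best representable approximation,
        // e.g. via the euclid() convergent scheme shown later in the thread
        (void)num; (void)den;
    }
};

// The second template parameter selects the behaviour; the arithmetic
// operators would call Policy::on_inexact(num_, den_) when they detect an
// inexact result.
template <typename Int, typename Policy = inexact_throw>
class rational_with_policy
{
private:
    Int num_;
    Int den_;
};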
Boost.Rational has been around for almost 5 years now, and I don't believe it's had this effect.
do you know anyone who has been using it with built-ins?
No, I've only been the maintainer for a short time.
(this problem surfaced the second day i started using it, and i have spent a lot of time looking for the bug in my code before checking the boost::rational implementation :O)
I'm sorry about that. I think we're agreed that the warning in the documentation should be more prominent.
br, andras
Jonathan

On Tue, 14 Dec 2004 12:30:55 -0700, Jonathan Turkanis <technews@kangaroologic.com> wrote:
I agree with many of your points, except the main one: that Boost.Rational with built-in integral types is useless without rounding.
if you are really, really convinced that this is the case, there is a code snippet at http://www.ccg.hu/pub/src/rat.cpp -- it shows that the current finite precision boost::rational semantics can be implemented a lot more efficiently
- round to the nearest representable value
I think you are arguing that ... is easy to compute, and should be the default. Right?
yes br, andras

Andras Erdei wrote:
On Tue, 14 Dec 2004 12:30:55 -0700, Jonathan Turkanis <technews@kangaroologic.com> wrote:
I agree with many of your points, except the main one: that Boost.Rational with built-in integral types is useless without rounding.
if you are really, really convinced that this is the case,
That what is the case?
there is a code snippet at http://www.ccg.hu/pub/src/rat.cpp -- it shows that the current finite precision boost::rational semantics can be implemented a lot more efficiently
Thank you. I'll take a careful look at it as soon as I am a little less far behind on my current projects. Jonathan

On Wed, 15 Dec 2004 11:20:12 -0700, Jonathan Turkanis <technews@kangaroologic.com> wrote:
I agree with many of your points, except the main one: that Boost.Rational with built-in integral types is useless without rounding.
if you are really, really convinced that this is the case,...
That what is the case?
sorry: that rational with the current (non-rounding) finite semantics is o.k. for some purpose
I'll take a careful look at it as soon as I am a little less far behind on my current projects.
ok, let me know what you think when you have checked; also, when i have time, i'll try to extend it a bit more

br, andras

On Mon, Dec 13, 2004 at 01:51:20AM +0100, Andras Erdei wrote:
maybe i'm not describing the problem clearly (or misunderstand something):
every other finite precision type rounds or truncates the result, while rational<> returns unrelated garbage when it is not exactly representable
The garbage is not completely unrelated - it's caused by overflow. There is no rounding or truncating in this operation: int i = INT_MAX * INT_MAX; but the result is wrong.
it is important that we are not talking about overflow (results out of the representable range), but "sideflow", when the result falls between two representable numbers
I disagree - the problem is that an intermediate value overflows, not that the final result is not an integer.

Maybe boost::rational could be made to use a larger type for all built-in integral types. That way the intermediate result would be found using a higher precision than the final result, and the chances of the intermediate result overflowing would be lessened. For all signed integral types use long and for all unsigned use unsigned long. Could even use long long where available, but that might affect performance negatively.

jon

--
"Reality is that which, when you stop believing in it, doesn't go away." - Philip K. Dick
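[Editorial note: a rough sketch of that idea for the single case of adding two rational<short> values -- do the intermediate arithmetic in long long, reduce by the gcd, and only then decide whether the result fits. The function names are illustrative, not a proposed Boost interface.]

#include <iostream>
#include <limits>

// Plain Euclidean gcd on the wide intermediate type.
long long gcd_ll(long long a, long long b)
{
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a;
}

// Add n1/d1 + n2/d2 using long long intermediates; returns false if the
// reduced result still does not fit in short.
bool add_short_rationals(short n1, short d1, short n2, short d2,
                         short& num, short& den)
{
    long long n = static_cast<long long>(n1) * d2 + static_cast<long long>(n2) * d1;
    long long d = static_cast<long long>(d1) * d2;
    long long g = gcd_ll(n, d);
    if (g != 0) { n /= g; d /= g; }
    if (n > std::numeric_limits<short>::max() || n < std::numeric_limits<short>::min() ||
        d > std::numeric_limits<short>::max() || d < std::numeric_limits<short>::min())
        return false;                         // overflow is detected, not turned into garbage
    num = static_cast<short>(n);
    den = static_cast<short>(d);
    return true;
}

int main()
{
    short num, den;
    if (!add_short_rationals(191, 181, 197, 193, num, den))
        std::cout << "191/181 + 197/193 does not fit in rational<short>\n";
}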

On Sun, 12 Dec 2004 17:53:08 +0000 (UTC), Mickey Moore <mgmoore@austin.rr.com> wrote:
I will agree that it's much less obvious when you will get into trouble, since the numbers were of modest magnitude -- though still greater than sqrt(32768) which is a (very rough) indication of potential trouble.
this can be misleading: yes, you can add two rational<short>s together if all their nums and dens are smaller than 181 (a sufficient but not necessary condition); but to be able to add the results of two additions, the limit goes down to 13, then to 3, then to 1
In this case the obvious solution is rational<long>
with long, calculating the limits this way, you get 46340, 215, 14, 3, 1 -- even in a very simple program containing a few operators the arguments can only safely have the values zero and one...

br, andras

On 12/11/04 5:03 AM, "Andras Erdei" <aerdei@gmail.com> wrote: [The problem: It occurs when using boost::rational with limited-size integer types, typically the built-in ones. A basic rational operation may have the numerator and/or denominator overflowing, leading to a garbage "sideflowed" result, even though the true result is between representable values.] [SNIP]
fixing rational<> to work with finite precision integers so that for calculations like the above we get the best representable approximation (in this case 3197/1540 = 2.076, precise to 7 digits) is pretty straightforward -- the gcd() function implicitly calculates all representable approximations -- but would require rewriting the whole class
I would like to see precise directions of the algorithms that give the closest representable result (that have to try avoiding internal overflow).

--
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

On Mon, 20 Dec 2004 19:52:30 -0500, Daryle Walker <darylew@hotmail.com> wrote:
I would like to see precise directions of the algorithms that give the closest representable result (that have to try avoiding internal overflow).
i'm not sure what you mean by "have to try avoiding internal overflow": the algorithm is the Euclidean algorithm for computing the GCD, and it computes the best rational approximation when there is an overflow, but it does not avoid the overflow

euclid(n,m)
  b_-2 = n ; p_-2 = 0 ; q_-2 = 1
  b_-1 = m ; p_-1 = 1 ; q_-1 = 0
  loop
    a_i := b_i-2 / b_i-1
    p_i := p_i-1 * a_i + p_i-2
    q_i := q_i-1 * a_i + q_i-2
    if p_i or q_i overflows, return (p_i-1, q_i-1)
    b_i := b_i-2 % b_i-1
    if 0 == b_i, return (p_i, q_i)

with our previous example (191/181 + 197/193 = 72520/34933):

  i    a      b      p      q   p/q
 -2        72520      0      1
 -1        34933      1      0
  0    2    2654      2      1   2.00...
  1   13     431     27     13   2.076...
  2    6      68    164     79   2.07594...
  3    6      23   1011    487   2.075975...
  4    2      22   2186   1053   2.075973...
  5    1       1   3197   1540   2.07597402...
  6   22       0  72520  34933   2.075974007...  overflow, return (3197,1540)

* * * *

the math background (no proofs, simplified, from Matula-Kornerup: Foundations of Finite Precision Rational Arithmetic, Computing Suppl. 2, 1980, pp 85-111)

the convergents p_i/q_i and the algorithm have the properties:

- best rational approximation: if r/s != p/q and s <= q_i, then abs( r/s - p/q ) > abs( p_i/q_i - p/q )
- quadratic convergence: 1/( q_i * ( q_i+1 + q_i ) ) < abs( p_i/q_i - p/q ) <= 1/( q_i * q_i+1 )
- alternating convergence: p_0/q_0 < p_2/q_2 < ... <= p/q <= ... < p_3/q_3 < p_1/q_1
- if a convergent cannot be represented (overflow), then no subsequent convergent can be represented
- fixed point: iff p/q can be represented, then euclid(p/q) == (p,q)
- monotonic: if x < y then euclid(x) <= euclid(y)
- median-rounding: if a value falls between two representable numbers, the median of those is the splitting point for the rounding; the bounds are rounded towards the simpler rational
- with rational<n> the rounding error is at most 1/n, but usually it's around 1/n^2; the wide rounding intervals belong to simpler fractions (where q is small compared to n)

with n = 2^31-1 (a 32 bit int) the probability of rounding errors:

P( error > 1e-18  ) < 0.43
P( error > 1e-15  ) < 4.3e-4
P( error > 1e-11  ) < 4.3e-8
P( error > 1e-9.3 ) = 0

br, andras
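[Editorial note: for readers who prefer compilable code, a direct transcription of the euclid(n,m) pseudocode above, specialised to a short target. It assumes the input fraction is positive and small enough that the convergents fit in long, which holds for the 72520/34933 example.]

#include <iostream>
#include <limits>
#include <utility>

// Build the continued-fraction convergents of n/m and return the last one
// whose numerator and denominator both fit in short (the best representable
// approximation); if n/m itself fits, it is returned exactly.
std::pair<short, short> best_short_approximation(long n, long m)
{
    long b2 = n, p2 = 0, q2 = 1;       // state at step i-2
    long b1 = m, p1 = 1, q1 = 0;       // state at step i-1
    for (;;)
    {
        long a = b2 / b1;
        long p = p1 * a + p2;
        long q = q1 * a + q2;
        if (p > std::numeric_limits<short>::max() || q > std::numeric_limits<short>::max())
            return std::make_pair(static_cast<short>(p1), static_cast<short>(q1));
        long b = b2 % b1;
        b2 = b1; p2 = p1; q2 = q1;
        b1 = b;  p1 = p;  q1 = q;
        if (b1 == 0)                   // exact: n/m is representable
            return std::make_pair(static_cast<short>(p1), static_cast<short>(q1));
    }
}

int main()
{
    // 191/181 + 197/193 = 72520/34933 exactly; the best rational<short>
    // approximation is 3197/1540 (~2.0759740).
    std::pair<short, short> r = best_short_approximation(72520, 34933);
    std::cout << r.first << '/' << r.second << '\n';   // prints 3197/1540
}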

On 12/22/04 4:56 AM, "Andras Erdei" <aerdei@gmail.com> wrote:
On Mon, 20 Dec 2004 19:52:30 -0500, Daryle Walker <darylew@hotmail.com> wrote:
I would like to see precise directions of the algorithms that give the closest representable result (that have to try avoiding internal overflow).
i'm not sure what you mean by "have to try avoiding internal overflow": [TRUNCATE]
That you can't continue to use a variable from an intermediate result that has overflowed, since it's garbage. This means ordering your calculations so the answers always fit within the variable's type. (As you later explained, that doesn't preclude algorithms that take a new direction upon realizing an upcoming overflow.)

--
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com
participants (7)
- Andras Erdei
- Beman Dawes
- Daryle Walker
- Jeff Garland
- Jonathan Turkanis
- Jonathan Wakely
- Mickey Moore