
On 9/14/05, Joel Eidsath <jeidsath@gmail.com> wrote:
> Given n bits one can only represent 2^n numbers at most. (In the case of boost::rational, substantially less, since 1/2 = 4/8 and so on.)
"Substantially less" is not as bad as one might think: the loss is about one bit, depending on n, but always less than two bits. (Asymptotically, the fraction of num/den pairs that is already in lowest terms tends to 6/pi^2 ~ 0.61, so the loss is about 0.72 bits.)
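(As an empirical check of that density claim, here is a minimal standalone sketch, not boost code: it counts how many num/den pairs in a range are already in lowest terms.)

    #include <cstdio>

    static int gcd(int a, int b) { while (b) { int t = a % b; a = b; b = t; } return a; }

    int main() {
        const int N = 1000;
        long coprime = 0, total = 0;
        for (int num = 1; num <= N; ++num)
            for (int den = 1; den <= N; ++den) {
                ++total;
                if (gcd(num, den) == 1)
                    ++coprime;  // this pair is already normalized
            }
        // prints roughly 0.608 = 6/pi^2; log2(pi^2/6) ~ 0.72 bits lost
        std::printf("%f\n", (double)coprime / (double)total);
        return 0;
    }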
> That simply translates to a set of points on the number line. If one were to choose a random real number, which would be less on average: its distance to the nearest boost::rational representation or its distance to the nearest n-bit floating point representation? Since the n-bit floats are equally spaced
[I think fixed point and fixed slash are (asymptotically) uniformly distributed, while floating point and floating slash are (asymptotically) log-uniformly distributed.]
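(To illustrate the bracketed claim on the float side, a minimal sketch using only std::nextafter: the absolute gap between adjacent doubles grows with the magnitude, while the relative gap stays roughly constant.)

    #include <cmath>
    #include <cstdio>

    int main() {
        // gap to the next representable double at different magnitudes:
        // absolute spacing grows with the exponent (log-uniform density),
        // relative spacing stays near machine epsilon
        for (double x = 1.0; x <= 1.0e6; x *= 10.0) {
            double gap = std::nextafter(x, 2.0 * x) - x;
            std::printf("x = %9.1f   gap = %g   gap/x = %g\n", x, gap, gap / x);
        }
        return 0;
    }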
> and all representations are unique, the answer is that with n bits, you can get closer to an arbitrary real number with a floating point representation than with boost::rational.
The distribution of numbers representable as num/den pairs is arithmetic-friendly, and that of numbers representable as floating point is arithmetic-unfriendly, but the explanation is very long and math-intensive, so I am cornered; the best I can offer here is an argument-from-authority "proof" of one particular difference. You were referring to Knuth, so let's quote him:

"For example, suppose we are doing [fixed precision rational arithmetic] so that the representable numbers (u/u') have -128 < u < 128 and 0 <= u' < 255. This isn't much precision, but it is enough to give us a feel [...] Suppose we have a calculation that would take the overall form 22/7 = 314/159 + 1300/1113 if we were working in exact rational arithmetic, but the intermediate quantities have had to be rounded to representable numbers. In this case 314/159 would round to (79/40) and 1300/1113 would round to (7/6). The rounded terms sum to 79/40 + 7/6 = 377/120, which rounds to (22/7); so we have obtained the correct answer even though three roundings were required. This example was not specially contrived. When the answer to a problem is a simple fraction, [fixed precision rational arithmetic] tends to make the intermediate rounding errors cancel out."

-- D.E. Knuth: The Art of Computer Programming, Volume 2, Section 4.5.1

(In contrast, floating point tends to accumulate rounding errors.)
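Knuth's example can be replayed mechanically. The sketch below is not boost::rational; it brute-forces the fixed-slash system from the quote (numerators below 128, denominators below 255, positive values only) with mediant rounding, which as far as I know is the rounding rule Knuth uses for slash arithmetic: x goes to its lower representable neighbour if it lies below the neighbours' mediant, to the upper neighbour otherwise.

    #include <cstdio>

    static long gcd(long a, long b) { while (b) { long t = a % b; a = b; b = t; } return a; }

    // round xn/xd to a representable fraction p/q, 0 < p < 128,
    // 0 < q < 255, by mediant rounding (brute force; demo only)
    static void knuth_round(long xn, long xd, long& rn, long& rd) {
        long ln = 0, ld = 1;    // best lower neighbour found so far
        long un = 128, ud = 1;  // crude initial upper bound, fine for small x
        for (long q = 1; q < 255; ++q)
            for (long p = 1; p < 128; ++p) {
                if (p * xd <= xn * q) {                      // p/q <= x
                    if (p * ld > ln * q) { ln = p; ld = q; }
                } else {                                     // p/q > x
                    if (p * ud < un * q) { un = p; ud = q; }
                }
            }
        long g = gcd(ln, ld); ln /= g; ld /= g;
        g = gcd(un, ud); un /= g; ud /= g;
        // mediant test: below the mediant -> lower neighbour, else upper
        if (xn * (ld + ud) < (ln + un) * xd) { rn = ln; rd = ld; }
        else                                 { rn = un; rd = ud; }
    }

    int main() {
        long n1, d1, n2, d2, n3, d3;
        knuth_round(314, 159, n1, d1);      // rounds to 79/40
        knuth_round(1300, 1113, n2, d2);    // rounds to 7/6
        long sn = n1 * d2 + n2 * d1;        // exact sum of the rounded terms,
        long sd = d1 * d2;                  // 377/120 after reduction
        long g = gcd(sn, sd); sn /= g; sd /= g;
        knuth_round(sn, sd, n3, d3);        // rounds to 22/7
        std::printf("%ld/%ld + %ld/%ld -> %ld/%ld\n", n1, d1, n2, d2, n3, d3);
        return 0;
    }

It prints 79/40 + 7/6 -> 22/7, matching all three roundings in the quote.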
> Getting to the issue of CPU cycles. Your previous email has several problems. For one thing, arbitrary floating points are not constructed with a numerator and denominator. You can construct them with a string of digits, for example.
[To my understanding you were referring to the boost::rational ctor that takes a num and a den, so I compared that kind of initialisation, but] in theory, if a float or a rational is constructed from a num and a den, both take O(log n) steps; if they are constructed from a string of digits, both still take O(log n) steps; and if the float is constructed from a precalculated float representation and the rational from a precalculated normalized num and den, both take O(1).

The practical difference is that for built-in floats you have compiler and hardware support, and construction happens at compile time, while for user-defined rationals you do not, and it happens at run time, in software. To an extent, the compiler support can be emulated; but we are unlikely to ever have hardware support for rationals (there are several articles about efficient hardware implementations of rational arithmetic, but I don't know of any PCs shipped with such hardware :O)
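To illustrate "the compiler support can be emulated, to an extent" with today's C++ (constexpr, which postdates this mail): a sketch where normalization runs at compile time, so a num/den construction costs nothing at run time. The type CtRational is made up for the example and handles positive values only.

    #include <cstdio>

    constexpr long ct_gcd(long a, long b) { return b == 0 ? a : ct_gcd(b, a % b); }

    struct CtRational {
        long num, den;
        constexpr CtRational(long n, long d)
            : num(n / ct_gcd(n, d)), den(d / ct_gcd(n, d)) {}
    };

    int main() {
        constexpr CtRational half(4, 8);  // gcd evaluated at compile time
        std::printf("%ld/%ld\n", half.num, half.den);  // prints 1/2
        return 0;
    }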
> Further, you neglect the fact that gcd calls have to be made after every arithmetical operation, not simply on construction. This represents a huge efficiency hit, not a small efficiency hit.
That's why I said you should have compared addition, not construction, as addition is the operation whose time complexity really differs (and also the reciprocal, where the advantage goes in the opposite direction). But there are other considerations: users may be happier getting the correct result slightly slower than getting a bad answer a bit faster.
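For concreteness, this is the addition in question. The sketch follows Knuth's algorithm from TAOCP 4.5.1, with two gcd calls per addition arranged to keep the intermediate products small; as far as I know, boost::rational's operator+= works along the same lines.

    #include <cstdio>

    static long gcd(long a, long b) { while (b) { long t = a % b; a = b; b = t; } return a; }

    // un/ud + vn/vd with two gcds per addition (Knuth, TAOCP 4.5.1)
    static void add(long un, long ud, long vn, long vd, long& rn, long& rd) {
        long d1 = gcd(ud, vd);
        long t  = un * (vd / d1) + vn * (ud / d1);
        long d2 = gcd(t, d1);
        rn = t / d2;
        rd = (ud / d1) * (vd / d2);
    }

    int main() {
        long n, d;
        add(79, 40, 7, 6, n, d);         // the middle step of the Knuth quote
        std::printf("%ld/%ld\n", n, d);  // prints 377/120
        return 0;
    }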
> Lastly, you didn't compare worst cases.
?
> 1) It takes more memory on average.
Or less.
> 2) It's not just a little slower, it's far slower,
Addition is the only operation with worse time complexity (I assume that is what "far slower" means), and I think it is up to the user whether he can live with that compromise.
> and worst case quickly gets uncomputable.
?
> 3) And most importantly: It's nearly useless for science and engineering because you cannot use infinite series functions like sin, sqrt, and so on. (You can hack it with series approximations, but you wind up redoing all the work of an arbitrary float class.)
Swap the words "float" and "rational" and the above sentence still stands :O)
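To make the swap concrete: series-style computations do work with exact rationals, and the known price is the growth of the terms. A sketch with Newton's method for sqrt(2), x <- (x + 2/x)/2, on exact num/den pairs:

    #include <cstdio>

    static long long gcd(long long a, long long b) { while (b) { long long t = a % b; a = b; b = t; } return a; }

    int main() {
        long long n = 3, d = 2;  // start at 3/2
        for (int i = 0; i < 4; ++i) {
            // (n/d + 2d/n) / 2 = (n*n + 2*d*d) / (2*n*d), computed exactly
            long long nn = n * n + 2 * d * d;
            long long nd = 2 * n * d;
            long long g = gcd(nn, nd);
            n = nn / g; d = nd / g;
            std::printf("x = %lld/%lld ~ %.15f\n", n, d, (double)n / d);
        }
        return 0;
    }

The convergence is quadratic, but the numerator and denominator roughly square at every step (3/2, 17/12, 577/408, 665857/470832, ...), which is exactly the blow-up behind the "worst case" remark above.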
> It is quite true that there is a problem space where boost::rational is the most appropriate solution. It's just not as big as you seem to think it is.
That could be right.

br,
andras