
Andras Erdei wrote:
> my guess would be "something closer to 1/2 than calculating with floating point arithmetic using the same number of bits (or maybe even the same number of CPU cycles)"
Wrong. Given n bits, one can represent at most 2^n numbers. (In the case of boost::rational, substantially fewer, since 1/2 = 4/8 and so on.) That simply translates to a set of points on the number line. If one were to choose a random real number, which would be smaller on average: its distance to the nearest boost::rational representation, or its distance to the nearest n-bit floating point representation? Since n-bit floats waste no bit patterns on duplicates (they are evenly spaced within each exponent range), the answer is that with n bits you can, on average, get closer to an arbitrary real number with a floating point representation than with boost::rational. (The first sketch in the postscript counts the wasted encodings.)

Turning to the issue of CPU cycles: your previous email has several problems. For one thing, arbitrary-precision floats are not constructed from a numerator and denominator; you can construct them from a string of digits, for example. Further, you neglect the fact that gcd calls have to be made after every arithmetical operation, not simply on construction (see the second sketch below). That is a huge efficiency hit, not a small one. Lastly, you didn't compare worst cases.

So the problems with using boost::rational instead of an arbitrary-precision floating point class are these:

1) It takes more memory on average.

2) It's not just a little slower, it's far slower, and the worst case quickly becomes intractable (the third sketch shows how fast exact denominators grow).

3) Most importantly, it's nearly useless for science and engineering, because functions like sin and sqrt take rational arguments to irrational values, so you cannot evaluate them exactly. (You can hack around this with truncated-series approximations, but you wind up redoing all the work of an arbitrary-precision float class; see the last sketch.)

It is quite true that there is a problem space where boost::rational is the most appropriate solution. It's just not as big as you seem to think it is.

Joel Eidsath
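P.S. A few standalone C++17 sketches to make the above concrete. All the code, names, and constants below are my own illustrative choices, not boost::rational's actual implementation.

First, counting the wasted encodings. Take a toy rational with an 8-bit signed numerator and an 8-bit denominator (zero excluded): only pairs already in lowest terms name distinct values, and only roughly 61% of pairs (about 6/pi^2 of them) are in lowest terms:

    #include <cstdlib>   // std::abs
    #include <iostream>
    #include <numeric>   // std::gcd (C++17)

    int main()
    {
        long long distinct = 0;
        for (int p = -128; p <= 127; ++p)
            for (int q = 1; q <= 255; ++q)
                if (std::gcd(std::abs(p), q) == 1)  // count each value once,
                    ++distinct;                     // in lowest terms
        std::cout << distinct << " distinct values out of "
                  << 256 * 255 << " (numerator, denominator) pairs\n";
    }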
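Second, the per-operation gcd. A rational type that keeps its lowest-terms invariant has to reduce after every addition or multiplication, not just at construction, whereas a hardware float add is a single instruction. A minimal sketch of what a normalizing addition must do (the Rat struct here is hypothetical, though it keeps the same invariant boost::rational does):

    #include <numeric>   // std::gcd (C++17)

    struct Rat { long long num, den; };  // invariant: den > 0, lowest terms

    Rat add(Rat a, Rat b)
    {
        // p/q + r/s = (p*s + r*q)/(q*s): the cross products can overflow
        // long before the *reduced* result would.
        Rat r{ a.num * b.den + b.num * a.den, a.den * b.den };
        long long g = std::gcd(r.num, r.den);  // a full gcd loop on every op
        r.num /= g;
        r.den /= g;
        return r;
    }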
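Third, the worst case. Summing 1/1 + 1/2 + ... + 1/n exactly pushes the reduced denominator toward lcm(1, ..., n), which grows roughly like e^n; 64-bit integers give out around n = 43, while a double simply rounds each term and carries on:

    #include <cstdint>
    #include <iostream>
    #include <numeric>   // std::gcd (C++17)

    int main()
    {
        std::int64_t num = 0, den = 1;           // running sum, lowest terms
        for (std::int64_t k = 1; k <= 40; ++k) { // k ~ 43 overflows 64 bits
            num = num * k + den;                 // num/den += 1/k
            den *= k;
            std::int64_t g = std::gcd(num, den);
            num /= g;
            den /= g;
        }
        std::cout << num << "/" << den << "\n";  // denominator already ~5e15
    }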
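Last, the series hack from point 3. Computing sin(1/2) by truncating its Taylor series in exact rationals does work, but every further term multiplies the denominator by 4*(2k+2)*(2k+3), so extending the five-term sketch below to a seventh term would already overflow 64-bit arithmetic; to go further you must start rebuilding arbitrary-precision arithmetic:

    #include <cstdint>
    #include <iostream>
    #include <numeric>   // std::gcd (C++17)

    int main()
    {
        // sin(x) = x - x^3/3! + x^5/5! - ..., evaluated at x = 1/2
        std::int64_t num = 0, den = 1;       // running sum, lowest terms
        std::int64_t t_num = 1, t_den = 2;   // current term, starts at x
        for (int k = 0; k < 5; ++k) {
            num = num * t_den + t_num * den; // sum += term
            den *= t_den;
            std::int64_t g = std::gcd(num, den);
            num /= g;
            den /= g;
            t_num = -t_num;                          // term *= -x^2 /
            t_den *= 4 * (2 * k + 2) * (2 * k + 3);  //   ((2k+2)(2k+3))
        }
        std::cout << num << "/" << den << "\n";
        // prints 12724951/26542080, ~0.47942554, off by ~1e-11
    }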