
Daryle Walker wrote:
Then how would you implement them?
Here is a very naive implementation, just to illustrate the concept: use a vector where each element represents a digit, and store the index where the integer part ends and the fractional part begins. E.g., to store 186.0007, the vector will hold: 1, 8, 6, 0, 0, 0, 7. The index where the integer part ends is 2.
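A minimal sketch of that layout in C++ (the names here are illustrative, not a proposal):

    #include <cstddef>
    #include <vector>

    // One decimal digit per element, most significant first, plus the
    // index of the last integer digit. 186.0007 becomes
    // digits = {1, 8, 6, 0, 0, 0, 7} with int_end = 2.
    struct naive_decimal {
        std::vector<int> digits;
        std::size_t int_end;
    };

    naive_decimal example() {
        return naive_decimal{ {1, 8, 6, 0, 0, 0, 7}, 2 };
    }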
You missed the point of my question. Physically, your assertion is true (since an irrational result would need infinite memory to hold it). However, many of the standard functions conceptually return irrational results. We simply get an approximation returned. The precision of the approximation is automatically determined by the properties of the built-in numeric type. What would be the limitation on your version of these functions? Would you let a single result fill up as much memory as possible? If not, what's the cut-off? Would the cut-off be fixed or determined at run-time (noting that the latter could change the function signature by requiring an extra parameter for the cut-off limit's data)?
A call to something like boost::set_precision(150) would give you 150 decimal digits of precision in all your calculations; boost::set_precision(200) would give you 200. That is the entire point of any "arbitrary precision library." If you have never used one, take a look at NTL or something similar to get an idea of how it works.
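For a concrete feel, this is roughly what it looks like with NTL's RR class (from memory; see the RR.txt documentation linked at the end for the exact interface):

    #include <iostream>
    #include <NTL/RR.h>
    using namespace NTL;

    int main() {
        // RR's working precision is set globally, in bits; at ~3.33 bits
        // per decimal digit, 500 bits gives roughly 150 digits.
        RR::SetPrecision(500);
        RR::SetOutputPrecision(150);  // decimal digits used when printing

        // Every subsequent computation carries that precision.
        std::cout << SqrRoot(to_RR(2)) << "\n";
        return 0;
    }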
I don't think it can be a simple disabling flag. It changes the semantics of the type and its functions.
No, it doesn't. You would use separate storage for the error-range data.
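Something like this sketch shows the idea (double stands in for the arbitrary-precision type): the error bound rides alongside the value, so the value type itself keeps its ordinary semantics.

    // Sketch only: double is a placeholder for the arbitrary-precision type.
    struct tracked_real {
        double value;  // the number itself, unchanged semantics
        double error;  // radius of the interval [value - error, value + error]
    };

    // Error bounds combine separately from the values; for addition,
    // the radii simply add.
    tracked_real add(tracked_real a, tracked_real b) {
        return tracked_real{ a.value + b.value, a.error + b.error };
    }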
Maybe. And remove from the front too. These would be needed for digit-shift operations (<< and >>).
I doubt that those operations will be implemented. Bit-shift operators don't have much meaning for non-built-in types, and faking them would be prohibitively expensive. As a side point, remember that there will be no decimal "digits": numbers will be stored in base numeric_limits<unsigned int>::max() + 1, i.e., one "digit" per machine word.
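To make that concrete, here is a minimal sketch of word-based storage (illustrative names, not a proposed interface): each vector element is one "digit" in base 2^32.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Least-significant limb first: 2^32 + 5 is stored as {5, 1}.
    using limb = std::uint32_t;
    using magnitude = std::vector<limb>;

    // Add two magnitudes with carry propagation -- the word-level
    // analogue of schoolbook addition.
    magnitude add(const magnitude& a, const magnitude& b) {
        magnitude r;
        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
            std::uint64_t sum = carry;
            if (i < a.size()) sum += a[i];
            if (i < b.size()) sum += b[i];
            r.push_back(static_cast<limb>(sum));  // keep the low 32 bits
            carry = sum >> 32;                    // the rest carries over
        }
        return r;
    }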
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
I think they're all Euclid-based. (They can't assume that the numeric type is binary-based.)
No. The binary gcd algorithm. See Knuth, "Seminumerical Algorithms" (The Art of Computer Programming, Vol. 2).
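For reference, the binary algorithm (Knuth's Algorithm B) replaces division with shifts and subtraction, which map directly onto limb-level operations. A sketch for built-in unsigned integers:

    #include <utility>

    unsigned gcd_binary(unsigned a, unsigned b) {
        if (a == 0) return b;
        if (b == 0) return a;
        // Factor out the common power of two.
        int shift = 0;
        while (((a | b) & 1u) == 0) { a >>= 1; b >>= 1; ++shift; }
        // Strip remaining factors of two from a; gcd is unaffected.
        while ((a & 1u) == 0) a >>= 1;
        do {
            while ((b & 1u) == 0) b >>= 1;
            if (a > b) std::swap(a, b);  // keep a <= b
            b -= a;                      // b - a is even, loop continues
        } while (b != 0);
        return a << shift;               // restore the common power of two
    }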
7) It should work well with other Boost libraries where possible.
8) Divide-by-zero errors for integers should be handled with an exception (a sketch follows this list).
9) Precision for rational numbers may be set as a static member variable, or it may not. In the second case, expressions involving rational numbers of different.
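A minimal sketch of point 8, with a hypothetical wrapper standing in for the real integer class:

    #include <stdexcept>

    // Hypothetical type: long stands in for the arbitrary-precision
    // representation.
    struct checked_int {
        long v;
    };

    checked_int operator/(checked_int a, checked_int b) {
        if (b.v == 0)
            throw std::domain_error("integer division by zero");
        return checked_int{ a.v / b.v };
    }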
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
See top.
But would it really be a cost savings?
Read this and get back to me: http://www.shoup.net/ntl/doc/RR.txt

Joel Eidsath