
On 9/5/05 8:21 PM, "Joel Eidsath" <jeidsath@gmail.com> wrote:
Daryle Walker wrote:
[SNIP]
You missed the point of my question. Physically, your assertion is true (since an irrational result would need infinite memory to hold it). However, many of the standard functions conceptually return irrational results. We simply get an approximation returned. The precision of the approximation is automatically determined by the properties of the built-in numeric type. What would be the limitation on your version of these functions? Would you let a single result fill up as much memory as possible? If not, what's the cut-off? Would the cut-off be fixed or determined at run-time (noting that the latter could change the function signature by requiring an extra parameter for the cut-off limit's data)?
A call to something like boost::set_precision(150) will give you 150 decimal digit precision in all your calculations. A call to boost::set_precision(200) would give you 200 decimal digit precision. That is the entire point of any "arbitrary precision library." If you have never used one, try taking a look at NTL or something to get an idea of how it works.
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
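To make the model concrete, here is a minimal sketch of what such a run-time cut-off could look like; set_precision(), big_float, and the namespace are invented names for illustration, not an actual Boost (or NTL) interface:

    #include <cstddef>

    namespace sketch {

    // One global knob, consulted by every operation that must round.
    // (Hypothetical; real libraries such as NTL hang this off the
    // floating type instead, e.g. RR::SetPrecision, in bits.)
    static std::size_t current_precision = 50;   // decimal digits

    void set_precision(std::size_t digits) { current_precision = digits; }

    class big_float {
        // sqrt() cannot return its exact (often irrational) result,
        // so it computes only enough digits to honor
        // current_precision and rounds the rest away.
        friend big_float sqrt(const big_float& x);
        // ...
    };

    }  // namespace sketch

    // Usage: sketch::set_precision(150);  // later results: 150 digits
    //        sketch::set_precision(200);  // now: 200 digits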
I don't think it can be a simple disabling-flag. It changes the semantics of the type and its functions.
No, it doesn't. You would use separate storage for the error-range data.
Is this separate from the precision data stored in each number? If you don't change the signature of each standard math function to include a parameter for a disable-flag or error-range, then you need to implement that state as a global (with all the disadvantages globals have).
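For what it's worth, the two options look like this in outline; every name below is hypothetical:

    // Option A: extra parameter.  Every signature changes, and so
    // does every call site.
    struct error_range { double lo, hi; };
    class big_float;
    big_float sin(const big_float& x, error_range& err);

    // Option B: a global (or thread-local) side channel.  Signatures
    // stay put, but the usual costs of globals follow: hidden
    // coupling, re-entrancy and threading hazards, and action at a
    // distance between unrelated calls.
    extern thread_local error_range last_error;
    big_float sin(const big_float& x);   // updates last_error as a side effect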
Maybe. And remove from the front too. These would be needed for digit-shift operations (<< and >>).
I doubt that those operations will be implemented. Bit-shift operators don't have much meaning for non-built-in types, and faking it would be prohibitively expensive. As a side point, remember that there will be no "digits": numbers will be stored in base numeric_limits<unsigned int>::max().
The operators would shift by digit places. Your precision APIs already assume that your numbers have radix-10 place-value semantics, so the shift operators would work in radix 10 too (i.e., multiplying or dividing by powers of 10). But, as you said, if you don't actually implement the numbers that way, and instead use something like an INT_MAX radix, then faking it would be expensive.
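A sketch of the cost being described, with invented names (big_int, mul_by_small); the limbs are base 2^32, so a decimal-digit shift has no limb-level shortcut:

    #include <cstdint>
    #include <vector>

    class big_int {
        std::vector<std::uint32_t> limbs_;   // little-endian, base 2^32
    public:
        // Multiply in place by a small factor: one pass over the limbs.
        void mul_by_small(std::uint32_t v) {
            std::uint64_t carry = 0;
            for (auto& limb : limbs_) {
                std::uint64_t t = std::uint64_t(limb) * v + carry;
                limb  = std::uint32_t(t);
                carry = t >> 32;
            }
            if (carry) limbs_.push_back(std::uint32_t(carry));
        }

        // "x <<= n" as a decimal-digit shift means x *= 10^n.  With
        // base-2^32 limbs there is no cheap shortcut, so it costs n
        // full passes (or one multiply by a precomputed big 10^n).
        big_int& operator<<=(unsigned n) {
            while (n--) mul_by_small(10);
            return *this;
        }
    };

    // Contrast: with a base-10 (or base-10^9) representation the same
    // shift is just a whole-limb move, essentially a memmove.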
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
I think they're all Euclid-based. (They can't assume that the numeric type is binary-based.)
No. The binary GCD algorithm. See Knuth, "Seminumerical Algorithms."
Did you think I mistakenly said "binary-based" when I meant "binary GCD"? I didn't; they mean different things. I have the book, and I've read the algorithm. Even though the binary GCD algorithm works for any representation, it's optimal for radix-2 representations, especially on computers with bit-level operations. (If a type matches that, the algorithm can be done with shifts and masks.) I don't know whether binary GCD stays more efficient when you can't assume the numeric type uses a radix-2 representation. (Also, I didn't know of the binary GCD algorithm when I wrote the code.)
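For reference, the algorithm in question, sketched over built-in integers; a bignum version would replace each shift with a limb-level operation, which is exactly where the representation matters:

    #include <utility>

    // Binary GCD (Stein's algorithm; Knuth, vol. 2, sec. 4.5.2):
    // only shifts, subtraction, and parity tests -- cheap precisely
    // when the representation is radix 2.
    unsigned long binary_gcd(unsigned long u, unsigned long v) {
        if (u == 0) return v;
        if (v == 0) return u;

        // Factor out the common power of two.
        int shift = 0;
        while (((u | v) & 1) == 0) { u >>= 1; v >>= 1; ++shift; }

        while ((u & 1) == 0) u >>= 1;      // u is now odd
        do {
            while ((v & 1) == 0) v >>= 1;  // make v odd
            if (u > v) std::swap(u, v);
            v -= u;                        // v becomes even
        } while (v != 0);

        return u << shift;
    }

    // For a big-int type stored in, say, base 10^9, "u >>= 1" is no
    // longer a single machine shift -- the efficiency caveat above.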
7) It should work well with other Boost libraries where possible.
8) Divide-by-zero errors for integers should be handled with an exception.
9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different [...]
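Requirement 8 is straightforward to sketch; big_int and its internals here are invented for illustration:

    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    class big_int {
        std::vector<std::uint32_t> limbs_;   // assumed: empty == zero
    public:
        bool is_zero() const { return limbs_.empty(); }

        // Check the divisor up front; signal the error with an
        // exception rather than undefined behavior.
        big_int& operator/=(const big_int& rhs) {
            if (rhs.is_zero())
                throw std::domain_error("big_int: division by zero");
            // ... actual long division elided ...
            return *this;
        }
    };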
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
See top.
[The top was SNIPped.] I didn't see any advantage or disadvantage there.
But would it really be a cost savings?
Read this and get back to me: http://www.shoup.net/ntl/doc/RR.txt
I didn't see a cost savings (or penalty) there. It does something akin to the built-in floating-point types on common platforms (which are IEEE-754), with a binary mantissa and exponent. It's a tradeoff: giving up arbitrary denominators for quicker manipulation.

-- 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com