
[I'm beginning a new thread, as the original subject doesn't really reflect the content of my message below.] On 3/9/2011 6:13 AM, Chad Nelson wrote:
On Mon, 7 Mar 2011 17:15:58 -0800 Steven Watanabe <watanabesj@gmail.com> wrote: [...]
Thoughts on separating out the arithmetic algorithms: [...] * I think this is a much harder problem than those demanding this feature seem to think.
You're right, Steven, it's likely not an easy task; there hasn't been much discussion about it, so there are a lot of unknowns. I guess we'll see how hard it is once we start trying.
Just off the top of my head:
- Memory management can't just be ignored. Unlike the STL algorithms, the amount of space required for the results is dependent on the values involved. It isn't a direct function of the size of the input. Not to mention that all but the most basic functions need to create intermediate results.
At least for the basic arithmetic operations, we can give a reasonable upper bound on the size of the result given the sizes of the inputs. You can estimate the size of a sum or difference to within a carry digit (assuming no cancellation), and likewise for multiplication and division. For more complicated operations, yes, the unknown size of the result would complicate the interface. We might need to group algorithms into a number of different categories; for example, the following four:
- the size of the result is (nearly) precisely determined by the size of the inputs;
- the size of the result is bounded by (a function of) the size of the inputs;
- the size of the result can be determined from the inputs prior to constructing the result, with little or no loss in efficiency; and
- the size of the result is undetermined until it has been fully constructed.
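To make the "bounded by the size of the inputs" category concrete, here is a minimal sketch of the digit-count bounds for the basic operations, assuming a base-B magnitude stored least-significant-digit first. The function names are illustrative, not part of any proposed interface:

```cpp
#include <algorithm>
#include <cstddef>

// Upper bound on digits of a + b (or |a - b|): the larger operand
// plus one possible carry digit.
std::size_t add_bound(std::size_t a_digits, std::size_t b_digits) {
    return std::max(a_digits, b_digits) + 1;
}

// Upper bound on digits of a * b: the sum of the operands' digit counts.
std::size_t mul_bound(std::size_t a_digits, std::size_t b_digits) {
    return a_digits + b_digits;
}

// Upper bound on digits of the quotient a / b (b != 0).
std::size_t div_bound(std::size_t a_digits, std::size_t b_digits) {
    return a_digits >= b_digits ? a_digits - b_digits + 1 : 1;
}
```

An algorithm in this category could ask the result type for `add_bound(...)` digits up front and never need to reallocate.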
The design I see would require all such user-supplied magnitude types to have a reallocation function that the library can call. If that function doesn't provide the memory requested, the user gets only the lower bits of the correct result. The code for handling that is already in the library, for the fixed-length integers.
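As I read the description, something like the following sketch is meant; the names (`fixed_magnitude`, `resize`, `add_truncating`) and the base-2^16 digit choice are my own assumptions, not the library's actual interface. The point is that when the resize request is only partly granted, the algorithm simply stops writing and the caller gets the low digits of the true result:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A magnitude type with a fixed digit capacity.  resize() grants as
// much as it can and reports what was granted.
struct fixed_magnitude {
    std::size_t max_digits;
    std::vector<std::uint16_t> digits;  // base 2^16, least-significant first

    explicit fixed_magnitude(std::size_t cap) : max_digits(cap) {}

    std::size_t resize(std::size_t wanted) {
        digits.assign(wanted < max_digits ? wanted : max_digits, 0);
        return digits.size();
    }
};

// Add two digit ranges into 'out'; digits beyond the granted size are
// silently dropped, leaving only the low bits of the correct sum.
void add_truncating(const std::vector<std::uint16_t>& a,
                    const std::vector<std::uint16_t>& b,
                    fixed_magnitude& out) {
    std::size_t full = (a.size() > b.size() ? a.size() : b.size()) + 1;
    std::size_t got = out.resize(full);
    std::uint32_t carry = 0;
    for (std::size_t i = 0; i < got; ++i) {
        std::uint32_t da = i < a.size() ? a[i] : 0;
        std::uint32_t db = i < b.size() ? b[i] : 0;
        std::uint32_t s = da + db + carry;
        out.digits[i] = static_cast<std::uint16_t>(s);  // low 16 bits
        carry = s >> 16;
    }
    // Any carry left over is discarded -- that's the truncation.
}
```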
I'd like to hear more specifics about this idea...if you have more specifics at the moment, that is.
- Different representations of integers can't easily be encapsulated behind a concept that the generic algorithms can use. There are quite a few possible representations. A class can pick one representation and stick with it. Generic algorithms would need to handle anything.
I'm envisioning a "generic integer" as not much more than a range of digits representing the base-B expansion, possibly with an optional sign, or with an implied sign based on a 2's-complement representation. Is that too vague, or would that not cover some reasonable integer implementations?
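For illustration, the "range of digits" view might look like this: any integer type exposes its least-significant-first base-B expansion as an iterator range, and the algorithm never sees the underlying storage. The function name is hypothetical:

```cpp
#include <cstdint>
#include <list>
#include <vector>

// Reconstruct a (small) value from a least-significant-first digit
// range in base B.  Any forward iterator works, so whether the digits
// live in a vector, a list, or a raw array doesn't matter to the
// algorithm.
template <typename Iter>
std::uint64_t value_from_digits(Iter first, Iter last, std::uint64_t B) {
    std::uint64_t value = 0, place = 1;
    for (; first != last; ++first) {
        value += *first * place;
        place *= B;
    }
    return value;
}
```

The same range abstraction would let one algorithm serve both a heap-allocated bignum and a fixed-size embedded one.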
I think that can be handled by putting the onus of dealing with it onto the person providing the new type.
E.g., by supplying a policy class of fundamental operations...? For example, it seems that for a generic addition algorithm, one needs to be able to add two digits and generate a truncated sum + carry bit. I can see at least two ways to do this, depending on whether your digits are base 2^b or base 2^(b/2), where b is the number of bits in an unsigned int (say). The latter makes it easy to get a sum + carry in "regular" C++; the former is more compact and would likely benefit from some assembly-language specializations to detect carries in additions. Since there isn't a canonical "add with carry" algorithm, it could be supplied by a policy parameter. [Maybe there *is* a canonical "add with carry" algorithm, but hopefully the overall point is clear.] Perhaps such discussions on the design of a generic interface are best left for after the review. I couldn't help but throw some ideas out there already, though ;) - Jeff
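A minimal sketch of the two digit-ops policies described above, assuming b = 32 for concreteness (the struct and function names are placeholders, not a proposed interface). With half-width digits the carry simply falls out of the wide intermediate sum; with full-width digits it has to be detected by checking for wraparound:

```cpp
#include <cstdint>

// Half-width digits (base 2^16 when unsigned int is 32 bits): the
// intermediate sum fits in a wider type, so the carry is just its
// high bits.
struct halfwidth_digit_ops {
    using digit = std::uint16_t;

    // Returns the truncated sum; 'carry' is updated in place (0 or 1).
    static digit add(digit a, digit b, std::uint32_t& carry) {
        std::uint32_t s = std::uint32_t(a) + b + carry;
        carry = s >> 16;              // carry bit from the wide sum
        return static_cast<digit>(s); // low 16 bits
    }
};

// Full-width digits (base 2^32): no wider type assumed, so detect the
// carry portably from unsigned wraparound.  This is the case where
// assembly "add with carry" instructions would pay off.
struct fullwidth_digit_ops {
    using digit = std::uint32_t;

    static digit add(digit a, digit b, digit& carry) {
        digit s = a + b;
        digit wrapped = s < a;      // first addition overflowed?
        digit s2 = s + carry;
        carry = wrapped | (s2 < s); // or the carry-in pushed it over
        return s2;
    }
};
```

A generic addition algorithm parameterized on such a policy would loop over the digit ranges calling `Ops::add`, never caring which representation is underneath.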