
On Thu, Jun 10, 2010 at 5:54 PM, Chad Nelson <chad.thecomfychair@gmail.com> wrote:
> On 06/10/2010 11:58 AM, Giovanni Piero Deretta wrote:
>> The alternative, you say, is "I'd have to have two functions for each operation". Can you elaborate?
> One to calculate how much memory would be needed, and (probably) allocate it -- the calculation is different for every operation. The other to do the actual calculation.
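If I understand that correctly, the pattern is roughly the following. (A minimal, untested sketch; the digit type and the names add_size/add are invented here for illustration and are not xint's actual interface.)

    #include <vector>
    #include <algorithm>
    #include <cstddef>
    #include <boost/cstdint.hpp>

    typedef std::vector<boost::uint32_t> digits_t;   // least-significant digit first

    // Pass 1: an upper bound on the number of digits the sum can need.
    std::size_t add_size(const digits_t &a, const digits_t &b) {
        return std::max(a.size(), b.size()) + 1;      // +1 for a possible carry
    }

    // Pass 2: the computation itself, into storage sized up front.
    // (Assumes out is a distinct object from a and b.)
    void add(digits_t &out, const digits_t &a, const digits_t &b) {
        out.assign(add_size(a, b), 0);
        boost::uint64_t carry = 0;
        for (std::size_t i = 0; i + 1 < out.size(); ++i) {
            boost::uint64_t sum = carry;
            if (i < a.size()) sum += a[i];
            if (i < b.size()) sum += b[i];
            out[i] = static_cast<boost::uint32_t>(sum);
            carry  = sum >> 32;
        }
        out.back() = static_cast<boost::uint32_t>(carry);
        while (out.size() > 1 && out.back() == 0) out.pop_back();   // strip leading zeros
    }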
>> I do not buy this:
> Too bad, it's on sale and you're missing a really great deal. ;-)
:) I would actually find some use for a big int class, and I hope that yours finds its way into Boost, as it is the most promising one so far.
>> As an example, std::copy does not need to know the size of the output sequence, nor how to grow it (one can, for example, use back_inserter for automatic growth). Couldn't xint use a similar strategy?
> std::copy doesn't need to know how to grow the output sequence, but it *does* need to be passed the back_inserter, which needs to know how to grow it.
yes, so?
> What's the difference between having a calculation function that takes the components of raw integers and a class to reallocate them, and simply sending in the raw integer objects that already have all the components and know how to reallocate themselves?
Those are not the options. The choice is between a function that takes one or more generic input ranges plus an output range (as most std algorithms do) and a function that only works on one specific integer type. The advantage is that the big-int algorithms would work on any sequence of bytes (or whatever the smallest unit of computation is... probably ints?). If I wanted to, I could simply use a vector<char> (or deque, or whatever) together with back_inserter, without having to write anything more, and without needing to use your raw integer object. As you have experienced yourself on this list, the bigint class itself has proved to be the most contentious issue, since everybody has his own requirements (move vs reference count, fixed buffer vs reallocation, etc.).
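To make that concrete, here is a rough, untested sketch of what such an iterator-based interface could look like. The digit type and the name add_digits are placeholders, not a proposal for actual signatures:

    #include <iterator>
    #include <vector>
    #include <boost/cstdint.hpp>

    // Adds two little-endian digit sequences and writes the result through an
    // output iterator; the algorithm knows nothing about how (or whether) the
    // destination grows.
    template <class InIt1, class InIt2, class OutIt>
    OutIt add_digits(InIt1 first1, InIt1 last1,
                     InIt2 first2, InIt2 last2, OutIt out)
    {
        boost::uint64_t carry = 0;
        while (first1 != last1 || first2 != last2 || carry != 0) {
            boost::uint64_t sum = carry;
            if (first1 != last1) sum += *first1++;
            if (first2 != last2) sum += *first2++;
            *out++ = static_cast<boost::uint32_t>(sum);
            carry = sum >> 32;
        }
        return out;
    }

    // Usage with an ordinary container, growth handled by back_inserter:
    //   std::vector<boost::uint32_t> a, b, result;
    //   add_digits(a.begin(), a.end(), b.begin(), b.end(),
    //              std::back_inserter(result));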
>> For a heap-allocated big int the output iterator can take care of any reallocation, while for a fixed-size big int, in case of overflow, one could choose between UB, an assert, or throwing. In principle there is no reason for algorithms to deal with memory allocation details.
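(To illustrate the fixed-size case: an output iterator over a caller-supplied buffer that throws on overflow, i.e. picking the third of those options. Again only a sketch with invented names; it could be passed as the out parameter of an add_digits-style algorithm like the one above.)

    #include <iterator>
    #include <stdexcept>
    #include <cstddef>
    #include <boost/cstdint.hpp>

    // Output iterator over a fixed, caller-supplied buffer.  Writing past the
    // end throws instead of reallocating (an assert or UB would be the other
    // possible policies).
    class bounded_output_iterator
        : public std::iterator<std::output_iterator_tag, void, void, void, void>
    {
        boost::uint32_t *pos_, *end_;
    public:
        bounded_output_iterator(boost::uint32_t *buf, std::size_t capacity)
            : pos_(buf), end_(buf + capacity) {}

        bounded_output_iterator &operator*()     { return *this; }
        bounded_output_iterator &operator++()    { return *this; }
        // Postfix ++ returns *this so the usual "*it++ = x" idiom advances
        // this iterator rather than a temporary copy.
        bounded_output_iterator &operator++(int) { return *this; }

        // Assignment both stores the digit and advances the cursor.
        bounded_output_iterator &operator=(boost::uint32_t digit) {
            if (pos_ == end_)
                throw std::overflow_error("fixed-size integer overflow");
            *pos_++ = digit;
            return *this;
        }
    };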
> Any algorithm that has to modify the size of a buffer has to deal with memory allocation, one way or another, at some level.
The fact that std::back_inserter deals with memory allocation in no way implies that std::copy (or any other algorithm) needs to deal with memory allocation. BTW, the fact that it doesn't need to does not mean that it can't: it is perfectly acceptable (and in fact desirable) for std::copy to be specialized to handle back_inserter iterators to known containers optimally (for example by explicitly calling reserve).
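A sketch of that kind of specialization, applied to one's own copy-like algorithm rather than to std::copy itself (std::back_insert_iterator keeps its container protected, so a library would typically supply its own inserter type; everything below is illustrative, not existing xint or standard-library machinery):

    #include <algorithm>
    #include <iterator>
    #include <vector>
    #include <cstddef>

    // A back-inserting output iterator that also exposes its container, so an
    // algorithm that knows the destination size can call reserve() first.
    template <class Container>
    struct reserving_inserter
        : std::iterator<std::output_iterator_tag, void, void, void, void>
    {
        Container *c;
        explicit reserving_inserter(Container &cont) : c(&cont) {}
        reserving_inserter &operator*()     { return *this; }
        reserving_inserter &operator++()    { return *this; }
        reserving_inserter &operator++(int) { return *this; }
        reserving_inserter &operator=(const typename Container::value_type &v) {
            c->push_back(v);
            return *this;
        }
    };

    // Generic version: any output iterator, no knowledge of the storage.
    template <class InIt, class OutIt>
    OutIt copy_digits(InIt first, InIt last, OutIt out) {
        return std::copy(first, last, out);
    }

    // Overload for the known inserter-to-vector case: reserve once, then copy.
    template <class InIt, class T>
    reserving_inserter< std::vector<T> >
    copy_digits(InIt first, InIt last, reserving_inserter< std::vector<T> > out) {
        out.c->reserve(out.c->size() + std::distance(first, last));
        return std::copy(first, last, out);
    }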
> The more indirect you make it, the harder it is to get speed out of it. I'm trying to make the fastest pure-C++ large-integer implementation I can.
Generic programming in C++ is about having the most flexible abstraction with the smallest abstraction penalty. What do you think will prevent you from extracting the highest performance? There are probably many people on the Boost mailing list who could give advice.

-- 
gpd