
Did I ever mention that I love what you have done with this stuff? It's simply great programming!
All your fault for writing the arithmetic code though ;-)
But let's face it, my O(N^2) multiplication is very slowwww once the digit count gets large.
Up to a point - for smaller digit counts it's usually the best you can do in practice.
I've got to do better than that!
What would you say if I suggested, late in the game, that I add *at least* a naive FFT multiplication based on a complex radix-2 decimation-in-time transform? We won't break any speed records or catch GMP at a million digits, but at least we won't grind to a standstill at 10,000 decimal digits.
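Roughly what I have in mind, as a toy Python sketch (assuming base-10 digit vectors and no error analysis; `fft` and `multiply` are just illustrative names, and the real code would use a larger base and a proper bound on the floating-point error):

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 decimation-in-time FFT; len(a) must be a power of 2.
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y):
    # x, y: little-endian base-10 digit lists. Pad to the next power of 2
    # that can hold the full product, so the cyclic convolution is exact.
    n = 1
    while n < len(x) + len(y):
        n *= 2
    fx = fft([complex(d) for d in x] + [0j] * (n - len(x)))
    fy = fft([complex(d) for d in y] + [0j] * (n - len(y)))
    prod = fft([fx[k] * fy[k] for k in range(n)], invert=True)
    # Scale by 1/n, round to the nearest integer, then propagate carries.
    digits = [round(v.real / n) for v in prod]
    carry = 0
    for i in range(len(digits)):
        carry, digits[i] = divmod(digits[i] + carry, 10)
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()
    return digits

# multiply([2,1], [3,1]) == [6,5,1], i.e. 12 * 13 = 156
```

The digit-wise products stay small here, so double precision carries the rounding comfortably for moderate sizes; pushing past tens of thousands of digits is where the base choice and error analysis start to matter.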
In your opinion, should I take a crack at this if I find a spare afternoon?
Maybe. My understanding is that FFT doesn't become viable until the digit count grows truly huge? Would the Karatsuba algorithm become viable sooner and help more users? My gut feeling is that the current implementation is perfectly fine up to a few hundred (and maybe a thousand?) decimal places, which will probably take care of most use cases... well, my use cases anyway! So maybe examples are a higher priority for now? We could also go through the current code looking for optimisation opportunities - for example, would the inversion code be more efficient using Halley rather than Newton iteration (see for example http://numbers.computation.free.fr/Constants/Algorithms/inverse.html)? Also multiplication and division by integer (relatively common operations?) could be handled as special cases.
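On the Newton-vs-Halley question, here's a quick feel for the trade-off (a toy Python sketch using decimal as a stand-in for our big-num type; `invert` and the seed value are just for illustration, and I'm reading "Halley" as the division-free order-3 iteration x*(1 + h + h^2) with h = 1 - a*x, in the style of that page's higher-order schemes):

```python
from decimal import Decimal, getcontext

def invert(a, step, seed, digits=50):
    # Iterate x -> step(a, x) until |1 - a*x| <= 10**-digits;
    # returns the approximation to 1/a and the iteration count.
    getcontext().prec = digits + 10
    x = Decimal(seed)
    target = Decimal(10) ** -digits
    count = 0
    while abs(1 - a * x) > target:
        x = step(a, x)
        count += 1
    return x, count

def newton(a, x):
    # Second order: the error h = 1 - a*x is squared each iteration,
    # for two multiplications per step.
    return x * (2 - a * x)

def halley(a, x):
    # Third order: a*x_new = (1 - h)(1 + h + h^2) = 1 - h^3, so h is
    # cubed each iteration, for one extra multiplication per step.
    h = 1 - a * x
    return x * (1 + h + h * h)

a = Decimal(7)
_, newton_steps = invert(a, newton, "0.1")
_, halley_steps = invert(a, halley, "0.1")
print(newton_steps, halley_steps)  # the order-3 scheme needs fewer iterations
```

Whether fewer-but-costlier iterations win in practice depends on the relative cost of a big-num multiplication at the working precision, so it would want benchmarking against the real type rather than decimal.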
Of course, in the future we need to cooperate with FFT specialists who could get the speed of the big-num types (with BPL licensing) up to state-of-the-art. But maybe we can get just a bit more functionality in now, before review and a potential first distribution.
For sure, there's always much more that can be done, it's just a case of knowing when to stop! ;-) Cheers, John.