
On Fri, 23 Mar 2007 20:32:55 +0100, Kevin Sopp <baraclese@googlemail.com> wrote:
(1) evaluate the need for a specific allocator because std::allocator doesn't provide a reallocate method, and growing allocated space could be a better policy than a create/destroy policy;
Both can be supported easily because you can simulate a realloc via std::allocator functions. So you just need to specialize a struct on the allocator type and then either simulate the realloc() or call the real realloc().
Ok, I was thinking of something like this:

    template< typename BlockType = block_t,
              typename Allocator = bigint_allocator<BlockType> >
    class big_integer;

where block_t is architecture dependent (but we can think of it as unsigned int) and we have:

    template< typename T >
    class bigint_allocator : public std::allocator<T>
    {
    public:
        pointer reallocate( pointer p, size_type n );
    };

I've omitted ctors and typedefs.
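The reallocate itself could be simulated on top of the inherited allocate/deallocate, roughly like this (an untested sketch; I've added an old_n parameter here, since std::allocator doesn't remember block sizes and the copy needs to know how much to preserve):

    #include <cstring>  // std::memcpy
    #include <memory>   // std::allocator

    template< typename T >
    class bigint_allocator : public std::allocator<T>
    {
    public:
        typedef typename std::allocator<T>::pointer   pointer;
        typedef typename std::allocator<T>::size_type size_type;

        // std::allocator cannot extend a block in place, so grow by
        // allocate + copy + deallocate (fine for POD block types).
        pointer reallocate( pointer p, size_type old_n, size_type new_n )
        {
            pointer q = this->allocate( new_n );
            std::memcpy( q, p, old_n * sizeof(T) );
            this->deallocate( p, old_n );
            return q;
        }
    };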
Both can be supported easily because you can simulate a realloc via std::allocator functions
Do you mean that it's possible to use std::allocator methods to grow the allocated space without the need to initialize it again with the old data? Or do you mean only that it's possible to implement both policies, the first with std::realloc and the second using std::allocator? In the second case the bigint_allocator template signature could become template< typename T, typename policy_tag >, where policy_tag could be realloc_policy_tag or destroycreate_policy_tag (two empty structs; it would also be possible to use an enum as the template parameter).
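To make that concrete, the tag could select the implementation at compile time through partial specialization, something like this (a hypothetical sketch, with allocate/deallocate omitted):

    #include <cstdlib>  // std::realloc
    #include <cstring>  // std::memcpy
    #include <memory>   // std::allocator

    struct realloc_policy_tag {};
    struct destroycreate_policy_tag {};

    template< typename T, typename policy_tag >
    class bigint_allocator;

    // Policy 1: grow (possibly in place) via the C runtime; requires the
    // blocks to be POD and the memory to come from std::malloc.
    template< typename T >
    class bigint_allocator< T, realloc_policy_tag >
    {
    public:
        T* reallocate( T* p, std::size_t /*old_n*/, std::size_t new_n )
        {
            return static_cast<T*>( std::realloc( p, new_n * sizeof(T) ) );
        }
    };

    // Policy 2: allocate-copy-deallocate on top of std::allocator.
    template< typename T >
    class bigint_allocator< T, destroycreate_policy_tag >
    {
        std::allocator<T> alloc_;
    public:
        T* reallocate( T* p, std::size_t old_n, std::size_t new_n )
        {
            T* q = alloc_.allocate( new_n );
            std::memcpy( q, p, old_n * sizeof(T) );
            alloc_.deallocate( p, old_n );
            return q;
        }
    };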
(3) use different algorithms according to the size of the numbers involved; the GMP library can be taken as an example and Knuth's TAOCP Vol. 2 as a reference for the algorithms' implementation;
One can make these thresholds user definable at compile time because they may be different for each platform. One could write a simple test which prints suggested thresholds.
it's a good idea :)
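For instance (a hypothetical sketch; schoolbook_multiply, karatsuba_multiply and the default value are all made-up names/numbers, to be filled in by such a test):

    #include <cstddef>

    // Cutover between algorithms, tunable per platform at compile time.
    #ifndef BIGINT_KARATSUBA_THRESHOLD
    #define BIGINT_KARATSUBA_THRESHOLD 32  // in blocks; made-up default
    #endif

    template< typename BlockType >
    void schoolbook_multiply( const BlockType*, std::size_t,
                              const BlockType*, std::size_t, BlockType* );
    template< typename BlockType >
    void karatsuba_multiply( const BlockType*, std::size_t,
                             const BlockType*, std::size_t, BlockType* );

    template< typename BlockType >
    void multiply( const BlockType* a, std::size_t an,
                   const BlockType* b, std::size_t bn,
                   BlockType* result )
    {
        // Below the threshold the O(n^2) schoolbook method wins because
        // Karatsuba's recursion overhead dominates on small operands.
        if( an < BIGINT_KARATSUBA_THRESHOLD || bn < BIGINT_KARATSUBA_THRESHOLD )
            schoolbook_multiply( a, an, b, bn, result );
        else
            karatsuba_multiply( a, an, b, bn, result );
    }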
(10) about fixed size big integers I had thought of something like this: template< unsigned int N > class big_int; where N is the number of bits to be used;
Or template<unsigned int N, typename T = unsigned int> where T is the base type of the value array, i.e. array<T> data_;
oh yes, I actually meant something like that.
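Roughly (a sketch; N gets rounded up to whole blocks, and a boost::array<T, num_blocks> member would do just as well):

    #include <climits>  // CHAR_BIT
    #include <cstddef>

    template< unsigned int N, typename T = unsigned int >
    class big_int
    {
        static const std::size_t block_bits = sizeof(T) * CHAR_BIT;
        static const std::size_t num_blocks = ( N + block_bits - 1 ) / block_bits;

        T data_[ num_blocks ];  // fixed storage, no heap allocation
    public:
        // ctors, arithmetic operators etc. omitted
    };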
(4) decide if we have to use C++ exception handling constructs for arithmetic exceptions (divide by zero, etc.) or simply abort the program; this is essentially a matter of performance;
Not just a matter of performance but also one of interface; usually a bigint will be used with data that comes from outside the program, so the programmer has to be prepared to catch bigint exceptions. This could be parameterized as <bool use_exceptions = true/false>. A member variable to hold some state about the bigint will be useful; that way you can check for overflow, divide by zero, and encode +inf, -inf, etc.
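Something like this, I suppose (a hypothetical sketch of the state member and the parameterized error path):

    #include <stdexcept>

    enum bigint_state { st_normal, st_overflow, st_nan, st_pos_inf, st_neg_inf };

    template< bool use_exceptions = true >
    class big_integer
    {
        bigint_state state_;
        // digit storage omitted

        void on_divide_by_zero()
        {
            if( use_exceptions )              // compile-time constant, the
                throw std::domain_error(      // dead branch is optimized away
                    "big_integer: division by zero" );
            else
                state_ = st_nan;              // record the error in the state
        }
    public:
        big_integer() : state_( st_normal ) {}
        bigint_state state() const { return state_; }
    };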
(12) it would be nice to support infinity and indefinite forms, but it could mean performance problems because we'd have to evaluate all the possible cases; maybe it would be better to write a specific wrapper/adaptor class for such a goal, and this adaptor class could also be used for the standard integral types;
I suggest, as above, having a state variable; I suspect checking for the states gets lost in the noise, especially when the numbers get large.
I don't know whether checking for states is negligible compared to the complexity of the multi-precision algorithms. It's possible, I should try. Parameterizing for infinity, NaN and exceptions is ok; anyway, what I wanted to point out is that if managing infinity and undefined states costs performance, then it is better to implement different classes, and that holds even when using template parameters: for instance, the classes big_integer<WITH_INF> and big_integer<WITHOUT_INF> (WITH_INF = true, WITHOUT_INF = false; I've omitted the other template parameters) would extend a common class so as to share as much as possible, but they would be two different specializations of the <bool use_inf> template.
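Concretely, something like this (a sketch with all the other parameters omitted, as above):

    // Shared digit storage and the core algorithms, with no infinity checks.
    template< typename BlockType >
    class big_integer_base
    {
    protected:
        // digit vector, sign, schoolbook/Karatsuba routines, ...
    };

    const bool WITH_INF    = true;
    const bool WITHOUT_INF = false;

    template< bool use_inf >
    class big_integer;

    // Specialization without infinity: operators forward straight to the base.
    template<>
    class big_integer< WITHOUT_INF > : public big_integer_base<unsigned int>
    {
    public:
        // operator+ etc. call the base algorithms directly
    };

    // Specialization with infinity: operators test a state flag first.
    template<>
    class big_integer< WITH_INF > : public big_integer_base<unsigned int>
    {
        bool is_inf_;  // extra state checked before each operation
    public:
        // operator+ etc. test is_inf_, then run the base algorithms
    };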
Question: is it possible to get good performance without implementing the basic algorithms in assembler?
First bigint needs to be portable, then one can easily add optimized paths for different architectures (and assemblers). About the question, I don't know - if you do a lot of number crunching, fast is never fast enough ;)
Btw, this paper made big int division much more understandable to me: http://www.cs.ubc.ca/local/reading/proceedings/spe91-95/spe/vol24/issue6/spe...
Kevin
Thanks a lot for your comments and for the article link, I'll read it as soon as possible :)

Regards,
Marco

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/