
For me the rational data type is very important, and it should wrap GMP's mpq_t. It would be nice if the library could combine mp_int with Boost.Rational to implement its own standalone mp_rational data type, but I would prefer to use the GMP type whenever possible.
Support for mpq_t is on my (long) todo list.
By the way, I have put extensive thought into how to build a good expression template system for multi-precision arithmetic, and I would like to be involved in some of the discussion of the system's design. In particular, the most important thing expression templates can do for multi-precision arithmetic is eliminate allocations and deallocations; the allocations can easily dominate the runtime of a multiprecision algorithm.
I suspect this may depend on the data type. For real-valued types, I've done some experimenting with a port of the LINPACK benchmark to C++ that shows *almost* no difference between expression-template-enabled code (whether mine, or existing wrappers such as mpf_class) and more traditional class abstractions. I still need to do some experimenting and profiling, but it seems the runtime is completely dominated by the cost of multiplication/division, and even though my code generates fewer temporaries than either mpf_class or traditional non-ET wrappers, that simply doesn't take much off the runtime. Of course LINPACK is an old Fortran program written in a very idiomatic style that probably doesn't really stretch the expression template code at all. So arguably we need better test cases - the special functions in Boost.Math would make good candidates, I guess.
There are many options and considerations for optimizing the allocations: object pools, thread safety, and so on. Anyone who uses multiprecision numerical data types in the same way they would use built-in numerical data types is writing code that runs significantly slower than the equivalent C code, because the C API doesn't abstract away allocation and deallocation; it forces the programmer to choose where and when to allocate, and so to put the allocations in the proper place. I saw large speedups in my Polygon library by recycling gmpq_type variables rather than declaring them in the innermost scope in which they were used.
I take it you mean that within the "big number" class type, initialized mpq_t variables get cached and recycled? That's an interesting idea, and not something I've investigated so far, not least because of the issues you highlight. One thing I do want to investigate for fixed-precision real-valued types (with mpfr or mpf backends) is to eliminate the allocation altogether by placing the storage directly in the class (if that makes sense for not-too-large precisions).

I have a couple of questions relating to the Polygon library, if that's OK:

* Is there a concept-check program anywhere that I can plug my types into to verify they meet the library's conceptual requirements?
* Is there a good program for testing "big number" performance within your library (for either large integers or rationals)?

And yes, we would very much like your input ;-)

With regard to expression templates, can I direct you to the "big_number" directory of the sandbox and the wildly out-of-date docs at http://svn.boost.org/svn/boost/sandbox/big_number/libs/math/doc/html/index.h...

In particular, the conceptual requirements for backend types are constantly evolving, both as the needs of new backends are evaluated and as I come up with new "good ideas" ;-) Nonetheless I would much welcome your input, and will try to add a rational type backend ASAP for you to experiment with.

Regards, John.