
I took a look at your linked source and there were quite a few files there. Which ones pertain to large integer types?
All of them ;-) boost/mp_math/mp_int.hpp is the file that should be included; it pulls in all the other headers.
Can your digit_type be a 64 bit unsigned integer? If so, you have pretty much the ideal variable precision integer.
It can, but only if you have a 128 bit word type (necessary for intermediate results in the calculations), or if all important calculations were coded in assembler, in which case I could take advantage of add-with-carry instructions and handle carries in place.
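To illustrate the constraint, here is a minimal sketch (not mp_int's actual code): each digit-by-digit multiplication produces an intermediate result twice as wide as a digit, so 32 bit digits pair with a 64 bit word, while 64 bit digits would need a 128 bit word or assembler.

#include <cstdint>
#include <cstddef>

typedef std::uint32_t digit_type; // one limb of the big integer
typedef std::uint64_t word_type;  // must hold digit * digit plus carries

// Multiply the n-digit number x by the single digit y and store the
// result in r, which must have room for n + 1 digits.
void multiply_by_digit(const digit_type* x, std::size_t n,
                       digit_type y, digit_type* r)
{
    word_type carry = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        word_type t = word_type(x[i]) * y + carry; // needs twice the digit width
        r[i]  = digit_type(t);                     // low half becomes the digit
        carry = t >> 32;                           // high half becomes the carry
    }
    r[n] = digit_type(carry);
}

With digit_type set to std::uint64_t, the line computing t would need a 128 bit word_type, which standard C++ does not provide, hence the dependence on compiler extensions or hand-written assembler.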
--snip--
private:
    digit_type* digits_;
    size_type used_, capacity_;
    int sign_;
};
--end snip--
Why not simply use std::vector<digit_type, Allocator> instead of managing your own array?
Because I would need direct access to its internal size member.
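For illustration only (this is not mp_int's code), the kind of pattern that direct access to the size member enables is writing into already reserved storage and then bumping the size afterwards; std::vector would instead require a resize(), which value-initializes the new elements before they are overwritten.

#include <cstdint>
#include <cstddef>

typedef std::uint32_t digit_type;
typedef std::size_t   size_type;

struct raw_int
{
    digit_type* digits_;
    size_type   used_, capacity_;

    // Add a single digit into *this (assumes used_ >= 1); the number only
    // grows if there is a final carry, in which case the size member is
    // bumped directly.
    void add_digit(digit_type d)
    {
        std::uint64_t t = std::uint64_t(digits_[0]) + d;
        digits_[0] = digit_type(t);
        digit_type carry = digit_type(t >> 32);
        for (size_type i = 1; carry && i < used_; ++i)
        {
            t = std::uint64_t(digits_[i]) + carry;
            digits_[i] = digit_type(t);
            carry = digit_type(t >> 32);
        }
        if (carry)
        {
            // assumes capacity_ > used_ was ensured beforehand
            digits_[used_] = carry;
            ++used_; // no value-initialization of the new digit
        }
    }
};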
mp_int is actually overkill for my immediate problem, but would certainly do the job and allow me to make a general solution. It looks like something we need in a place like Boost.
I managed to solve my immediate problem by distributing division and recovering the precision lost to truncation with modulus operations. I had to use 64 bit unsigned integers, with the associated sign stored in a separate integer, to perform hastily hacked-up 65 bit signed arithmetic. I used this to compute products of deltas between signed 32 bit integers, and worked with those values to compute the intersection of two line segments in a way that is robust to overflow and truncates the result minimally and predictably.
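The "65 bit signed" trick can be sketched roughly like this (illustrative only, not the poster's actual code): keep a separate sign and a 64 bit unsigned magnitude. Since the delta of two signed 32 bit integers has magnitude below 2^32, the product of two such deltas has magnitude below 2^64 and fits.

#include <cstdint>

struct signed65
{
    int           sign;      // -1, 0 or +1
    std::uint64_t magnitude; // absolute value, up to 2^64 - 1
};

// Product of the deltas (b - a) and (d - c) of signed 32 bit integers,
// computed without overflow.
signed65 delta_product(std::int32_t a, std::int32_t b,
                       std::int32_t c, std::int32_t d)
{
    std::int64_t d1 = std::int64_t(b) - a; // |d1| < 2^32, fits in int64
    std::int64_t d2 = std::int64_t(d) - c;
    std::uint64_t m1 = d1 < 0 ? std::uint64_t(-d1) : std::uint64_t(d1);
    std::uint64_t m2 = d2 < 0 ? std::uint64_t(-d2) : std::uint64_t(d2);
    signed65 r;
    r.magnitude = m1 * m2; // < 2^64, so no overflow in 64 bit unsigned
    r.sign = r.magnitude == 0 ? 0 : ((d1 < 0) != (d2 < 0) ? -1 : 1);
    return r;
}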
So what you actually need is a fixed precision integer that lives on the stack. mp_int involves memory allocation and algorithms designed for very large integers, which is probably overkill.
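As a rough sketch of what such a type could look like (purely illustrative, not an existing Boost component): the digit array is a member, so no dynamic allocation is involved and the maximum precision is fixed at compile time.

#include <cstdint>
#include <cstddef>

template <std::size_t MaxDigits>
struct fixed_precision_int
{
    typedef std::uint32_t digit_type;

    digit_type  digits[MaxDigits]; // storage lives inside the object
    std::size_t used;              // number of digits currently in use
    int         sign;
};

// e.g. fixed_precision_int<4> holds up to 128 bits and fits on the stack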
When do you think your mp_int library will be ready for review?
It will be there soon.