
* I think that the fact that operands of different backends cannot be mixed in the same operation prevents some interesting operations:
I would expect the result of unary operator-() to always be signed. Is this operation defined for unsigned backends?
It is, but I'm not sure it's useful. I can't find it now in the documentation for mp_number, nor in the code. Could you point me to where it is defined?
In the code? mp_number_base.cpp#583. In the docs, it looks like I missed it :-( Will add it (likewise unary +).
It's an mp_uint128_t, and the result is the same as you would get from a built-in 128-bit unsigned type that does 2's complement arithmetic. This is intentional, as the intended use for fixed-precision cpp_int's is as a replacement for built-in types. I can understand that you want the cpp_int class to behave like the built-in types, but I can also understand that others expect a high-level numeric class not to suffer from the inconveniences of the built-in types and to be closer to the mathematical model. I expected mp_number to accommodate these different expectations via different backends, but maybe my expectations are wrong.
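A minimal illustration of the behaviour being described, using a built-in unsigned type as the stand-in (the mp_uint128_t case is only mentioned in a comment, since the review version's headers aren't shown here):

    #include <iostream>

    int main()
    {
        unsigned int x = 1;
        unsigned int y = -x;    // wraps modulo 2^32 on a typical platform: 4294967295
        std::cout << y << "\n";
        // Likewise, negating mp_uint128_t(1) is described above as yielding
        // 2^128 - 1 (all bits set), still an unsigned 128-bit value.
        return 0;
    }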
There are a lot of different possible behaviours and only a limited amount of time :-( At this stage cpp_int is intended to be a basic "proof of principle" implementation: useful, but it doesn't provide everything that could be done.
It would be great if the tutorial could show that it is nevertheless possible to add an mp_uint128_t and an mp_int256_t, or isn't it possible? I guess it is, but a conversion is needed before adding the operands. I don't know whether this behaviour hides some possible optimizations.
Not currently possible (compiler error).
Why? Isn't mp_uint128_t convertible to mp_int256_t?
Because we deliberately chose not to provide it. On a technical level the code looks like:

    template <class Backend>
    some-return-type operator+(const mp_number<Backend>&, const mp_number<Backend>&);

so the operator overload cannot be deduced for differing backends. Interestingly, had these been non-templates it would have worked as you expected (the conversion would have been found).
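A stripped-down sketch (not the library's actual code) of why deduction fails for mixed backends; the point is that template argument deduction never considers user-defined conversions:

    template <class Backend>
    struct mp_number {};

    template <class Backend>
    mp_number<Backend> operator+(const mp_number<Backend>&, const mp_number<Backend>&)
    {
        return mp_number<Backend>();
    }

    struct backend_a {};
    struct backend_b {};

    int main()
    {
        mp_number<backend_a> a;
        mp_number<backend_b> b;
        // a + b;   // error: Backend cannot be deduced as both backend_a and backend_b
        (void)a; (void)b;
        return 0;
    }

Had operator+ been an ordinary non-template function, the usual implicit conversions would have been considered during overload resolution, which is the difference John mentions.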
I thought about mixed operations early on and decided it was such a can of worms that I wouldn't go there at this time. Basically there are enough design issues to argue about already ;-)
Such as, for example?
Well, you've raised quite a few ;-) Interface, naming conventions, expression templates, scope....
However, consider this: in almost any non-trivial scenario I can think of, if mixed operations are allowed, then expression-template-enabled operations will yield a different result from non-expression-template operations. Why? Could you clarify?
Consider the Horner polynomial evaluation example:

    a = (c1 * x + c2) * x + c3;

Expression templates transform this into:

    a = c1 * x;   // evaluated in place, using "a" as temporary storage if required
    a += c2;
    a *= x;
    a += c3;

Now suppose that the constants cN, x and a all have different precisions. Rounding will change depending on whether you evaluate using temporaries as "(c1 * x + c2) * x + c3" and then assign (and possibly round) to "a", or evaluate in place as above.
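The same effect can be seen with built-in types, which may make the point more concrete (float standing in for the destination's lower precision, double for the wider operands; this is only an analogy, not the library's code):

    #include <cstdio>

    int main()
    {
        double c1 = 1.1, c2 = 2.3, c3 = 3.7, x = 0.9;

        // Evaluate the whole expression at the wider precision, then round
        // once on assignment to the narrower destination:
        float a1 = static_cast<float>((c1 * x + c2) * x + c3);

        // Evaluate "in place" as the rewritten sequence would, rounding back
        // to the destination's precision after every step:
        float a2 = static_cast<float>(c1 * x);
        a2 += c2;
        a2 *= x;
        a2 += c3;

        // The rounding points differ, so the two results need not agree.
        std::printf("%.9g\n%.9g\n", a1, a2);
        return 0;
    }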
In fact it's basically impossible for the user to reason about what expression templates might do in the face of mixed precision operations, and when/if promotions might occur. For that reason I'm basically against them, even if, as you say, it might allow for some optimisations in some cases. It is not only an optimization matter: when working with fixed precision it is important to know the precision of the result type of an arithmetic operation so that no information is lost through overflow or loss of resolution.
Right. So make sure you're using the correct type. Basically we're saying "if you want to do mixed precision arithmetic, then you have to decide (by using casts) which of the precisions is the correct one, and specify that explicitly in the code".
I don't understand - how is that different from the number of decimal digits? Oh, I got it now: the decimal digits concern the mantissa, not just the digits of the fractional part?
Yes, all the digits in the mantissa, both the whole and fractional parts. Remember this is *floating* point, not fixed point.
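Put another way, with a built-in double (roughly 16 significant digits) the digits consumed by the whole part are no longer available for the fractional part; the same counting applies to a decimal floating-point backend with N digits. A tiny illustration with double only, as an analogy:

    #include <cstdio>

    int main()
    {
        double d = 123456789.123456789;   // 18 significant digits requested
        std::printf("%.17g\n", d);        // only about 16-17 of them survive
        return 0;
    }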
* What about adding a Throws specification to the documentation of the mp_number operations and the backend requirements?
Well, mostly it would be empty ;-) But yes, there are a few situations where throwing is acceptable, but it's never a requirement. By "empty", do you mean that the operation throws nothing? If so, this is an important feature and/or requirement.
No, I mean we have nothing to say about it - the front end doesn't *require* that the backend do any particular thing, throw or not throw. It's up to the backend to decide if throwing is or is not appropriate.
BTW, I see in the reference: "Type mp_number is default constructible, and both copy constructible and assignable from: ... Any type that the Backend is constructible or assignable from." I would expect to have this information in some way in the tutorial.
It should be in the "Constructing and Interconverting Between Number Types" section of the tutorial, but I will check. I didn't find it there.
It's rather brief, but it's the last item:

    "Other interconversions may be allowed as special cases, whenever the backend allows it:

        mpf_t m;                 // Native GMP type.
        mpf_init_set_ui(m, 0);   // set to a value;
        mpf_float i(m);          // copies the value of the native type."

There are more specifics in the tutorial for each backend; for example the mpfr section has:

    "As well as the usual conversions from arithmetic and string types, instances of mp_number<mpfr_float_backend<N> > are copy constructible and assignable from:
     * The GMP native types mpf_t, mpz_t, mpq_t.
     * The MPFR native type mpfr_t.
     * The mp_number wrappers around those types: mp_number<mpfr_float_backend<M> >, mp_number<mpf_float<M> >, mp_number<gmp_int>, mp_number<gmp_rational>."
If not, what about an mp_number_cast function taking a rounding policy as a parameter?
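Purely as a hypothetical sketch of the kind of interface being asked for here (nothing like this exists in the library, and the names are invented), shown with built-in types so the idea is concrete:

    #include <cmath>
    #include <iostream>

    struct round_toward_zero { static double apply(double v) { return std::trunc(v); } };
    struct round_to_nearest  { static double apply(double v) { return std::round(v); } };

    // Hypothetical cast whose rounding behaviour is chosen by a policy type.
    template <class To, class Policy, class From>
    To number_cast(const From& from)
    {
        return static_cast<To>(Policy::apply(static_cast<double>(from)));
    }

    int main()
    {
        std::cout << number_cast<int, round_toward_zero>(2.7) << "\n";  // 2
        std::cout << number_cast<int, round_to_nearest>(2.7) << "\n";   // 3
        return 0;
    }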
I think it would be very hard to come up with a coherent set of rounding policies that were applicable to all backends... including third-party ones that haven't been thought of yet. Basically I'm ducking that issue at present :-( Could we expect this as an improvement in a future release?
Maybe ;-) I hope we can improve cpp_dec_float at some point; a coherent interface for all types I suspect may elude us, as we can't force a particular backend that's mostly implemented by a third party to follow some model. It would be pretty hard to impose a rounding model on GMP's mpf_t for example - short of reimplementing mpfr - and I can't see us ever doing that!
Yes, but it's irrelevant / an implementation detail. The optional requirements are there for optimisations; the user shouldn't be able to detect which ones a backend chooses to support. Even the conversion constructors?
OK, you got me on those, I was thinking of, say, the various eval_add / eval_multiply / eval_subtract overloads which are just there to optimise certain operations.
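For readers not familiar with the backend requirements, here is a hedged sketch of the overload pattern being referred to: a toy backend with the required two-argument eval_add (result += arg) and the optional three-argument form (result = a + b) that lets the front end avoid a temporary. The real requirements are of course much larger than this:

    #include <iostream>

    struct toy_backend
    {
        long value;   // stand-in for the real representation
    };

    // Required form: result += arg
    void eval_add(toy_backend& result, const toy_backend& arg)
    {
        result.value += arg.value;
    }

    // Optional optimisation: result = a + b computed directly
    void eval_add(toy_backend& result, const toy_backend& a, const toy_backend& b)
    {
        result.value = a.value + b.value;
    }

    int main()
    {
        toy_backend a{1}, b{2}, r{0};
        eval_add(r, a, b);    // used when available, avoiding a temporary
        std::cout << r.value << "\n";
        return 0;
    }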
* Is there a difference between implicit and explicit construction?
Not currently. So, I guess that only implicit construction is supported. I really think that mp_number should provide both constructors if the backend provides them.
Yes, it's now on the list of things to try and implement - I want to keep the code pretty stable at present though, as the review is imminent.
I don't know, I'd have to think about that - what compilers support that now? gcc and clang at least. Does msvc 11?
I don't think so.
* Are implicit conversions possible?
To an mp_number, never from it. Do you mean that there is no implicit conversion from mp_number to a builtin type?
Correct, that would be unsafe and surprising IMO.
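A generic illustration (not the library's API) of why an implicit conversion in that direction would be unsafe: narrowing from a wide type to a builtin silently drops information unless the user is forced to spell it out:

    #include <iostream>

    struct wide                 // toy stand-in for a 128-bit integer
    {
        unsigned long long hi, lo;
        // explicit: the (possibly lossy) narrowing has to be written out
        explicit operator unsigned long long() const { return lo; }
    };

    int main()
    {
        wide w{1, 42};
        // unsigned long long x = w;   // would compile if the operator were
        //                             // implicit, silently dropping the high bits
        unsigned long long x = static_cast<unsigned long long>(w);
        std::cout << x << "\n";
        return 0;
    }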
* Do you plan to add constexpr and noexcept to the interface? After thinking about it a little, I'm wondering whether this is possible when using backends from third-party libraries that don't provide them.
I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. It depends on the backend. But construction from builtins and most of the arithmetic operations could be constexpr.
I'd have to think about that and experiment (it's fixing up the internals to be constexpr-safe that could be tricky). The expression template arithmetic ops could never be constexpr; possibly the non-expression-template ones could be, though.
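A hedged illustration, with a toy wrapper rather than the library itself, of what "construction from builtins could be constexpr" would amount to once the internals were made constexpr-safe:

    struct toy_uint128                       // toy fixed-width type, no dynamic allocation
    {
        unsigned long long hi, lo;
        constexpr toy_uint128(unsigned long long v) : hi(0), lo(v) {}
    };

    constexpr toy_uint128 x(42);             // usable in constant expressions

    int main()
    {
        static_assert(x.lo == 42, "constructed at compile time");
        return 0;
    }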
Good question, although:
* I think it's pretty common to write "mynumber << 4" and expect it to compile. Is it so hard to write "mynumber << 4u"?
Hard, no; surprising that you have to do that, yes. I can just hear the support requests coming in now...
* Why can the "Non-member standard library function support" be used only with floating-point Backend types? Why not with fixed-point types?
Because we don't currently have any to test this with. Well, you could change the documentation to say just that.
OK. Regards, John.