[mp_math] a multi precision signed integer library

Hi all, I've been busy with porting the public domain C library libtommath to C++. The code is quite usable as it is and some simple documentation exists in the libs/mp_math/doc/html subfolder. I've implemented all C++ operators for the mp_int<> type and it can thus be used just like a built-in integer type. Some things like random number generation and primality testing have not been documented yet. Please have a look, I'm looking for feedback at the moment so that I can plan on how to proceed.
http://www.boost-consulting.com/vault/index.php?action=downloadfile&filename=mp_math_v01.zip&directory=Math%20-%20Numerics&
Alternatively go to the Boost vault /Math - Numerics subfolder and download mp_math_v01.zip.
Kevin

"Kevin Sopp" wrote in message
Hi all, I've been busy with porting the public domain C library libtommath to C++. The code is quite usable as it is and some simple documentation exists in the libs/mp_math/doc/html subfolder. I've implemented all C++ operators for the mp_int<> type and it can thus be used just like a built-in integer type. Some things like random number generation and primality testing have not been documented yet. Please have a look, I'm looking for feedback at the moment so that I can plan on how to proceed.
Kevin, Just for your info I designed another interface, see document N2143 on: http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01 The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates). My argument is that as this class is so close to the hardware, and performance is so important, that runtime polymorphism will in this case provide the runtime flexibility needed. In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters. In other words: in my opinion an integer is not a container, so compile-time flexibility is not needed, while runtime flexibility is needed for combination in expressions of signed/unsigned/modular types and base type pointers to derived type objects. Thus these two designs are more or less orthogonal. Regards, Maarten.

Hi Maarten, On Sun, Mar 30, 2008 at 10:13 PM, Maarten Kronenburg <M.Kronenburg@inter.nl.net> wrote:
Just for your info I designed another interface, see document N2143 on:
I knew about your document after I searched the internet for ideas on how to design the interface of the mp_int class template.
http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01 The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates). My argument is that as this class is so close to the hardware, and performance is so important, that runtime polymorphism will in this case provide the runtime flexibility needed.
I never really understood why you chose this approach, it is really different from the rest of the standard library. I understand that you achieve interoperability with the other integer types but this could be hardcoded once all integer types to be standardized are known. Also I think that interoperability of the integer types is a minor concern and as such should not govern the major design decision.
In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters.
The traits parameter is totally implementation defined. I included it because I wanted to give the user a way to customize some of the internal workings of the class, usually in C libraries this is done via macro definitions at compile time. Most users don't need to bother but if you're a power user with very large calculations you have at least a few knobs to fine-tune the internals. There was almost never a question in my mind about the allocator parameter. After all it would be strange not to have one for a class that will potentially allocate a large amount of memory.
In other words: in my opinion an integer is not a container, so compile-time flexibility is not needed, while runtime flexibility is needed for combination in expressions of signed/unsigned/modular types and base type pointers to derived type objects.
It does not look like a container but internally it is one. From a Standard perspective it makes sense to give it an allocator parameter, as this is what a user will expect. Kevin
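For orientation, the rough shape of the interface being discussed is a class template parameterized on traits and an allocator. The sketch below is only a guess at that shape; every name, default, and trait member here is invented, not copied from the library's headers.

    #include <cstddef>
    #include <memory>
    #include <vector>

    // Invented names throughout; the real mp_math headers define their own
    // digit type and tuning knobs.
    struct default_mp_traits
    {
        typedef unsigned int digit_type;                    // machine word used per digit
        static const std::size_t karatsuba_threshold = 80;  // e.g. an algorithm cut-over knob
    };

    template<class Traits = default_mp_traits,
             class Alloc  = std::allocator<typename Traits::digit_type> >
    class mp_int_sketch
    {
        std::vector<typename Traits::digit_type, Alloc> digits_; // magnitude: contiguous digits
        bool neg_;                                                // sign
    public:
        mp_int_sketch() : neg_(false) {}
        mp_int_sketch(long v)                                     // toy conversion, ignores overflow
          : digits_(1, static_cast<typename Traits::digit_type>(v < 0 ? -v : v)),
            neg_(v < 0) {}
        // ... the full set of arithmetic, bitwise and comparison operators,
        // so the type can be used just like a built-in integer
    };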

"Kevin Sopp" wrote in message > Hi Maarten,
On Sun, Mar 30, 2008 at 10:13 PM, Maarten Kronenburg wrote:
Just for your info I designed another interface, see document N2143 on:
I knew about your document after I searched the internet for ideas on how to design the interface of the mp_int class template.
http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01 The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates). My argument is that as this class is so close to the hardware, and performance is so important, that runtime polymorphism will in this case provide the runtime flexibility needed.
I never really understood why you chose this approach, it is really different from the rest of the standard library. I understand that you achieve interoperability with the other integer types but this could be hardcoded once all integer types to be standardized are known. Also I think that interoperability of the integer types is a minor concern and as such should not govern the major design decision.
Interoperability is a major concern, because it is also available for the base type ints: int x = -5; unsigned int y = 10; x -= y; y -= x; etc. The STL is made with templates, and rightly so, because containers have template parameters (e.g. what type they should contain). But this does not mean that other designs should be "templatized". Sometimes programmers need runtime polymorphism to achieve runtime flexibility, and in my opinion the integer class is an example.
In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters.
The traits parameter is totally implementation defined. I included it because I wanted to give the user a way to customize some of the internal workings of the class, usually in C libraries this is done via macro definitions at compile time. Most users don't need to bother but if you're a power user with very large calculations you have at least a few knobs to fine-tune the internals. There was almost never a question in my mind about the allocator parameter. After all it would be strange not to have one for a class that will potentially allocate a large amount of memory.
In my opinion implementation parameters should be kept out of the interface. I agree about the allocator, this is why in my design I have added an allocator, only it is set at runtime.
In other words: in my opinion an integer is not a container, so compile-time flexibility is not needed, while runtime flexibility is needed for combination in expressions of signed/unsigned/modular types and base type pointers to derived type objects.
It does not look like a container but internally it is one. From a Standard perspective it makes sense to give it an allocator parameter, as this is what a user will expect.
In this view everything is internally a container. In my view a container is something that can contain anything (the template type parameter), while an integer can only contain contiguous bits. Yes I agree that the STL uses allocator template parameters, but what if I want to concatenate two standard strings with different allocators? In my design adding two integers with different allocators is possible because they are derived. Whether signed, unsigned, modular, allocated etc., because they are derived from the same base class (in this case integer), they can be mixed in expressions, just like the base type ints. Perhaps you think this interoperability is not so important, but what if large user groups use this and suddenly they require this? In a way this is the same issue as with the impossibility of derivation from the STL string (no virtual destructor). In the case of this integer class I don't think we can ignore this. Regards, Maarten.

On Mon, Mar 31, 2008 at 6:39 PM, Maarten Kronenburg <M.Kronenburg@inter.nl.net> wrote:
"Kevin Sopp" wrote in message > Hi Maarten,
On Sun, Mar 30, 2008 at 10:13 PM, Maarten Kronenburg wrote:
Just for your info I designed another interface, see document N2143 on:
I knew about your document after I searched the internet for ideas on how to design the interface of the mp_int class template.
http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01 The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates). My argument is that as this class is so close to the hardware, and performance is so important, that runtime polymorphism will in this case provide the runtime flexibility needed.
I never really understood why you chose this approach, it is really different from the rest of the standard library. I understand that you achieve interoperability with the other integer types but this could be hardcoded once all integer types to be standardized are known. Also I think that interoperability of the integer types is a minor concern and as such should not govern the major design decision.
Interoperability is a major concern, because it is also available for the base type ints: int x = -5; unsigned int y = 10; x -= y; y -= x; etc.
You can easily make two multi precision integers or policies using different allocators interoperate:
Int<signed, my_alloc> x = -5; Int<unsigned, your_alloc> y = 10; x -= y; y -= x;
This can be almost trivially made to work, I think. In case of x + y, you just need to decide which combination of rhs and lhs policies to use.
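For concreteness, here is a minimal sketch of the mixing mechanism (a toy single-digit Int invented for illustration; the sign policy from the example above is dropped for brevity, and real multiprecision arithmetic is of course far more involved):

    #include <memory>
    #include <vector>

    // Toy single-digit Int, invented only to show how instantiations with
    // different allocators can interoperate.
    template<class Alloc = std::allocator<long> >
    class Int
    {
        std::vector<long, Alloc> digits_; // toy: one "digit" holds the whole value
    public:
        Int(long v) : digits_(1, v) {}

        long value() const { return digits_[0]; }

        // Templated on the rhs allocator: any two instantiations mix freely,
        // because only the rhs digits are read, never its allocator.
        template<class A2>
        Int& operator-=(const Int<A2>& rhs) { digits_[0] -= rhs.value(); return *this; }

        template<class A2>
        Int& operator+=(const Int<A2>& rhs) { digits_[0] += rhs.value(); return *this; }
    };

    // Usage, mirroring the example above (my_alloc/your_alloc are invented
    // allocator types):
    //   Int<my_alloc>   x = -5;
    //   Int<your_alloc> y = 10;
    //   x -= y;  y -= x;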
The STL is made with templates, and rightly so, because containers have template parameters (e.g. what type they should contain). But this does not mean that other designs should be "templatized". Sometimes programmers need runtime polymorphism to achieve runtime flexibility, and in my opinion the integer class is an example.
It is easy to add runtime polymorphism to static polymorphic designs. It is impossible to do it the other way.
In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters.
The traits parameter is totally implementation defined. I included it because I wanted to give the user a way to customize some of the internal workings of the class, usually in C libraries this is done via macro definitions at compile time. Most users don't need to bother but if you're a power user with very large calculations you have at least a few knobs to fine-tune the internals. There was almost never a question in my mind about the allocator parameter. After all it would be strange not to have one for a class that will potentially allocate a large amount of memory.
In my opinion implementation parameters should be kept out of the interface. I agree about the allocator, this is why in my design I have added an allocator, only it is set at runtime.
If you use a scoped allocator (a-la alloca) as a template parameter, a decent compiler could completely optimize it away. It is very hard (although not impossible) to do the same if you have a runtime indirection. -- gpd

"Giovanni Piero Deretta" wrote in message
On Mon, Mar 31, 2008 at 6:39 PM, Maarten Kronenburg wrote:
"Kevin Sopp" wrote in message
Hi Maarten,
On Sun, Mar 30, 2008 at 10:13 PM, Maarten Kronenburg wrote:
Just for your info I designed another interface, see document N2143 on:
I knew about your document after I searched the internet for ideas on how to design the interface of the mp_int class template.
http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01 The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates). My argument is that as this class is so close to the hardware, and performance is so important, that runtime polymorphism will in this case provide the runtime flexibility needed.
I never really understood why you chose this approach, it is really different from the rest of the standard library. I understand that you achieve interoperability with the other integer types but this could be hardcoded once all integer types to be standardized are known. Also I think that interoperability of the integer types is a minor concern and as such should not govern the major design decision.
Interoperability is a major concern, because it is also available for the base type ints: int x = -5; unsigned int y = 10; x -= y; y -= x; etc.
You can easily make two multi precision integers or policies using different allocators interoperate:
Int<signed, my_alloc> x = -5; Int<unsigned, your_alloc> y = 10; x -= y; y -= x;
This can be almost trivially made to work, I think.
In case of x + y, you just need to decide which combination of rhs and lhs policies to use.
Probably you would use conversion operators. But runtime polymorphism allows basetype pointers to derived type objects, and still it works (through the vtable). So runtime polymorphism can never be fully replaced by something else.
The STL is made with templates, and rightly so, because containers have template parameters (e.g. what type they should contain). But this does not mean that other designs should be "templatized". Sometimes programmers need runtime polymorphism to achieve runtime flexibility, and in my opinion the integer class is an example.
It is easy to add runtime polymorphism to static polymorphic designs. It is impossible to do it the other way.
This would mean changing the design. Let's do it right from the beginning. In my opinion my design is the right one, but of course anyone is free to use another.
In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters.
The traits parameter is totally implementation defined. I included it because I wanted to give the user a way to customize some of the internal workings of the class, usually in C libraries this is done via macro definitions at compile time. Most users don't need to bother but if you're a power user with very large calculations you have at least a few knobs to fine-tune the internals. There was almost never a question in my mind about the allocator parameter. After all it would be strange not to have one for a class that will potentially allocate a large amount of memory.
In my opinion implementation parameters should be kept out of the interface. I agree about the allocator, this is why in my design I have added an allocator, only it is set at runtime.
If you use a scoped allocator (a-la alloca) as a template parameter, a decent compiler could completely optimize it away. It is very hard (although not impossible) to do the same if you have a runtime indirection.
Perhaps there are some developments with STL allocators that I don't know about. But after the optimization it still uses an allocator, right? But I admit that an allocator in my design (with an allocated_integer) would have to be set first at runtime, otherwise an error is generated. But users that don't need another allocator just use the base class integer, and don't bother with any allocator. In my design it is also possible to share allocators, as they are static. Regards, Maarten.

On Mon, Mar 31, 2008 at 8:10 PM, Maarten Kronenburg <M.Kronenburg@inter.nl.net> wrote:
"Giovanni Piero Deretta" wrote in message
On Mon, Mar 31, 2008 at 6:39 PM, Maarten Kronenburg wrote:
"Kevin Sopp" wrote in message
Hi Maarten,
On Sun, Mar 30, 2008 at 10:13 PM, Maarten Kronenburg wrote:
Just for your info I designed another interface, see document N2143 on:
I knew about your document after I searched the internet for ideas on how to design the interface of the mp_int class template.
http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01 The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates). My argument is that as this class is so close to the hardware, and performance is so important, that runtime polymorphism will in this case provide the runtime flexibility needed.
I never really understood why you chose this approach, it is really different from the rest of the standard library. I understand that you achieve interoperability with the other integer types but this could be hardcoded once all integer types to be standardized are known. Also I think that interoperability of the integer types is a minor concern and as such should not govern the major design decision.
Interoperability is a major concern, because it is also available for the base type ints: int x = -5; unsigned int y = 10; x -= y; y -= x; etc.
You can easily make two multi precision integers or policies using different allocators interoperate:
Int<signed, my_alloc> x = -5; Int<unsigned, your_alloc> y = 10; x -= y; y -= x;
This can be almost trivially made to work, I think.
In case of x + y, you just need to decide which combination of rhs and lhs policies to use.
Probably you would use conversion operators.
Not necessarily, you just make your operators templated on the policy. No need to do any conversions.
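For concreteness, the binary case could look like the following, building on the toy Int sketched earlier in the thread (the result_alloc trait is invented): the "decision" about which policy the mixed result uses lives in one trait.

    // Invented illustration: the policy decision for mixed operands lives in
    // a trait. Here the result simply inherits the left-hand side's allocator.
    template<class A1, class A2>
    struct result_alloc { typedef A1 type; };

    template<class A1, class A2>
    Int<typename result_alloc<A1, A2>::type>
    operator+(const Int<A1>& lhs, const Int<A2>& rhs)
    {
        Int<typename result_alloc<A1, A2>::type> tmp(lhs.value());
        tmp += rhs;   // reuses the templated compound assignment, no conversion
        return tmp;
    }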
But runtime polymorphism allows basetype pointers to derived type objects, and still it works (through the vtable). So runtime polymorphism can never be fully replaced by something else.
Of course you need runtime indirection if you need dynamic indirection :). But 99.9% of the time you do not need it. When you need it, it is easy to add. BTW, note that std::function and std::shared_ptr use dynamic indirection for their allocators because it basically comes for free:
- function needs to do a dynamic dispatch anyway to copy objects, so doing type erasure on the allocator is basically free.
- shared_ptr needs dynamic dispatch to handle pointers to incomplete types and pointers to non virtual bases, so again, the allocator handling comes for free.
Also in this class you only need to invoke the allocator on construction and on destruction, so not doing type erasure wouldn't really buy you much.
The STL is made with templates, and rightly so, because containers have template parameters (e.g. what type they should contain). But this does not mean that other designs should be "templatized". Sometimes programmers need runtime polymorphism to achieve runtime flexibility, and in my opinion the integer class is an example.
It is easy to add runtime polymorphism to static polymorphic designs. It is impossible to do it the other way.
This would mean changing the design. Let's do it right from the beginning.
Well, having a templated allocator increases both flexibility and performance, so IMHO *is* the right design.
In my opinion my design is the right one, but of course anyone is free to use another.
Of course.
In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters.
The traits parameter is totally implementation defined. I included it because I wanted to give the user a way to customize some of the internal workings of the class, usually in C libraries this is done via macro definitions at compile time. Most users don't need to bother but if you're a power user with very large calculations you have at least a few knobs to fine-tune the internals. There was almost never a question in my mind about the allocator parameter. After all it would be strange not to have one for a class that will potentially allocate a large amount of memory.
In my opinion implementation parameters should be kept out of the interface. I agree about the allocator, this is why in my design I have added an allocator, only it is set at runtime.
If you use a scoped allocator (a-la alloca) as a template parameter, a decent compiler could completely optimize it away. It is very hard (although not impossible) to do the same if you have a runtime indirection.
Perhaps there are some developments with STL allocators that I don't know about. But after the optimization it still uses an allocator, right?
Not necessarily. If the allocator is just reusing a region of stack memory, what's left of the allocator might just be a couple of pointer operations. Same thing for arena allocators.
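To make the "couple of pointer operations" concrete, here is a toy arena allocator, invented purely for illustration (it ignores alignment and exhaustion, and supplies only part of the interface a 2003-era standard container requires): with the allocator type known statically, inlining can collapse each allocation to a pointer bump.

    #include <cstddef>

    // Toy arena allocator, illustration only. A real STL-compatible allocator
    // also needs rebind, the pointer/reference typedefs, construct/destroy,
    // and must handle alignment and running out of space.
    template<class T>
    struct arena_allocator
    {
        typedef T value_type;

        static char  buffer_[64 * 1024];  // static storage, shared by design
        static char* next_;

        T* allocate(std::size_t n)
        {
            char* p = next_;
            next_ += n * sizeof(T);       // the whole "allocation": one add
            return reinterpret_cast<T*>(p);
        }
        void deallocate(T*, std::size_t) {} // arena is reclaimed wholesale
    };

    template<class T> char  arena_allocator<T>::buffer_[64 * 1024];
    template<class T> char* arena_allocator<T>::next_ = arena_allocator<T>::buffer_;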
But I admit that an allocator in my design (with an allocated_integer) would have to be set first at runtime, otherwise an error is generated. But users that don't need another allocator, just use the base class integer, and don't bother with any allocator.
You hardly bother with an allocator when using std::vector, but it is still there in case you need it.
In my design it is also possible to share allocators, as they are static.
There is nothing in the Allocator concept that prevents it from being shared. In fact most current allocators are static and shared, because it is not guaranteed that stateful allocators are correctly handled by the standard library (stateful allocators will be guaranteed to work in C++0x and already do work with some standard library implementations and some boost containers). -- gpd

Probably you would use conversion operators.
Not necessarily, you just make your operators templated on the policy. No need to do any conversions.
But runtime polymorphism allows basetype pointers to derived type objects, and still it works (through the vtable). So runtime polymorphism can never be fully replaced by something else.
Of course you need runtime indirection if you need dynamic indirection :). But 99.9% of the time you do not need it. When you need it, it is easy to add.
I'm not so sure it will be 0.01%. This integer will be used by MANY users, and you try to explain to a user that it does not work once you use a base pointer to a derived object. This will be one of the most important classes of C++, and I want it to work ALWAYS. Therefore in my opinion in this case there is no alternative for runtime polymorphism. Regards, Maarten.

On Mon, Mar 31, 2008 at 9:22 PM, Maarten Kronenburg <M.Kronenburg@inter.nl.net> wrote:
Probably you would use conversion operators.
Not necessarily, you just make your operators templated on the policy. No need to do any conversions.
But runtime polymorphism allows basetype pointers to derived type objects, and still it works (through the vtable). So runtime polymorphism can never be fully replaced by something else.
Of course you need runtime indirection if you need dynamic indirection :). But 99.9% of the time you do not need it. When you need it, it is easy to add.
I'm not so sure it will be 0.01%. This integer will be used by MANY users, and you try to explain to a user that it does not work once you use a base pointer to a derived object.
Uh? Users do not expect to have a base class for integers, why should they expect it for multiprecision integers? It is a value type, why would you expect to treat it polymorphically?
This will be one of the most important classes of C++, and I want it to work ALWAYS. Therefore in my opinion in this case there is no alternative for runtime polymorphism.
I'm still far from convinced. But anyways, I'm not the one implementing it, so I have little say :) At most I'll be able to voice my concerns at review time. -- gpd

"Giovanni Piero Deretta" wrote in message
On Mon, Mar 31, 2008 at 9:22 PM, Maarten Kronenburg wrote:
Probably you would use conversion operators.
Not necessarily, you just make your operators templated on the policy. No need to do any conversions.
But runtime polymorphism allows basetype pointers to derived type objects, and still it works (through the vtable). So runtime polymorphism can never be fully replaced by something else.
Of course you need runtime indirection if you need dynamic indirection :). But 99.9% of the time you do not need it. When you need it, it is easy to add.
I'm not so sure it will be 0.01%. This integer will be used by MANY users, and you try to explain to a user that it does not work once you use a base pointer to a derived object.
Uh? Users do not expect to have a base class for integers, why should they expect it for multiprecision integers? It is a value type, why would you expect to treat it polymorphically?
In my opinion an unsigned integer is a, works like a, and can be substituted for an integer, and so is a modular integer, and as far as I'm concerned also an allocated integer. This is why I chose derivation, and then with the base pointer to derived object (as quoted above) to runtime polymorphism. And in my opinion an integer is not a container, so I see no reason to use templates.
This will be one of the most important classes of C++, and I want it to work ALWAYS. Therefore in my opinion in this case there is no alternative for runtime polymorphism.
I'm still far from convinced. But anyways, I'm not the one implementing it, so I have little say :) At most I'll be able to voice my concerns at review time.
See above. Regards, Maarten.

AMDG
Maarten Kronenburg wrote:
In my opinion an unsigned integer is a, works like a, and can be substituted for an integer, and so is a modular integer, and as far as I'm concerned also an allocated integer. This is why I chose derivation, and then with the base pointer to derived object (as quoted above) to runtime polymorphism.
I strongly disagree.
1) An unsigned integer only works like an integer for non-mutating operations. If clients take parameters of const integer&, sure, unsigned_integer will work fine in this context. Treating non-const integers polymorphically is a very bad idea, though. Code that does so has to be very carefully designed to make sure that it can handle all the combinations. If you have to use inheritance, make both integer and unsigned_integer inherit from a common base.
2) I would expect an integer class to be a value object. virtual functions don't play well with value objects, because to prevent slicing, everything has to be held by pointer.
3) unsigned_integer + unsigned_integer should return an unsigned_integer. There is no way for the result to be negative. This is even more glaring for modular_integer.
And in my opinion an integer is not a container, so I see no reason to use templates.
??? In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Maarten Kronenburg wrote:
In my opinion an unsigned integer is a, works like a, and can be substituted for an integer, and so is a modular integer, and as far as I'm concerned also an allocated integer. This is why I chose derivation, and then with the base pointer to derived object (as quoted above) to runtime polymorphism.
I strongly disagree.
1) An unsigned integer only works like an integer for non-mutating operations. If clients take parameters of const integer&, sure, unsigned_integer will work fine in this context. Treating non-const integers polymorphically is a very bad idea, though. Code that does so has to be very carefully designed to make sure that it can handle all the combinations. If you have to use inheritance, make both integer and unsigned_integer inherit from a common base.
2) I would expect an integer class to be a value object. virtual functions don't play well with value objects, because to prevent slicing, everything has to be held by pointer.
3) unsigned_integer + unsigned_integer should return an unsigned_integer. There is no way for the result to be negative. This is even more glaring for modular_integer
+1 agreement from me too.
And in my opinion an integer is not a container, so I see no reason to use templates.
That's irrelevant IMO. For that matter, a value class doesn't have to be a template either - except possibly for the allocator parameter if required. And come to that, if the object performs allocation it is a container anyway! John.

"Steven Watanabe" wrote in message
AMDG
Maarten Kronenburg wrote:
In my opinion an unsigned integer is a, works like a, and can be substituted for an integer, and so is a modular integer, and as far as I'm concerned also an allocated integer. This is why I chose derivation, and then with the base pointer to derived object (as quoted above) to runtime polymorphism.
I strongly disagree.
1) An unsigned integer only works like an integer for non-mutating operations. If clients take parameters of const integer&, sure, unsigned_integer will work fine in this context. Treating non-const integers polymorphically is a very bad idea, though. Code that does so has to be very carefully designed to make sure that it can handle all the combinations. If you have to use inheritance, make both integer and unsigned_integer inherit from a common base.
In my design the integer class is the value class, which can also be used as data member of other (possibly template) classes, and the unsigned_integer class is polymorphically derived so that a base pointer to a derived object still works. So in my opinion this design has the best of both worlds. But there are of course many other ways to do it with other pros and cons. But in my opinion these days templates are used in designs by default, and that's not a good thing. Users cannot derive from the STL string because it has no virtual destructor, but for that class that scenario was not so important. For the class integer I think we cannot afford to overlook this. Don't look at me if somewhere in the future this will bubble up.
2) I would expect an integer class to be a value object. virtual functions don't play well with value objects, because to prevent slicing, everything has to be held by pointer.
As mentioned the integer base class can be used as value class, the only price is the unused vtable.
3) unsigned_integer + unsigned_integer should return an unsigned_integer. There is no way for the result to be negative. This is even more glaring for modular_integer
No, what about unsigned_integer - unsigned_integer? That can be negative!
And in my opinion an integer is not a container, so I see no reason to use templates.
???

In large-scale computation code, fine control of allocation is a must. The ability to have the major data structures with their hand-tuned allocation strategies freely intermix with "casual" variables and constants in expressions, as parameters, etc is pretty important. Bleeding-edge factorization algorithms for instance allocate large numbers of, uhm, numbers all of which have roughly the same number of bits. Cache-and-paging efficient allocation for these numbers can make the difference between success and failure. But you'd like to be able to call functions that were written without awareness of your custom allocation strategy. The concerns for both interoperability and efficiency are inescapable. I believe the interface should allow for runtime polymorphism but be constructed so that a decent optimizing compiler can "almost always" infer the correct static call in code that is not intentionally using runtime polymorphism. That may take some template specializations but it would be worth it. Finally, how does this relate to gmp? Are we trying to work around licensing issues or is it an interface definition issue? Where the user has a tested and tuned gmp implementation, can the new library be configured as a wrapper for it?

On Mon, Mar 31, 2008 at 8:59 PM, Stephen Nuchia <snuchia@statsoft.com> wrote:
In large-scale computation code, fine control of allocation is a must. The ability to have the major data structures with their hand-tuned allocation strategies freely intermix with "casual" variables and constants in expressions, as parameters, etc is pretty important. Bleeding-edge factorization algorithms for instance allocate large numbers of, uhm, numbers all of which have roughly the same number of bits. Cache-and-paging efficient allocation for these numbers can make the difference between success and failure.
I do not work in the numerical computation field, but of course, agree in principle.
But you'd like to be able to call functions that were written without awareness of your custom allocation strategy.
If they have been written in the spirit of generic programming (and performance is a must), they will be templated on the numeric type, so not only can you pass multiprecision integers using custom allocators, you could even pass *different* multiprecision integer classes or even plain int. If performance is not a problem (I/O maybe?), then you could use an external polymorphic wrapper to erase the allocator type. A boost.multiprecision could even provide it out of the box. Really, it is nothing new. Boost.function does it for function objects, adobe any_iterator does it for iterators. It is not hard to imagine an any_integer class.
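A sketch of that last point, with every name invented (this is the classic external type-erasure pattern, not anything from the proposed library):

    #include <memory>

    // Invented any_integer: erases the concrete multiprecision type (and with
    // it the allocator parameter) behind a small virtual interface, the same
    // way boost::function erases the type of a callable.
    class any_integer
    {
        struct concept_t
        {
            virtual ~concept_t() {}
            virtual void add(const concept_t& rhs) = 0;
            virtual concept_t* clone() const = 0;
        };

        template<class MpInt>              // any concrete integer instantiation
        struct model_t : concept_t
        {
            MpInt value;
            explicit model_t(const MpInt& v) : value(v) {}
            void add(const concept_t& rhs)
            {
                // simplification: assumes both sides wrap the same MpInt type;
                // a full version would convert between representations
                value += static_cast<const model_t&>(rhs).value;
            }
            concept_t* clone() const { return new model_t(*this); }
        };

        std::auto_ptr<concept_t> self_;    // 2008-era ownership; unique_ptr today

    public:
        template<class MpInt>
        any_integer(const MpInt& v) : self_(new model_t<MpInt>(v)) {}
        any_integer(const any_integer& other) : self_(other.self_->clone()) {}
        any_integer& operator+=(const any_integer& rhs)
        {
            self_->add(*rhs.self_);
            return *this;
        }
    };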
The concerns for both interoperability and efficiency are inescapable. I believe the interface should allow for runtime polymorphism but be constructed so that a decent optimizing compiler can "almost always" infer the correct static call in code that is not intentionally using runtime polymorphism.
A static, templated interface always allows for runtime polymorphism. The reverse is not the case.
That may take some template specializations but it would be worth it.
Finally, how does this relate to gmp? Are we trying to work around licensing issues or is it an interface definition issue? Where the user has a tested and tuned gmp implementation, can the new library be configured as a wrapper for it?
I have no opinion on this. It is a completely different issue. I think that gmp could be implemented on top of a templated interface just fine. Note that GMP might be a good argument against a templated interface. It works very well (in fact it is, AFAIK, the fastest multiprecision library) with runtime allocators. OTOH it might be a good argument in favor: custom, completely optimizable-away allocators might be the only way to go beyond GMP performance. -- gpd

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Kevin Sopp
Sent: 30 March 2008 11:04
To: boost@lists.boost.org
Subject: [boost] [mp_math] a multi precision signed integer library
I've been busy with porting the public domain C library libtommath to C++. The code is quite usable as it is and some simple documentation exists in the libs/mp_math/doc/html subfolder. I've implemented all C++ operators for the mp_int<> type and it can thus be used just like a built-in integer type. Some things like random number generation and primality testing have not been documented yet. Please have a look, I'm looking for feedback at the moment so that I can plan on how to proceed.
http://www.boost-consulting.com/vault/index.php?action=downloadfile&filename=mp_math_v01.zip&directory=Math%20-%20Numerics&
alternatively go to boost vault /math - numerics subfolder and download mp_math_v01.zip
Of course, this is interesting in that Boost 'needs' an arbitrary precision arithmetic package in its library. John Maddock's Math Toolkit would not have been possible without NTL, a GPL library providing similar facilities, to calculate high accuracy coefficients and test values. And the Math Toolkit can be used with NTL (with some very minor modifications) by those who can accept the GPL license terms. (Aside - we have asked the author of NTL, Victor Shoup, to change the licence terms of at least the basic int and float facilities to Boost - but without success so far.) But it would be much nicer to have a package that fits seamlessly into the Boost package.
I have not looked at this in detail but it looks promising for the integer requirements. I am not qualified to judge the compromises between design and efficiency, as mentioned by Maarten Kronenburg, but this library seems to fit in with the header-only and template/container design used by most Boost libraries. But there have been several unfinished big-integer attempts and I think you need to say why you think this one is better/more nearly finished. The tests are fine, but they are no substitute for a lot more user examples, and discussion of how to use the fancier features is vital.
However it does not address the floating point needs - yet. In order to be *really* useful, an arbitrary precision floating-point library is also needed, and preferably, for better speed, 128 (and perhaps 256) bit fixed precision versions too. But perhaps you are tired from your work so far ;-) Google Summer of Code project?
Paul
PS (A cosmetic detail is that none of the files include the Boost License, but that will be easily fixed. It might be desirable to have an explicit agreement to these terms for any of Tom's C code used or referenced too.)
PPS A check of your simplest demo program worked with MSVC 2003, but produced an off-putting and worrying blizzard of warnings even at the default level 3 - these could (and should) be easily fixed by explicit casting.
---
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539561830 & SMS, Mobile +44 7714 330204 & SMS
pbristow@hetp.u-net.com

On Mon, Mar 31, 2008 at 3:57 PM, Paul A Bristow <pbristow@hetp.u-net.com> wrote:
Of course, this is interesting in that Boost 'needs' an arbitrary precision arithmetic package in its library.
I'm glad that you see its value, I got a little worried after no one proposed a bigint library for GSoC this year ;-) My original motivation for starting this library was that I need it to add anything more interesting to the crypto library. And then I found libtommath which made it easy to write a bigint library in a short amount of time. Although there are many differences now between my code and his.
I have not looked at this in detail but it looks promising for the integer requirements. I am not qualified to judge the compromises between design and efficiency, as mentioned by Maarten Kronenburg, but this library seems to fit in with the header-only and template/container design used by most Boost libraries.
It was my intent to design it in a way that could be standardized.
But there have been several unfinished big-integer attempts and I think you need to say why you think this one is better/more nearly finished.
I have not seen other attempts except for big_radix_whole.tar.bz2 in the vault which implements arbitrary precision unsigned integers. One large advantage of my code is that it is quite simple, which makes it easy to understand and maintain. Now I want to work on polishing/finishing this library and then report back when it is basically ready for review.
However it does not address the floating point needs - yet. In order to be *really* useful, an arbitrary precision floating-point library is also needed, and preferably, for better speed, 128 (and perhaps 256) bit fixed precision versions too. But perhaps you are tired from your work so far ;-) Google Summer of Code project?
Luckily, Tom St Denis, the author of libtommath, has already written an arbitrary precision floating point package - again all code is in the public domain. Though he writes that it is not done yet, it sounds like a good starting point for an mp_float type. Personally, I don't have any interest at the moment to start this project, I have other things on my TODO list.
PS (A cosmetic detail is that none of the files include the Boost License but that will be easily fixed. It might be desirable to have an explicit agreement to these terms for any of Tom's C code used or referenced too.)
This is by design as all code and documentation I wrote for this library is in the public domain in the spirit of libtommath.
PPS A check of your simplest demo program worked with MSVC 2003, but produced an off-putting and worrying blizzard of warnings even at the default level 3 - these could (and should) be easily fixed by explicit casting.
Okay, I have only tested the code with gcc. Can you send me your compiler output privately? Thanks for your feedback it gives me the right motivation to continue with this library. Kevin

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Kevin Sopp
Sent: 31 March 2008 15:55
To: boost@lists.boost.org
Subject: Re: [boost] [mp_math] a multi precision signed integer library
PS (A cosmetic detail is that none of the files include the Boost License but that will be easily fixed. It might be desirable to have an explicit agreement to these terms for any of Tom's C code used or referenced too.)
This is by design as all code and documentation I wrote for this library is in the public domain in the spirit of libtommath.
In order to keep corporate lawyers sleeping easily at night, Boost has a policy of including the Boost license terms within ALL files. The effect is making it public domain so you are not restricting its use at all. Users worried about IP rights can be quite clear what the license terms are for all Boost stuff. But this is a trivial detail that can be fixed easily later.
PPS A check of your simplest demo program worked with MSVC 2003, but produced an off-putting and worrying blizzard of warnings even at the default level 3 - these could (and should) be easily fixed by explicit casting.
Okay, I have only tested the code with gcc. Can you send me your compiler output privately?
Yes will do.
Thanks for your feedback it gives me the right motivation to continue with this library.
I hope you can get some positive feedback from others too - there was a lot of discussion on Maarten's proposal - heated at times ;-) Good luck!
Paul
---
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539561830 & SMS, Mobile +44 7714 330204 & SMS
pbristow@hetp.u-net.com

AMDG
Kevin Sopp wrote:
Hi all, I've been busy with porting the public domain C library libtommath to C++. The code is quite usable as it is and some simple documentation exists in the libs/mp_math/doc/html subfolder. I've implemented all C++ operators for the mp_int<> type and it can thus be used just like a built-in integer type. Some things like random number generation and primality testing have not been documented yet. Please have a look, I'm looking for feedback at the moment so that I can plan on how to proceed.
Since there is a multiply_by_2 function, I wonder whether it would make sense to allow multiplication by mpl::integral_c's and optimize when the value is a special case like a power of 2. In Christ, Steven Watanabe
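One possible reading of that suggestion, sketched with invented names (it assumes only that the integer type supports *= and <<=, which the posted library claims via its full operator set):

    #include <boost/mpl/integral_c.hpp>

    // Invented sketch of Steven's idea: when the multiplier is a compile-time
    // constant, dispatch at compile time and turn power-of-two multiplies
    // into shifts.
    template<long N>
    struct is_pow2
    {
        static const bool value = N > 0 && (N & (N - 1)) == 0;
    };

    template<long N, bool Pow2 = is_pow2<N>::value>
    struct mul_by_constant                 // generic path: ordinary multiply
    {
        template<class MpInt>
        static void apply(MpInt& x) { x *= N; }
    };

    template<long N>
    struct mul_by_constant<N, true>        // power of two: shift instead
    {
        template<class MpInt>
        static void apply(MpInt& x)
        {
            unsigned shift = 0;
            for (long v = N; v > 1; v >>= 1) ++shift; // log2(N), trivially cheap
            x <<= shift;
        }
    };

    template<class MpInt, class T, T N>
    MpInt& operator*=(MpInt& x, boost::mpl::integral_c<T, N>)
    {
        mul_by_constant<static_cast<long>(N)>::apply(x);
        return x;
    }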
participants (7)
- Giovanni Piero Deretta
- John Maddock
- Kevin Sopp
- Maarten Kronenburg
- Paul A Bristow
- Stephen Nuchia
- Steven Watanabe