
On Mon, Mar 31, 2008 at 8:10 PM, Maarten Kronenburg <M.Kronenburg@inter.nl.net> wrote:
"Giovanni Piero Deretta" wrote in message
On Mon, Mar 31, 2008 at 6:39 PM, Maarten Kronenburg
wrote:
"Kevin Sopp" wrote in message
Hi Maarten,
On Sun, Mar 30, 2008 at 10:13 PM, Maarten Kronenburg
wrote:
Just for your info I designed another interface, see document N2143 on:
http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/#mailing2007-01
I knew about your document after I searched the internet for ideas on how to design the interface of the mp_int class template.
The root question here seems to be: do you use runtime polymorphism (virtual functions) or compile time polymorphism (templates)? My argument is that as this class is so close to the hardware, and performance is so important, runtime polymorphism will in this case provide the runtime flexibility needed.
I never really understood why you chose this approach; it is really different from the rest of the standard library. I understand that you achieve interoperability with the other integer types, but this could be hardcoded once all integer types to be standardized are known. Also, I think that interoperability of the integer types is a minor concern and as such should not govern the major design decision.
Interoperability is a major concern, because it is also available for the base type ints: int x = -5; unsigned int y = 10; x -= y; y -= x; etc.
You can easily make two multi precision integers or policies using different allocators interoperate:
Int<signed, my_alloc> x = -5; Int<unsigned, your_alloc> y = 10; x -= y; y -= x;
This can be almost trivially made to work, I think.
In case of x + y, you just need to decide which combination of rhs and lhs policies to use.
Probably you would use conversion operators.
Not necessarily, you just make your operators templated on the policy. No need to do any conversions.
But runtime polymorphism allows base-type pointers to derived-type objects, and the operations still work (through the vtable). So runtime polymorphism can never be fully replaced by something else.
Of course you need runtime indirection if you need dynamic indirection :). But 99.9% of the time you do not need it, and when you do, it is easy to add. BTW, note that std::function and std::shared_ptr use dynamic indirection for their allocators because it basically comes for free:
- std::function needs to do a dynamic dispatch anyway to copy objects, so doing type erasure on the allocator is basically free.
- std::shared_ptr needs dynamic dispatch to handle pointers to incomplete types and pointers to non-virtual bases, so again, the allocator handling comes for free.
Also, in this class you only need to invoke the allocator on construction and on destruction, so not doing type erasure wouldn't really buy you much.
The STL is made with templates, and rightly so, because containers have template parameters (e.g. what type they should contain). But this does not mean that other designs should be "templatized". Sometimes programmers need runtime polymorphism to achieve runtime flexibility, and in my opinion the integer class is an example.
It is easy to add runtime polymorphism to a statically polymorphic design. It is impossible to do it the other way around.
This would mean changing the design. Let's do it right from the beginning.
Well, having a templated allocator increases both flexibility and performance, so IMHO it *is* the right design.
In my opinion my design is the right one, but of course anyone is free to use another.
Of course.
In your design the algorithms will probably end up in headers (just like the STL), while my algorithms will end up in DLLs. In other words: my design considers the allocator and the traits as implementation details (although in my design it is possible to change the allocator dynamically), while your design considers these as design parameters.
The traits parameter is totally implementation defined. I included it because I wanted to give the user a way to customize some of the internal workings of the class; usually in C libraries this is done via macro definitions at compile time. Most users don't need to bother, but if you're a power user with very large calculations you have at least a few knobs to fine-tune the internals. There was almost never a question in my mind about the allocator parameter. After all, it would be strange not to have one for a class that will potentially allocate a large amount of memory.
In my opinion implementation parameters should be kept out of the interface. I agree about the allocator; this is why in my design I have added one, only it is set at runtime.
If you use a scoped allocator (à la alloca) as a template parameter, a decent compiler can optimize it away completely. It is very hard (although not impossible) to do the same through a runtime indirection.
Perhaps there are some developments with STL allocators that I don't know about. But after the optimization it still uses an allocator, right?
Not necessarily. If the allocator is just reusing a region of stack memory, what's left of the allocator might just be a couple of pointer operations. Same thing for arena allocators.
But I admit that an allocator in my design (with an allocated_integer) would have to be set first at runtime, otherwise an error is generated. But users who don't need another allocator just use the base class integer and don't bother with any allocator.
You hardly bother with an allocator when using std::vector, but it is still there in case you need it.
In my design it is also possible to share allocators, as they are static.
There is nothing in the Allocator concept that prevents it from being shared. In fact most current allocators are static and shared, because it is not guaranteed that stateful allocators are correctly handled by the standard library (stateful allocators will be guaranteed to work in C++0x, and already do work with some standard library implementations and some Boost containers).

-- gpd