Yet another bignum type

This is something I've been working on for the past few months. It's in the Sandbox and I just uploaded it to the new Vault as "big_radix_whole.tar.bz2" under the "Math - Numerics" directory, i.e. <http://www.boost-consulting.com/vault/index.php?action=downloadfile&filename=big_radix_whole.tar.bz2&directory=Math%20-%20Numerics&>. The files might choke on some systems, since I used naked min & max and the test file has #pragma marks all over it. (My IDE function-lists all the test modules as "BOOST_AUTO_TEST_CASE()", without anything in the parentheses, so I needed the marks to navigate the huge file.) I'm using Xcode 2.4, with Apple's special GCC, on a Mac OS X 10.4.8 PowerPC system. The only documentation right now is the Doxygen comments.

It's a run-of-the-mill bignum type. It follows the philosophy of the STL containers in taking a separate template parameter for memory allocation. The type is radix-specific, with the radix also given as a template parameter. It's the simplest kind of bignum: an arbitrary-length unsigned integer class. (Signed-integer, fixed-point, floating-point, rational, etc., variants can all be built on top of this kind of bignum.) As such, the renditions of the traditional algorithms are fairly pure. The biggest issue from a mathematical standpoint is that the type can't be a proper ring, since it doesn't support additive inverses (besides zero). Every other arithmetic operation (that doesn't need negative values) can work. The work-around for subtraction is the addition of "subtract-absolutely" member functions, which compute the absolute value of the difference and return a "bool" indicating what the sign would have been.

Many of the operations are repeated, each with algorithms optimized for the different sizes of the parameters. Since the traditional method of multiplication is actually a fused add/multiply, member functions for such fused actions were added.
There are many variants for shifting up where additions/subtractions start, since that's faster than doing a separate shifting operation first. It also saves memory, which was an important goal in all the algorithms, along with the strong exception guarantee. Besides the asserts, there is generally no protection against pre-condition violations (except for exceptions on negative results, division by zero, and conversions).

There are commented-out plans for functions that do powers and (truncated) roots, especially squaring and square roots, and pseudo-reciprocals (for an N-digit value, find another M-digit value, with M <= N, closest to giving a product of Radix**2N; useful for implementing exact division by multiplying by the reciprocal, which is cheaper for huge N).

What other routines should be added? Any hints on how to do said routines? Should there be Serialization? Could Spirit and/or Regex be used for the I/O routines? Any compiler workarounds to be added?

-- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

On 11/25/06 8:51 AM, "Mathias Gaunard" <mathias.gaunard@etu.u-bordeaux1.fr> wrote:
Daryle Walker wrote:
This is something I've been working on for the past few months.
Why not wrap a known and robust library instead of making a new one?
1. AFAIK, all those known & robust libraries already have wrappers.
2. This type is more of a theoretical class than something that can compete against the "big boys". Having an adjustable radix alone is an indication of that.
3. I always thought that this idea was neat, and always wanted to make such a class ever since I heard of C++, even though I'm definitely not the first.

-- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

"Daryle Walker" wrote in message:

This is something I've been working on for the past few months. It's in the Sandbox and I just uploaded it to the new Vault as "big_radix_whole.tar.bz2" under the "Math - Numerics" directory, i.e. <http://www.boost-consulting.com/vault/index.php?action=downloadfile&filename=big_radix_whole.tar.bz2&directory=Math%20-%20Numerics&>.

For this an interface should be proposed first, to be accepted by the LWG; see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2020.pdf Revision 2 of this document will be submitted soon.

The files might choke on some systems, since I used naked min & max and the test file has #pragma marks all over it. (My IDE function-lists all the test modules as "BOOST_AUTO_TEST_CASE()", without anything in the parentheses, so I needed the marks to navigate the huge file.) I'm using Xcode 2.4, with Apple's special GCC, on a Mac OS X 10.4.8 PowerPC system. The only documentation right now is the Doxygen comments. It's a run-of-the-mill bignum type. It uses the philosophy of the STL containers to use a separate template parameter for the memory allocations. The type is radix-specific, also given as a template parameter. It's of the simplest bignum type, an arbitrary-length unsigned integer class. (Signed integers, fixed-point, floating-point, rational, etc., variants can all be built off of this kind of bignum.) As such, the renditions of the traditional algorithms are fairly pure. The biggest issue from a mathematical standpoint is that the type can't be a proper ring, since it doesn't support additive inverses (besides zero). Every other arithmetic operation (that doesn't need negative values) can work. The work-around for subtraction is the addition of "subtract-absolutely" member functions, which compute the absolute value of the difference, and return a "bool" indicating what the sign would have been.

The integer data should not be in an STL container, but in a contiguous memory block, so that an assembler module is possible for performance; see http://www.swox.com/gmp (see also my document referred to above).

Many of the operations are repeated, each with optimized algorithms for the different sizes of the parameters. Since the traditional method of multiplication is actually a fused-add/multiply, member functions for such fused actions were added. There are many variants for shifting-up where additions/subtractions start since it's faster than doing a separate shifting operation first. And it saves memory, which was an important goal in all the algorithms, along with the strong exception guarantee. Besides the asserts, there is generally no protection for pre-condition violations (except for exceptions for negative results, divides by zero, and conversions). There are commented-out plans for functions that do powers and (truncated) roots, especially squaring and square-roots, and pseudo-reciprocals (for an N-digit value, find another M-digit value, with M <= N, closest to giving a product of Radix**2N, useful to implement exact-division by multiply-by-reciprocal, which is cheaper for huge N). What other routines should be added? Any hints on how to do said routines? Should there be Serialization? Could Spirit and/or Regex be used for the I/O routines? Any compiler workarounds to be added?

This is called requirements (see my document referred to above). I'm happy to discuss it here with anyone. Regards, Maarten.

Maarten Kronenburg wrote:
For this an interface should be proposed first, to be accepted by the LWG, see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2020.pdf The Revision 2 of this document will be submitted soon.
<snip>
This is called requirements (see my document referred to above). I'm happy to discuss it here with anyone.
Maybe the upcoming Revision 2 already addresses it, and I really don't want to sound discouraging (in fact I appreciate your effort, and the fact that you provide an optimized assembler version is a very big plus), but anyway:

There is no excuse to have the synopsis dictate runtime polymorphism, here. Although a natural number is a special kind of integer, the integer should extend the functionality of the natural number and not vice versa. This scenario leads to (what I call) the "superset interface problem"; IOW, the base class contains the most extended implementation and derived classes can only polymorphically mask it or use (what Gamma et al. call) the Template Method pattern to hook into it. So, if we wanted to extend things consistently to have a rational class, we'd have to make it the base class. Further, there are contexts (such as e.g. implementing a scripting language that uses arbitrary precision arithmetic per se) where the performance penalty of virtual dispatch could be significant.

==> That kind of design doesn't work too well.

The easiest way out of our design troubles is to use converting constructors for widening conversion. Unfortunately we trade one performance problem for another - no cost for virtual dispatch but extra cost for allocation and copying, which might perform worse.

==> Better design but no better implementation.

The second easiest way is to use templates to create an integer view for appropriate types that models the concept Integer.
    // pseudo code
    template<class LHS, class RHS>
    integer operator+ ( viewable_as_integer<LHS> const & lhs
                      , viewable_as_integer<RHS> const & rhs )
    {
        typename purified_integer<LHS>::type actual_lhs =
            purified_integer_view<LHS>(lhs.derived());
        typename purified_integer<RHS>::type actual_rhs =
            purified_integer_view<RHS>(rhs.derived());
        // perform operation on (actual_lhs, actual_rhs)
        // return result
    }

'purified_integer_view' is an identity function (by-reference) in case the argument is already an Integer. Otherwise it creates an integer view of its argument. After the purification actual_lhs and actual_rhs both model the Integer concept. This approach is already tricky to get entirely right. Note that I deduce 'viewable_as_integer<D>'. Every type convertible to an integer would inherit from this template specialized with its own type. I do it to restrict the arguments matched by the operator, here.

We can even go further by applying Expression Templates. This way we can collect lots of useful information about the expression and exploit it for optimization. Further, we wouldn't need to redundantly do the purify stuff - the evaluator could handle it instead. However, it's even trickier to get right.

==> Enforcing implementation details is bad. The synopsis can be kept (partially) unspecified and concepts (IOW, valid expressions and their semantics) should be documented instead.

Regards, Tobias

Maarten Kronenburg wrote:
For this an interface should be proposed first, to be accepted by the LWG, see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2020.pdf The Revision 2 of this document will be submitted soon.
<snip>
This is called requirements (see my document referred to above). I'm happy to discuss it here with anyone.
Maybe the upcoming Revision 2 already addresses it and I really don't want to sound discouraging (in fact I appreciate your effort and the fact that you provide an optimized assembler version is a very big plus) but anyway.
There is no excuse to have the synopsis dictate runtime polymorphism, here:
Although a natural number is a special kind of integer, the integer should extend the functionality of the natural number and not vice versa. This scenario leads to (what I call) the "superset interface problem"; IOW, the base class contains the most extended implementation and derived classes can only polymorphically mask it or use (what Gamma et al. call) the Template Method pattern to hook into it. So, if we wanted to extend things consistently to have a rational class we'd have to make it the base class.

Thanks for the comments. The fact is that an unsigned integer is an integer, and a modular integer is an integer, and an allocated integer is an integer. They have the same member functions and operators, only modified a little bit. And I would like to use pointers, so runtime polymorphism is required. What do you need more for derivation? A rational in my design would simply consist of two integer data members, the numerator and the denominator. I don't see any problem here.

Further, there are contexts (such as e.g. implementing a scripting language that uses arbitrary precision arithmetic per se) where the performance penalty of virtual dispatch could be significant.

I have measured the performance hit of the vtable for the simple prefix operator++ and it was negligible (although of course I did not do it for all compilers).
==> That kind of design doesn't work too well.
The easiest way out of our design troubles is to use converting
constructors for widening conversion. Unfortunately we trade one performance problem for another - no cost for virtual dispatch but therefore extra cost for allocation and copying, which might perform worse.
==> Better design but no better implementation.
The second easiest way is to use templates to create an integer view for
appropriate types that models the concept Integer.
    // pseudo code
    template<class LHS, class RHS>
    integer operator+ ( viewable_as_integer<LHS> const & lhs
                      , viewable_as_integer<RHS> const & rhs )
    {
        typename purified_integer<LHS>::type actual_lhs =
            purified_integer_view<LHS>(lhs.derived());
        typename purified_integer<RHS>::type actual_rhs =
            purified_integer_view<RHS>(rhs.derived());
        // perform operation on (actual_lhs, actual_rhs)
        // return result
    }
This is very nice but in my opinion does not add very much to the simple binary operators for integers that I gave. Also take into account the evaluation problem of expressions with unsigned results but signed temporaries that I mentioned in the Introduction -> Classes derived from class integer -> The class unsigned_integer. Equivalent mathematical expressions should give identical results.
'purified_integer_view' is an identity function (by-reference) in case the argument is already an Integer. Otherwise it creates an integer view of its argument. After the purification actual_lhs and actual_rhs both model the Integer concept.
This approach is already tricky to get entirely right. Note that I deduce 'viewable_as_integer<D>'. Every type convertible to an integer would inherit from this template specialized with its own type. I do it to restrict the arguments matched by the operator, here.
We can even go further by applying Expression Templates. This way we can collect lots of useful information about the expression and exploit it for optimization. Further, we wouldn't need to redundantly do the purify stuff - the evaluator could handle it instead. However, it's even trickier to get right.
The expression templates you refer to should be generic for any arithmetic type, not just for the integer, and should be proposed in a separate proposal.
==> Enforcing implementation details is bad. The synopsis can be kept (partially) unspecified and concepts (IOW valid expressions and their semantics) should be documented instead.
In this case, as this is a basic type to be implemented and used in performance-critical environments, the details of the interface should be hammered out as much as possible. Below is an example of usage that I have added to the document (hope it comes through readable). Regards, Maarten.

    class heap_allocator : public integer_allocator
    {
    private:
        HANDLE the_heap;
    public:
        heap_allocator() { the_heap = HeapCreate( 0, 0, 0 ); }
        ~heap_allocator() { HeapDestroy( the_heap ); }
        void * allocate( integer::size_type n )
        { return HeapAlloc( the_heap, 0, n ); }
        void * reallocate( void * p, integer::size_type n )
        { return HeapReAlloc( the_heap, 0, p, n ); }
        void deallocate( void * p )
        { HeapFree( the_heap, 0, p ); }
    };

    heap_allocator global_heap_a;  // create heap
    heap_allocator global_heap_b;  // create second heap

    int main()
    {
        modular_integer<1>::set_modulus( 16 );
        modular_integer<2>::set_modulus( 11 );
        allocated_integer<1>::set_allocator( & global_heap_a );
        allocated_integer<2>::set_allocator( & global_heap_b );
        integer x( 4 );
        modular_integer<1> z( 10 );
        allocated_integer<1> a( 3 );
        integer * p = new allocated_integer<2>( -7 );
        integer * q = new modular_integer<2>( 3 );
        z += x * a;    // z becomes 6 = 22 mod 16
        x -= z;        // x becomes -2
        a *= x;        // a becomes -6
        *q += a;       // *q becomes 8 = -3 mod 11
        a.swap( *p );  // swaps a and *p by value
        // ...
        delete p;
        delete q;
        modular_integer<1>::set_modulus( 0 );  // cleanup static memory
        modular_integer<2>::set_modulus( 0 );  // cleanup static memory
    }

Maarten Kronenburg wrote:
The fact is that an unsigned integer is an integer, and a modular integer is an integer, and an allocated integer is an integer. They have the same member functions and operators, only modified a little bit.
Sounds like parametrization - a perfect scenario to use a class template...
And I would like to use pointers, so runtime polymorphism is required.
Why do you need runtime polymorphism to use pointers? Use pointers to a polymorphic base to battle the redundancy you invented by not using a template in the first place?
What do you need more for derivation?
A compelling reason. And while we're at it: I see no reason for the allocator not to be a template, either. You also wouldn't need an extra allocated_integer class. Not to mention inconsistency with the STL (yeah, I notice there's an additional reallocate member function in your allocator, but I guess it's better to add it to std::allocator than to require a new, polymorphic one).
A rational in my design would simply consist of two integer data members, the numerator and the denominator. I don't see any problem here.
An integer _is_a_ rational while the rational has more functionality. My point was that in cases like this one an "is-a" relationship does not justify runtime polymorphism, because it makes things inflexible.
Further, there are contexts (such as e.g. implementing a scripting language that uses arbitrary precision arithmetic per-se) where the performance penalty of virtual dispatch could be significant.
I have measured the performance hit of the vtable for the simple prefix operator++ and it was negligible (although of course I did not do it for all compilers).
What did you compare it to? The built-in type int? In that case I'd find it very hard to believe that the overhead will be negligible (regardless of the compiler and unless you have unacceptable overhead elsewhere)!
This is very nice but in my opinion does not add very much to the simple binary operators for integers that I gave.
No, it adds nothing. But can potentially remove inflexibility and overhead ;-).
Also take into account the evaluation problem of expressions with unsigned results but signed temporaries that I mentioned in the Introduction -> Classes derived from class integer -> The class unsigned_integer.
I don't see any problem: All it takes is to leave out the overloads for unsigned and make the assignment operators of the unsigned throw when an implicit narrowing conversion (which I personally find questionable in itself) fails to fit.
Equivalent mathematical expressions should give identical results.
<snip>
The expression templates you refer to should be generic for any arithmetic type, not just for the integer, and should be proposed in a separate proposal.
Maybe. But it certainly is the way to make equivalent mathematical expressions give identical results (even without paying for it): http://tinyurl.com/qtyhh // experimental design draft I wouldn't dare to make it a standard proposal, though.
In this case, as this is a basic type to be implemented and used under performance-critical environment, the details of the interface should be hammered out as much as possible.
Especially if things are performance-critical you should either leave space for optimization or optimize it to the end. A fully optimized, generic component should not be optimized for a particular case. I believe that your implementation rocks for large numbers, but I'm pretty sure there is still plenty of room for improvement when benchmarking against a built-in integer. <snip>
    ...

    int main()
    {
        modular_integer<1>::set_modulus( 16 );
        modular_integer<2>::set_modulus( 11 );
        allocated_integer<1>::set_allocator( & global_heap_a );
        allocated_integer<2>::set_allocator( & global_heap_b );
        integer x( 4 );
        modular_integer<1> z( 10 );
        allocated_integer<1> a( 3 );
        integer * p = new allocated_integer<2>( -7 );
        integer * q = new modular_integer<2>( 3 );
That mix of tagging plus static functions is weird, at least (not to say scary). I guess I might know what you're up to, though. What about a template parameter that specifies the type of some Singleton for the modulus and a template & ctor parameter for the allocator (a la STL)? Regards, Tobias

"Tobias Schwinger" wrote in message
Maarten Kronenburg wrote:
The fact is that an unsigned integer is an integer, and a modular integer is an integer, and an allocated integer is an integer. They have the same member functions and operators, only modified a little bit.
Sounds like parametrization - a perfect scenario to use a class template...
Templates provide compile-time polymorphism, but in this case run-time polymorphism is required (see my example of usage below). If I have a pointer to an integer, I don't care how it's allocated, I just want it to behave at runtime like an integer. This is what I think runtime polymorphism is about.
And I would like to use pointers, so runtime polymorphism is required.
Why do you need runtime polymorphism to use pointers?
Use pointers to a polymorphic base to battle the redundancy you invented
by not using a template in the first place?
What do you need more for derivation?
A compelling reason.
And while we're at it: I see no reason for the allocator not to be a template, either. You also wouldn't need an extra allocated_integer class. Not to mention inconsistency with the STL (yeah, I notice there's an additional reallocate member function in your allocator, but I guess it's better to add it to std::allocator than to require a new, polymorphic one).
The reason for derivation I already gave, also with the example of usage. The STL allocator has a few drawbacks. How do you add two strings with different allocators? As they become different template types (and not derived), you will get a compile-time error. This is not a problem for different char traits (which are impossible to concatenate anyway), but adding a differently allocated string could be possible. Another problem: when serialization of memory access is needed, how to do that with the STL allocator? For the integer_allocator that is not a problem: just add a mutex to the static allocator object. More allocated integer classes can even share the same static allocator object. Now I am not saying that the STL string or allocator design is wrong, because it is used in different contexts, but the priorities for the integer may be a bit different.
A rational in my design would simply consist of two integer data members, the numerator and the denominator. I don't see any problem here.
An integer _is_a_ rational while the rational has more functionality. My point was that in cases like this one an "is-a" relationship does not justify runtime polymorphism, because it makes things inflexible.
In my opinion runtime polymorphism makes the integer types more flexible to use at runtime. Templates make things more flexible at compile time.
Further, there are contexts (such as e.g. implementing a scripting language that uses arbitrary precision arithmetic per se) where the performance penalty of virtual dispatch could be significant.

I have measured the performance hit of the vtable for the simple prefix operator++ and it was negligible (although of course I did not do it for all compilers).

What did you compare it to? The built-in type int? In that case I'd find it very hard to believe that the overhead will be negligible (regardless of the compiler and unless you have unacceptable overhead elsewhere)!
This is very nice but in my opinion does not add very much to the simple binary operators for integers that I gave.
No, it adds nothing. But can potentially remove inflexibility and overhead ;-).
Also take into account the evaluation problem of expressions with unsigned results but signed temporaries that I mentioned in the Introduction -> Classes derived from class integer -> The class unsigned_integer.
I don't see any problem: All it takes is to leave out the overloads for unsigned and make the assignment operators of the unsigned throw when an implicit narrowing conversion (which I personally find questionable in itself) fails to fit.
Equivalent mathematical expressions should give identical results.
<snip>
The expression templates you refer to should be generic for any arithmetic type, not just for the integer, and should be proposed in a separate proposal.
Maybe. But it certainly is the way to make equivalent mathematical expressions give identical results (even without paying for it):
http://tinyurl.com/qtyhh // experimental design draft
I wouldn't dare to make it a standard proposal, though.
In this case, as this is a basic type to be implemented and used under performance-critical environment, the details of the interface should be hammered out as much as possible.
Especially if things are performance-critical you should either leave space for optimization or optimize it to the end. A fully optimized, generic component should not be optimized for a particular case. I believe that your implementation rocks for large numbers, but I'm pretty sure there is still plenty of room for improvement when benchmarking against a built-in integer.

No, I compared the prefix operator++ of an integer class that is not derived with virtual member functions and operators, with one that is. The difference should then be the effect of the vtable.
<snip>
    ...

    int main()
    {
        modular_integer<1>::set_modulus( 16 );
        modular_integer<2>::set_modulus( 11 );
        allocated_integer<1>::set_allocator( & global_heap_a );
        allocated_integer<2>::set_allocator( & global_heap_b );
        integer x( 4 );
        modular_integer<1> z( 10 );
        allocated_integer<1> a( 3 );
        integer * p = new allocated_integer<2>( -7 );
        integer * q = new modular_integer<2>( 3 );
That mix of tagging plus static functions is weird, at least (not to say
scary). I guess I might know what you're up to, though. What about a template parameter that specifies the type of some Singleton for the modulus and a template & ctor parameter for the allocator (a la STL)?
This is difficult to explain, but I will try it just the same. The integer (and its unsigned, modular and allocated variants) are basic types, just like for example the string is. Now I agree it is possible to use singletons and object factories etc., but should these also be used for a basic type like an integer? The integer is very close to the hardware, it is more like a driver. A driver is always an abstract class interface with virtual functions. The integer should also be designed close to the hardware, with just the simple language elements. Hope this gives an answer to your question. Regards, Maarten.
Regards,
Tobias

Maarten Kronenburg wrote:
"Tobias Schwinger" wrote in message
Maarten Kronenburg wrote:
And I would like to use pointers, so runtime polymorphism is required.
Why do you need runtime polymorphism to use pointers?
Use pointers to a polymorphic base to battle the redundancy you invented by not using a template in the first place?
What do you need more for derivation?
A compelling reason.
And while we're at it: I don't see no reason for the allocator not to be a template, either. You also wouldn't need an extra allocated_integer class. Not to mention inconsistency with the STL (yeah, I notice there's an additional reallocate member function in your allocator, but I guess it's better to add it to std::allocator than to require a new, polymorphic one).
The reason for derivation I already gave, also with the example of usage.
Theorem: An STL allocator can be turned into an integer_allocator.

    template<class STL_Allocator>
    class adapted_stl_allocator : public integer_allocator
    {
        STL_Allocator & ref_that;
        // Or:
        // typename STL_Allocator::template rebind<unsigned>::other & ref_that;
        // to ensure T is always unsigned if that's what you want
    public:
        explicit adapted_stl_allocator(STL_Allocator & that)
            : ref_that(that)
        { }
        ~adapted_stl_allocator() { }
        void * allocate( integer::size_type n )
        { return this->ref_that.allocate(n, 0L); }
        // ...
    };

Now the expression "new adapted_stl_allocator<Alloc>(obj_alloc)" gives me an integer_allocator. Q.E.D.

So where is the reason for integer_allocator to be part of the interface, again?

However, a virtual function might not be the best thing to call from assembly level (because of ABI incompatibilities between different compilers), so you might prefer to do a different kind of type erasure to create freestanding functions instead:

    namespace detail
    {
        template<class Allocator>
        void * allocate(void * allocator, size_t size, void const * hint = 0L)
        {
            return static_cast<Allocator*>(allocator)->allocate(size,
                static_cast<typename Allocator::const_pointer>(hint));
        }
    }

Now the expression "& detail::allocate<Allocator>" is an ordinary function pointer. Its type is "void*(*)(void*,size_t,void const*)" (note that all the type information from the template arguments has been removed and that you have an address you can CALL, passing "& alloc" as the first argument).
The STL allocator has a few drawbacks. How do you add two strings with different allocators?
That's a property of std::string - not of the allocator. And there certainly is a way to make strings with different allocators addable without making the allocator polymorphic. In fact, I already presented more advanced techniques earlier in this discussion.
As they become different template types (and not derived), you will get a compile-time error. This is not a problem for different char traits (which are impossible to concatenate anyway), but adding a differently allocated string could be possible. Another problem: when serialization of memory access is needed, how to do that with the STL allocator?
Just implement it? Or better: use Boost.Pool and have a break ;-). STL allocators are often just handles on static allocators. But since they are held by value they are more flexible and you could also use them to implement e.g. SBO, which is not possible with static allocators (but quite attractive for arbitrary precision integers, I figure).
For the integer_allocator that is not a problem: just add a mutex to the static allocator object. More allocated integer classes can even share the same static allocator object.
There is no problem and this functionality is provided by Boost.Pool.
Now I am not saying that the STL string or allocator design is wrong, because it is used in different contexts, but the priorities for the integer may be a bit different.
Yeah, but memory allocation should stay memory allocation, and it might be hard to convince the committee that another, polymorphic allocator is required...
What did you compare it to? The built-in type int? In that case I'd find it very hard to believe that the overhead will be negligible (regardless of the compiler and unless you have unacceptable overhead elsewhere)!
No, I compared the prefix operator++ of an integer class that is not derived with virtual member functions and operators, with one that is. The difference should then be the effect of the vtable.
Debug mode of a ten year old compiler, eh? ;-) Fact is, dynamic calls come at a much higher cost than addition - and a decent, optimizing C++ compiler will eat away wrapper code around an int (almost - if not) entirely. If you dynamically allocate every int you might be right. But you shouldn't, because it is possible to eliminate some dynamic allocation (even for arbitrary precision integers) with SBO and/or expression templates. <snip>
    ...

    int main()
    {
        modular_integer<1>::set_modulus( 16 );
        modular_integer<2>::set_modulus( 11 );
        allocated_integer<1>::set_allocator( & global_heap_a );
        allocated_integer<2>::set_allocator( & global_heap_b );
        integer x( 4 );
        modular_integer<1> z( 10 );
        allocated_integer<1> a( 3 );
        integer * p = new allocated_integer<2>( -7 );
        integer * q = new modular_integer<2>( 3 );
That mix of tagging plus static functions is weird, at least (not to say scary). I guess I might know what you're up to, though. What about a template parameter that specifies the type of some Singleton for the modulus and a template & ctor parameter for the allocator (a la STL)?
This is difficult to explain, but I will try it just the same. The integer (and its unsigned, modular and allocated variants) are basic types, just like for example the string is.
AFAIK, the type string is just a typedef name for a template specialization...
Now I agree it is possible to use singleton and object factories etc.,
You are probably misunderstanding me. What I'm talking about is e.g:

    struct my_modulus
    {
        static integer const & value();
    private:
        // whatever
    };

Passed in as template argument:

    modular_integer< my_modulus >

You could also provide a template to allow, e.g:

    modular_integer< integer_constant<123,456,789,012,345,678,901,234> >
but should these also be used for a basic type like an integer? The integer is very close to the hardware, it is more like a driver. A driver is always an abstract class interface with virtual functions.
Yeah, but there are no unsigned drivers that inherit from fully implemented signed ones ;-).
The integer should also be designed close to the hardware, with just the simple language elements. Hope this gives an answer to your question.
I don't see how "close to hardware" can be an excuse for design flaws. How can I e.g. get a modular integer that uses an allocator? This case is begging for templates but you avoid them at the high cost of a weird and inflexible design. Why? Are you using an outdated and broken compiler? Are you afraid that your optimized implementation might not fit in? If so, update and/or discover the full potential of template-based techniques before jumping to conclusions. Your interface needs serious work, IMO. It's not meant offensive in any way and I really hope you find the time and energy to pull it off. Regards, Tobias

"Tobias Schwinger" wrote in message
This case is begging for templates but you avoid them at the high cost of a weird and inflexible design.
Why? Are you using an outdated and broken compiler? Are you afraid that your optimized implementation might not fit in?
If so, update and/or discover the full potential of template-based techniques before jumping to conclusions. Your interface needs serious work, IMO. It's not meant offensive in any way and I really hope you find the time and energy to pull it off.
In the book "C++ Coding Standards" by Herb Sutter and Andrei Alexandrescu, chapter 64 reads: "Blend static and dynamic polymorphism judiciously." In my opinion this is what I am trying to do. In my document I have added to the section Introduction - Design Decisions the following: "As an unsigned, modular or allocated integer is an integer, these are designed as (possibly template) classes derived from class integer with virtual member functions and operators, providing runtime polymorphism. Thus a pointer of type integer * may point to an unsigned, a modular or an allocated integer, which all behave like an integer (virtual methods) but with slightly different implementations. This may be compared with a driver, which is an abstract class interface with only virtual functions. The integer and a driver have in common that they are both very close to the hardware. This is also the reason that for this basic type only basic language elements are used, and not design patterns like singletons, object factories and smart pointers, which should be used at a higher abstraction level." In an earlier discussion I have mentioned that there is a scale between compile-time (static) polymorphism on the left to run-time (dynamic) polymorphism on the right. Then the following items can be ordered from left to right: containers strings integers drivers. Regards, Maarten.

Maarten Kronenburg wrote:
"As an unsigned, modular or allocated integer is an integer, these are designed as (possibly template) classes derived from class integer with virtual member functions and operators, providing runtime polymorphism. Thus a pointer of type integer * may point to an unsigned, a modular or an allocated integer, which all behave like an integer (virtual methods) but with slightly different implementations.
Pointer polymorphism is not important to have. If it really was, there would be a polymorphic base for sequences and strings in the standard library.
This may be compared with a driver, which is an abstract class interface with only virtual functions.
The unsigned integer inherits from the signed one. While this does model arithmetics correctly (a natural number is an integer, after all), it is the unsigned integer that has richer functionality. It's clumsy and not extensible to use inheritance in such a case. Allocation is orthogonal to all other properties, it applies to signed, unsigned, and modular integers - so this part is even worse.
The integer and a driver have in common that they are both very close to the hardware.
You said so already - and it doesn't make that lame excuse any better to repeat it :-).
This is also the reason that for this basic type only basic language elements are used, and not design patterns like singletons, object factories and smart pointers, which should be used at a higher abstraction level."
Why do you think so? What are "basic language elements"? What is an "object factory" (never heard of it)? Is there a design pattern called "smart pointer" (yeah, there are smart pointers but these are not design patterns, AFAIK)? Really interesting that having a fancy name for "struct with static function" makes it non-basic ;-)...
In an earlier discussion I have mentioned that there is a scale between compile-time (static) polymorphism on the left to run-time (dynamic) polymorphism on the right. Then the following items can be ordered from left to right: containers strings integers drivers.
No. Drivers are drivers. And strings and arbitrary precision integers are containers with special operations. It might feel like writing a driver to you - but these are details specific to your implementation that really should not have any impact on the interface you choose! Let's continue this discussion once you are really interested in improving the design. Regards, Tobias

Maarten Kronenburg wrote:
"As an unsigned, modular or allocated integer is an integer, these are designed as (possibly template) classes derived from class integer with virtual member functions and operators, providing runtime polymorphism. Thus a pointer of type integer * may point to an unsigned, a modular or an allocated integer, which all behave like an integer (virtual methods) but with slightly different implementations.
Pointer polymorphism is not important to have. If it really was, there would be a polymorphic base for sequences and strings in the standard library.

As I mentioned before, it could be possible to add two strings with different allocators (though it may be of low priority, lower than for integers).
This may be compared with a driver, which is an abstract class interface with only virtual functions.
The unsigned integer inherits from the signed one. While this does model arithmetics correctly (a natural number is an integer, after all), it is the unsigned integer that has richer functionality. It's clumsy and not extensible to use inheritance in such a case.

The normal and the unsigned integer have the same functionality, only the implementation is slightly different.
Allocation is orthogonal to all other properties, it applies to signed, unsigned, and modular integers - so this part is even worse.
The integer and a driver have in common that they are both very close to the hardware.
You said so already - and it doesn't make that lame excuse any better to repeat it :-).
This is also the reason that for this basic type only basic language elements are used, and not design patterns like singletons, object factories and smart pointers, which should be used at a higher abstraction level."

Why do you think so? What are "basic language elements"? What is an "object factory" (never heard of it)? Is there a design pattern called "smart pointer" (yeah, there are smart pointers but these are not design patterns, AFAIK)?

The book "Modern C++ Design, Generic Programming and Design Patterns Applied" by Andrei Alexandrescu describes them in the same part as "Components", but others may of course look at them differently.

Really interesting that having a fancy name for "struct with static function" makes it non-basic ;-)...
In an earlier discussion I have mentioned that there is a scale between compile-time (static) polymorphism on the left to run-time (dynamic) polymorphism on the right. Then the following items can be ordered from left to right: containers, strings, integers, drivers.

No. Drivers are drivers. And strings and arbitrary precision integers are containers with special operations. It might feel like writing a driver to you - but these are details specific to your implementation that really should not have any impact on the interface you choose!

Containers can be filled with any type (this is why they are templates), but integers can only be filled with binary data in one single contiguous memory block (see gmp). In my opinion this is very different.
Let's continue this discussion once you are really interested in improving the design.

Thanks for the discussion, it makes some things clear to me that I didn't see before. Regards, Maarten.
Regards, Tobias

On Wednesday, November 29, 2006, at 07:54AM, "Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote:
The unsigned integer inherits from the signed one. While this does model arithmetics correctly (a natural number is an integer, after all), it is the unsigned integer that has richer functionality. It's clumsy and not extensible to use inheritance in such a case.
The normal and the unsigned integer have the same functionality, only the implementation is slightly different.
I've heard this often enough since the beginning of this discussion, and imho it's a major source of confusion and problems in the design. Google "Liskov Substitution Principle", "ISA Relationship" or other similar keywords, for more information. The famous example is Shape/Rectangle/Square (read http://www.objectmentor.com/resources/articles/lsp.pdf for a good discussion). So I have to say this loud and clear: a natural number <NOT> "is a" </NOT> integer. It is not, no more than a modular one is. Try subtracting 1 from 0, and you will not get the same behavior in any of the three types. This violates the LSP (Liskov Subst. Princ. mentioned above). However, a const_unsigned_integer "IsA" const_integer, so long as you can't modify the value. Thus if you think of integer as a value semantic type, all is well in your relationships, but the presence of operator--, etc., dictates that unsigned_integer and integer should be unrelated classes. There is a hierarchy, equivalent to the famous example of Shape/Square/Rectangle, where (loosely speaking, and without syntactic rigor):

    class const_integer;
    // a natural number is an integer, when talking about value semantics
    class const_unsigned_integer : public const_integer;
    class integer : public const_integer;
    class unsigned_integer : public const_unsigned_integer;

But note that the fourth side of the rectangle (unsigned_integer : public integer) is *not* provided. The same goes even more drastically with modular_integer where some operations (operator<) don't make real sense. I think this was mentioned back in June, perhaps not in such details, but I remember some people objecting to the inheritance back then. I think you should think about this seriously, Maarten, since imho it's a major flaw in your design, not taking any position on allocators or other issues. I don't have a particular opinion about how to fix the proposal, but I just want this forum to be at least aware of the issues. Regards, -- Herve Bronnimann

"Herve Bronnimann" wrote in message
The unsigned integer inherits from the signed one. While this does model arithmetics correctly (a natural number is an integer, after all), it is the unsigned integer that has richer functionality. It's clumsy and not extensible to use inheritance in such a case.

The normal and the unsigned integer have the same functionality, only the implementation is slightly different.

I've heard this often enough since the beginning of this discussion, and imho it's a major source of confusion and problems in the design. Google "Liskov Substitution Principle", "ISA Relationship" or other similar keywords, for more information. The famous example is Shape/Rectangle/Square (read http://www.objectmentor.com/resources/articles/lsp.pdf for a good discussion).

So I have to say this loud and clear: a natural number <NOT> "is a" </NOT> integer. It is not, no more than a modular one is. Try subtracting 1 from 0, and you will not get the same behavior in any of the three types. This violates the LSP (Liskov Subst. Princ. mentioned above).

For determining whether an arithmetic result is positive or negative, you have to compute it first, and for determining whether a modular integer arithmetic result needs to be modulo reduced, you need to know the result first, which may not be in the modulo range. For this practical reason the unsigned and modular integer are built up from a normal integer. Software design is (in this case) a little more practical than pure mathematics, where an unsigned integer is always by definition positive and a modulo integer is always by definition in the modulo range. The ideal mathematical world can (in this case) not be achieved by real software, but only approached. Regards, Maarten.
However, a const_unsigned_integer "IsA" const_integer, so long as you can't modify the value. Thus if you think of integer as a value semantic type, all is well in your relationships, but the presence of operator--, etc. dictates that unsigned_integer and integer should be unrelated classes. There is a hierarchy, equivalent to the famous example of Shape/Square/Rectangle, where (loosely speaking, and without syntactic rigor):

    class const_integer;
    // a natural number is an integer, when talking about value semantics
    class const_unsigned_integer : public const_integer;
    class integer : public const_integer;
    class unsigned_integer : public const_unsigned_integer;

But note that the fourth side of the rectangle (unsigned_integer : public integer) is *not* provided. The same goes even more drastically with modular_integer where some operations (operator<) don't make real sense.

I think this was mentioned back in June, perhaps not in such details, but I remember some people objecting to the inheritance back then. I think you should think about this seriously, Maarten, since imho it's a major flaw in your design, not taking any position on allocators or other issues. I don't have a particular opinion about how to fix the proposal, but I just want this forum to be at least aware of the issues.

Regards, -- Herve Bronnimann

Maarten Kronenburg wrote:
"Herve Bronnimann" wrote in message
I've heard this often enough since the beginning of this discussion, and imho it's a major source of confusion and problems in the design. Google "Liskov Substitution Principle", "ISA Relationship" or other similar keywords, for more information. The famous example is Shape/Rectangle/Square (read http://www.objectmentor.com/resources/articles/lsp.pdf for a good discussion). So I have to say this loud and clear: a natural number <NOT> "is a" </NOT> integer. It is not, no more than a modular one is. Try subtracting 1 from 0, and you will not get the same behavior in any of the three types. This violates the LSP (Liskov Subst. Princ. mentioned above).
For determining whether an arithmetic result is positive or negative, you have to compute it first, and for determining whether a modular integer arithmetic result needs to be modulo reduced, you need to know the result first, which may not be in the modulo range. For this practical reason the unsigned and modular integer are built up from a normal integer. Software design is (in this case) a little more practical than pure mathematics, where an unsigned integer is always by definition positive and a modulo integer is always by definition in the modulo range. The ideal mathematical world can (in this case) not be achieved by real software, but only approached. Regards, Maarten.
Ok, I've been way too busy to pay attention to this thread carefully, but I have to say what parts I've read are mostly about several folks objecting to this design -- again. We've had this long discussion once before in May/June of this year. I believe the first objection to the polymorphic design began here: http://lists.boost.org/Archives/boost/2006/05/105372.php The discussion goes on for a long time. Here's some selected bites: http://lists.boost.org/Archives/boost/2006/06/105822.php http://lists.boost.org/Archives/boost/2006/06/105677.php http://lists.boost.org/Archives/boost/2006/06/105660.php Finally it culminated in this post from me that got no reply. http://lists.boost.org/Archives/boost/2006/06/105744.php Many on this list obviously disagree with the choice of runtime polymorphism. I personally think the unsigned type should be dropped and all the runtime polymorphism removed as was suggested back in June. This level of objection among the Boost community does not bode well for this proposal in the standard committee. Some of the people objecting are voting members of that committee. But, in the end, it's up to Maarten to decide how he wants to proceed with the design -- it's his work. At this point, however, I'm happy to see an alternative effort because I don't think Maarten is going to change his mind. And I feel certain that this library won't make it into Boost or the standard. Note that besides Daryle's new effort, which I haven't looked at, there's Boost BigInt by Ron Garcia and friends in the sandbox as well. Not sure why we keep reinventing this but never finishing... Jeff

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeff Garland Sent: 30 November 2006 00:57 To: boost@lists.boost.org Subject: Re: [boost] Yet another bignum type
Note that besides Daryle's new effort, which I haven't looked at, there's Boost BigInt by Ron Garcia and friends in the sandbox as well. Not sure why we keep reinventing this but never finishing...
As, I suspect, a representative of those who don't fully understand the details of these discussions, I feel impelled to say how very strange I find it that Boost has several flavours of fancy points but lacks any sort of bigger integer. Surely, bigger integers are quite fundamental; and have many potential applications. I ask again if our review process is partly to blame for this. IMO people are not going to put in the boring work on finishing it to the rightly rigorous review standard unless they feel they have a good chance of getting it through (and maybe not even then - but would be happy for someone else to do the drudge work on testing and documentation). Do we need some process for deciding that a particular design/prototype is a 'candidate for work towards a full review' in order to provide that encouragement? Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

"Paul A Bristow" wrote in message

Note that besides Daryle's new effort, which I haven't looked at, there's Boost BigInt by Ron Garcia and friends in the sandbox as well. Not sure why we keep reinventing this but never finishing...

As, I suspect, a representative of those who don't fully understand the details of these discussions, I feel impelled to say how very strange I find it that Boost has several flavours of fancy points but lacks any sort of bigger integer.

Surely, bigger integers are quite fundamental; and have many potential applications.

I ask again if our review process is partly to blame for this.

IMO people are not going to put in the boring work on finishing it to the rightly rigorous review standard unless they feel they have a good chance of getting it through (and maybe not even then - but would be happy for someone else to do the drudge work on testing and documentation). Do we need some process for deciding that a particular design/prototype is a 'candidate for work towards a full review' in order to provide that encouragement?

Paul
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers. In the book "C++ Coding Standards" by Herb Sutter and Andrei Alexandrescu, in chapter 64: "Blending static and dynamic polymorphism judiciously", it reads: "Due to its characteristics, dynamic polymorphism in C++ is best at: - Uniform manipulation based on superset/subset relationships: Different classes that hold a superset/subset (base/derived) relationship can be treated uniformly." In my "design decisions" I will quote this passage literally. Of course anyone is free to propose a different design, and then the LWG would have to make a decision which design to prefer. Regards, Maarten

Quoting Maarten Kronenburg <M.Kronenburg@inter.nl.net>:
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers.
You missed the fact that the definition of mathematical set and subset is significantly different from the definition of set and subset in software design. Most importantly, the latter depends on intended use. B.

"Bronek Kozicki" wrote in message
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers.
You missed the fact that the definition of mathematical set and subset is significantly different from the definition of set and subset in software design. Most importantly, the latter depends on intended use.
A subset is a subset.
B.

----Original Message---- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Maarten Kronenburg Sent: 30 November 2006 12:09 To: boost@lists.boost.org Subject: Re: [boost] Yet another bignum type
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers.
But your classes do not represent the SET of integers and the SET of modular integers. They represent the FIELDS (you include the operators - and the class wouldn't be much use if they didn't). The modular integers do NOT form a sub-field of the integers, which is why you shouldn't use derivation to represent them.
The LWG would have to make a decision which design to prefer.
Yes. It is also true to say that even if there is /no/ competing proposal, they would have to make a decision if they liked the proposed design. My gut reaction is that you should be prepared for the LWG to be as negative as the boost community.

Hi, I would like to join the critics of the design, just trying to clarify certain things. On Thu, Nov 30, 2006 at 04:26:10PM -0000, Martin Bonner wrote:
----Original Message---- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Maarten Kronenburg Sent: 30 November 2006 12:09 To: boost@lists.boost.org Subject: Re: [boost] Yet another bignum type
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers.
But your classes do not represent the SET of integers and the SET of modular integers. They represent the FIELDS (you include the operators - and the class wouldn't be much use if they didn't).
The modular integers do NOT form a sub-field of the integers, which is why you shouldn't use derivation to represent them.
In computer science we have to be very precise about what we are speaking --- up to the formal details (this is different from ordinary mathematics, where often handwaving is fine). There is no "the set of integers" etc. First of all, depending on your foundations (for general mathematics set theory is best suited) there are (several) *representations* of natural numbers and integers. Ordinary mathematics often (especially nowadays) actually does not go down to the level of representation, but just takes one representation of the objects as granted --- but, of course, in computer science the representation matters.

Second, as emphasised in the above quote, for mathematics the pure sets don't matter, but the *structures* are of importance, while the translation between structures is handled by "homomorphisms". So we have the semiring (N_0, +, *, 0, 1) of natural numbers (using here some fixed implementation --- actually we are speaking of isomorphism classes of semirings), and the ring (Z, +, *, 0, 1) of integers. The semiring N_0 has a natural embedding into Z, and thus is considered as a subsemiring in mathematical practice. So well, at least here there is in mathematical practice (which is necessarily different from what happens on a computer --- see below) a clear relation between natural numbers and integers.

But there is nothing like "modular integers". For every positive integer m we have the ring Z / mZ of "integers modulo m", which has two standard representations: - mathematically meaningful is to use the standard quotient construction of elements of Z/mZ as equivalence classes; - easier to understand (but finally just a "hack") is to represent Z/mZ as the integers 0, ..., m-1. (I use this representation when I teach modular arithmetic to second year undergraduate students in computer science, but only because for them handling equivalence classes is too complicated.)
So here only by using some special representation of Z/mZ is the *set* of "integers modulo m" a subset of N_0. But no mathematician would consider Z/mZ as somehow contained in N_0 (or Z), since Z/mZ is not embeddable into Z as the fundamental structure, that is, as a ring (adding +1 m times yields 0 in Z/mZ). (In the above quote by mistake "fields" are mentioned; only Z/mZ here can be a field (namely in case m is a prime number).) So Z/mZ is not directly related to N_0 and Z; there is only a conversion from Z to Z/mZ (which is a homomorphism), and a conversion from Z/mZ to N_0 (which is not a homomorphism, but only a canonical representation). So modular arithmetic is related to natural numbers and integers only by conversion.

What about the natural numbers and the integers? In an earlier e-mail already the very instructive example of "squares vs. rectangles" was mentioned (much nice material on it available online). The point here is that the mathematical world is eternal: nothing ever gets constructed or destroyed, but everything always already exists, and we only invent names to point into the infinite static object world. And, of course, a mathematical object never can be modified.

This does not work on a computer. There, especially when handling large objects, objects need to be modified. And we need to construct objects (and destruct them). And then the mathematical relations break down (in general). For the relationship between N_0 and Z the breakdown can be demonstrated by a negation member function for integers, which cannot work for natural numbers (under the same invariants). So, in this sense N_0 should not be derived from Z. One escape here (as already mentioned) would be to consider only non-modifiable objects. But, especially regarding the large objects we potentially have to handle here, this approach, which must be based on copy semantics, would be very inefficient (and misleading).
Another escape (in order to use derivation) would be to derive N_0 from a superclass of Z, where this superclass only considers the semiring structure of Z (that is, without additive inverse). This would be possible, and perhaps also somewhat sensible. But it might complicate the whole design. So, to conclude, representations of N_0 should not be derived from Z (or the other way around; also in generic programming the normal concept of N_0 is not a refinement of the normal concept of Z (neither the other way around)). The classes N_0, Z and Z/mZ are all different classes (class templates), where well defined conversions (explicit or implicit) are responsible for the relations.

A final remark: C++ is very close to ordinary mathematical practice (much more so than Java or functional programming) with its emphasis on copy semantics, its support for conversions (basic handwaving in mathematics is just implicit conversion(!)) and its support for parameterised types (in this way all the parameters normally hidden in mathematical practice can be made explicit). Object orientation does not mix very well with mathematics; of course, it's a computer science thing, but must be used with great care. In a mathematical world nobody would put elements of N_0 and Z, when coming from different structures, into the same container (but mathematicians "use" the type-safe container in C++ style, not the type-unsafe container in Java style). Hope this was a bit helpful to clarify some points. Oliver

"Oliver Kullmann" wrote in message
Hi,
I would like to join the critics of the design, just trying to clarify certain things.
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers.
But your classes do not represent the SET of integers and the SET of modular integers. They represent the FIELDS (you include the operators - and the class wouldn't be much use if they didn't).
The modular integers do NOT form a sub-field of the integers, which is why you shouldn't use derivation to represent them.
In computer science we have to be very precise about what we are speaking --- up to the formal details (this is different from ordinary mathematics, where often handwaving is fine).
There is no "the set of integers" etc.
First of all, depending on your foundations (for general mathematics set theory is best suited) there are (several) *representations* of natural numbers and integers.
Ordinary mathematics often (especially nowadays) actually does not go down to the level of representation, but just takes one representation of the objects as granted --- but, of course, in computer science the representation matters.
Second, as emphasised in the above quote, for mathematics the pure sets don't matter, but the *structures* are of importance, while the translation between structures is handled by "homomorphisms". So we have the semiring (N_0, +, *, 0, 1) of natural numbers (using here some fixed implementation --- actually we are speaking of isomorphism classes of semirings), and the ring (Z, +, *, 0, 1) of integers. The semiring N_0 has a natural embedding into Z, and thus is considered as a subsemiring in mathematical practice. So well, at least here there is in mathematical practice (which is necessarily different from what happens on a computer --- see below) a clear relation between natural numbers and integers.
But there is nothing like "modular integers". For every positive integer m we have the ring Z / mZ of "integers modulo m", which has two standard representations : - mathematically meaningful is to use the standard quotient construction of elements of Z/mZ as equivalence classes; - easier to understand (but finally just a "hack") is to represent Z/mZ as the integers 0, ..., m-1. (I use this representation when I teach modular arithmetic to second year undergraduate students in computer science, but only because for them handling equivalence classes is too complicated.)
So here only by using some special representation of Z/mZ the *set* of "integers modulo m" is a subset of N_0. But no mathematician would consider Z/mZ as somehow contained in N_0 (or Z), since Z/mZ is not embeddable into Z as the fundamental structure, that is, as a ring (adding +1 m times yields 0 in Z/mZ, but never in Z).
(In the above quote by mistake "fields" are mentioned; only Z/mZ here can be a field (namely in case m is a prime number).)
So Z/mZ is not directly related to N_0 and Z, only there is a conversion from Z to Z/mZ (which is a homomorphism), and a conversion from Z/mZ to N_0 (which is not a homomorphism, but only a canonical representation).
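The relation Oliver describes could be sketched in C++ roughly as follows (mod_int is a hypothetical name, not part of any proposed interface): Z/mZ as its own class template, related to the integers only through the two conversions, not through inheritance.

```cpp
// A sketch of the relation described above (mod_int is a hypothetical
// name, not part of any proposed interface): Z/mZ as its own class
// template, related to the integers only through conversions.
template<unsigned M>
class mod_int
{
    unsigned v_;  // canonical representative in 0 .. M-1
public:
    // conversion Z -> Z/mZ: the quotient map, a ring homomorphism
    explicit mod_int(long long z)
        : v_(static_cast<unsigned>(((z % static_cast<long long>(M)) + M) % M)) {}

    // conversion Z/mZ -> N_0: picks the canonical representative;
    // NOT a homomorphism, just a representation
    unsigned representative() const { return v_; }

    mod_int operator+(mod_int o) const
    { return mod_int(static_cast<long long>(v_) + o.v_); }

    mod_int operator*(mod_int o) const
    { return mod_int(static_cast<long long>(v_) * o.v_); }

    bool operator==(mod_int o) const { return v_ == o.v_; }
};
```

Note that "adding +1 m times yields 0" holds here, which is exactly why no is-a relation to an integer class can work.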
So modular arithmetic is related to natural numbers and integers only by conversion. What about the natural numbers and the integers?
In an earlier e-mail already the very instructive example of "squares vs. rectangles" was mentioned (much nice material on it available online). The point here is that the mathematical world is eternal, nothing ever gets constructed or destroyed, but everything always already exists, and we only invent names to point into the infinite static object world. And, of course, a mathematical object never can be modified.
This does not work on a computer. There, especially when handling large objects, objects need to be modified. And we need to construct objects (and destruct them). And then the mathematical relations break down (in general).
For the relationship between N_0 and Z the breakdown can be demonstrated by using a negation member function for integers, which cannot work for natural numbers (under the same invariants). So, in this sense N_0 should not be derived from Z.
One escape here (as already mentioned) would be to consider only non-modifiable objects. But, especially regarding the large objects we potentially have to handle here, this approach, which must be based on copy-semantics, would be very inefficient (and misleading).
Another escape (in order to use derivation) would be to derive N_0 from a superclass of Z, where this superclass only considers the semiring-structure of Z (that is, without additive inverse). This would be possible, and perhaps also somewhat sensible. But it might complicate the whole design.
So, to conclude, representations of N_0 should not be derived from Z (or the other way around; also in generic programming the normal concept of N_0 is not a refinement of the normal concept of Z (neither the other way around)).
The classes N_0, Z and Z/mZ are all different classes (class templates), where well defined conversions (explicit or implicit) are responsible for the relations.
A final remark: C++ is very close to ordinary mathematical practice (much more so than Java or functional programming) with its emphasis on copy semantics, its support for conversions (basic handwaving in mathematics is just implicit conversion(!)) and its support for parameterised types (in this way all the parameters normally hidden in mathematical practice can be made explicit).
Object orientation does not mix very well with mathematics; of course, it's a computer science thing, but must be used with great care. In a mathematical world nobody would put elements of N_0 and Z, when coming from different structures, into the same container (but mathematicians "use" the type-safe container in C++ style, not the type-unsafe container in Java style).
Hope this was a bit helpful to clarify some points.
Oliver

Maarten Kronenburg wrote:
Here I disagree. The class represents the set, and an object of that class represents an element from that set, whether the set is employees and secretaries, or integers and unsigned integers.
If C++ classes represent only sets of data and not their behavior, then there would not be any need for member functions. Neither would there be any need for the distinction between public and private members. One would only need structs and free functions that act on them. But maybe this argument has gone on too long. You should just proceed as you think best.

OK, one last attempt (still hoping that the suspicion that this discussion might be about pure ignorance proves wrong). Maarten Kronenburg wrote:
The class represents the set, and an object of that class represents an element from that set,
Agreed, so far.
whether the set is employees and secretaries, or integers and unsigned integers.
The operations on secretaries are usually a superset of the operations on employees, while the operations on unsigned integers are a subset of the operations on signed integers. Inheritance is only applicable in the former case. The flaw in the latter scenario is similar to adding operations specific to secretaries, trainees or managers to the interface of 'employee'. Regards, Tobias

Tobias wrote:
Maarten Kronenburg wrote:
The class represents the set, and an object of that class represents an element from that set,
Agreed, so far.
whether the set is employees and secretaries, or integers and unsigned integers.
The operations on secretaries are usually a superset of the operations on employees, while the operations on unsigned integers are a subset of the operations on signed integers.
Inheritance is only applicable in the former case. The flaw in the latter scenario is similar to adding operations specific to secretaries, trainees or managers to the interface of 'employee'.
Things like secretary, manager, etc. might better be considered as roles, an employee might have > 1 role. Simon

simon.sebright@ubs.com wrote:
Tobias wrote:
Maarten Kronenburg wrote:
The class represents the set, and an object of that class represents an element from that set,
Agreed, so far.
whether the set is employees and secretaries, or integers and unsigned integers.
The operations on secretaries are usually a superset of the operations on employees, while the operations on unsigned integers are a subset of the operations on signed integers.
Inheritance is only applicable in the former case. The flaw in the latter scenario is similar to adding operations specific to secretaries, trainees or managers to the interface of 'employee'.
Things like secretary, manager, etc. might better be considered as roles, an employee might have > 1 role.
Sure, if that's what you are trying to model. But what does that have to do with this discussion? Regards, Tobias

[mailto:boost-bounces@lists.boost.org] On Behalf Of Tobias Schwinger
simon.sebright@ubs.com wrote:
Tobias wrote:
Maarten Kronenburg wrote:
The class represents the set, and an object of that class represents an element from that set,
Agreed, so far.
whether the set is employees and secretaries, or integers and unsigned integers.
The operations on secretaries are usually a superset of the operations on employees, while the operations on unsigned integers are a subset of the operations on signed integers.
Inheritance is only applicable in the former case. The flaw in the latter scenario is similar to adding operations specific to secretaries, trainees or managers to the interface of 'employee'.
Things like secretary, manager, etc. might better be considered as roles, an employee might have > 1 role.
Sure, if that's what you are trying to model. But what does that have to do with this discussion?
I seem to have joined the thread in the middle, sorry if I am repeating something already said... I guess I am reinforcing what I have been picking up on this thread. I.e. that behaviour of something can be disconnected from what it is (a square is a rectangle in geometry, but the behaviour of a square object does not meet the contract of a rectangle object).
In the computer world, inheritance is a very strong relationship, and needs to be considered carefully, taking into account the dynamic nature of the instances. The above separation of employee/role is a bit of a strategy pattern, I think. Maybe it could help the integer issue, such that the various required integer classes could hook up to sets of behaviour, some in common, some not. Traits might be able to contribute too.
Anyway, what about the fact that it is not a good idea to have multiple concrete classes in a hierarchy? That would debunk one int class deriving from another. Simon

simon.sebright@ubs.com wrote:
I seem to have joined the thread in the middle, sorry if I am repeating something already said...
I guess I am reinforcing what I have been picking up on this thread. I.e. that behaviour of something can be disconnected from what it is (a square is a rectangle in geometry, but the behaviour of a square object does not meet the contract of a rectangle object).
Right. In particular, a derived class should always either extend or just implement its base.
In the computer world, inheritance is a very strong relationship, and needs to be considered carefully, taking into account the dynamic nature of the instances.
I fully agree. However, the "secretary IsA employee case" is not bad design per se (as the integer case sure is). As you correctly pointed out there are cases (considering what we're actually trying to model) where it's better to do it differently, still.
The above separation of employee/role is a bit of a strategy pattern, I think. Maybe it could help the integer issue, such that the various required integer classes could hook up to sets of behaviour, some in common, some not.
C++ is very powerful when it comes to code reuse in an implementation. We might as well just put the common stuff into functions - and I don't think we should be too concerned with the implementation when talking about the interface.
Traits might be able to contribute too.
Right. The only reason to use polymorphism to implement the Strategy pattern in C++ that I can currently think of is modularization (hiding the source, reducing build dependencies, ...).
The easiest way to fix the design of the proposed library is probably to have two unrelated classes for signed and unsigned and an implicit widening (unsigned->int) conversion sequence and an explicit narrowing (int->unsigned) one.
Allocator and modulation should be passed in via template parameters taking an optional Allocator value in the ctor (Allocators are a good example implementing the Strategy pattern with templates, BTW). The Allocator should be STL compatible (and it might be a good idea to add a 'reallocate' member function to that concept, anyway - IIRC, there are STL implementations that already have such a beast). The Modulus parameter would be the type of a Singleton who manages the initialization of the value and exposes it through a static function. The signed integer would allow signed and unsigned modulus. The unsigned integer would only allow an unsigned integer type for the modulus.
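The suggested conversion scheme might look roughly like this (hypothetical minimal classes, not the proposed interface; the fixed-width members stand in for arbitrary-length representations, and rejecting negative values on narrowing is an assumption, since the post doesn't specify the behaviour):

```cpp
#include <stdexcept>

class unsigned_integer;

// Signed type: constructible implicitly from the unsigned one (widening).
class signed_integer
{
    long long v_;  // stand-in for an arbitrary-length representation
public:
    explicit signed_integer(long long v) : v_(v) {}
    signed_integer(const unsigned_integer& u);  // implicit widening, below
    long long value() const { return v_; }
};

// Unsigned type: conversion from the signed one must be requested
// explicitly (narrowing); here a negative value is simply rejected.
class unsigned_integer
{
    unsigned long long v_;
public:
    explicit unsigned_integer(unsigned long long v) : v_(v) {}
    explicit unsigned_integer(const signed_integer& s)
        : v_(s.value() >= 0
                 ? static_cast<unsigned long long>(s.value())
                 : throw std::range_error("negative value to unsigned"))
    {}
    unsigned long long value() const { return v_; }
};

inline signed_integer::signed_integer(const unsigned_integer& u)
    : v_(static_cast<long long>(u.value())) {}
```

With this, `signed_integer s = some_unsigned;` compiles silently, while going the other way requires writing the conversion out.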
Anyway, what about the fact that it is not a good idea to have multiple concrete classes in a hierarchy? That would debunk one int class deriving from another.
I'm not sure it's wise to say so rigorously. You brought up the Strategy pattern, so let's consider a fully implemented default Strategy to inherit from. It isn't all that bad of an idea, is it? Regards, Tobias

Martin Bonner wrote:
In my opinion software design is not mathematical proof that one design is better than another, but there is the mathematical fact that the set of unsigned integers is a subset of the integers, and the set of modular integers is a subset of the integers.
But your classes do not represent the SET of integers and the SET of modular integers. They represent the FIELDS (you include the operators - and the class wouldn't be much use if they didn't).
The modular integers do NOT form a sub-field of the integers, which is why you shouldn't use derivation to represent them.
As a mathematician who also writes C++ code, I would like to concur with both Martin's opinion and his rationale. I think he put it very well.

On 11/25/06 10:17 AM, "Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote:
"Daryle Walker" wrote in message
This is something I've been working on for the past few months. It's in the Sandbox and I just uploaded it to the new Vault as "big_radix_whole.tar.bz2" under the "Math - Numerics" directory, i.e.
For this an interface should be proposed first, to be accepted by the LWG, see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2020.pdf The Revision 2 of this document will be submitted soon.
This isn't going to be submitted to the C++ standard bodies. The fact that the radix and allocations are template parameters prevents my class from having the hard core computational efficiencies that a concrete class can have (hide implementation, use assembly). [SNIP]
It's a run-of-the-mill bignum type. It uses the philosophy of the STL containers to use a separate template parameter for the memory allocations. The type is radix-specific, also given as a template parameter. It's of the simplest bignum type, an arbitrary-length unsigned integer class. (Signed integers, fixed-point, floating-point, rational, etc., variants can all be built off of this kind of bignum.) As such, the renditions of the traditional algorithms are fairly pure. The biggest issue from a mathematical standpoint is that the type can't be a proper ring, since it doesn't support additive inverses (besides zero). Every other arithmetic operation (that doesn't need negative values) can work. The work-around for subtraction is the addition of "subtract-absolutely" member functions, which compute the absolute value of the difference, and return a "bool" indicating what the sign would have been.
The integer data should not be in an STL container, but in a contiguous memory block, so that an assembler module is possible for performance, see <http://www.swox.com/gmp> (see also my document referred to above).
It's not trying to compete for performance, but for exposing its mechanism in a (hopefully) clear manner. I used an STL container to reuse the experts' memory strategies. I used a deque, instead of a vector, since I use push-front a lot. A deque's piecemeal allocation is a substitute for vector's reserve; contiguity doesn't matter since I'm using iterators.
Many of the operations are repeated, each with optimized algorithms for the different sizes of the parameters. Since the traditional method of multiplication is actually a fused-add/multiply, member functions for such fused actions were added. There are many variants for shifting-up where additions/subtractions start since it's faster than doing a separate shifting operation first. And it saves memory, which was an important goal in all the algorithms, along with the strong exception guarantee. Besides the asserts, there is generally no protection for pre-condition violations (except for exceptions for negative results, divides by zero, and conversions).
There are commented-out plans for functions that do powers and (truncated) roots, especially squaring and square-roots, and pseudo-reciprocals (for an N-digit value, find another M-digit value, with M <= N, closest to giving a product of Radix**2N, useful to implement exact-division by multiply-by-reciprocal, which is cheaper for huge N). What other routines should be added? Any hints on how to do said routines? Should there be Serialization? Could Spirit and/or Regex be used for the I/O routines? Any compiler workarounds to be added?
This is called requirements (see my document referred to above). I'm happy to discuss it here with anyone.
Are you talking about a rationale for this type? I just did it because I was always curious about C++ and the mechanics of computation, so I put the two together.
I have compile-time selectable radices because our arithmetic algorithms are radix-based, even though fixing the radix at a convenient power of two would be better. The allocation template argument is to match the STL, even though using a non-template base class with instances passed at run-time would be better. Both ideas combined would improve the code by shifting everything into a *.cpp file. (I'm using *.ipp files now because the core *.hpp file reached a hard-to-navigate 2500 lines before I was a quarter through.)
Because the theoretical purity has compromised computational practicality, I can't compete with other bignum libraries out there (especially since they are built from many man-years of work).
-- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

Daryle Walker wrote:
This is something I've been working on for the past few months. It's in the Sandbox and I just uploaded it to the new Vault as "big_radix_whole.tar.bz2" under the "Math - Numerics" directory, i.e. <http://www.boost-consulting.com/vault/index.php?action=downloadfile&filenam e=big_radix_whole.tar.bz2&directory=Math%20-%20Numerics&>.
Very cool!
The only documentation right now is the Doxygen comments.
But most importantly the code reads nice. It would probably read even better without those comments ;-).
Many of the operations are repeated, each with optimized algorithms for the different sizes of the parameters. Since the traditional method of multiplication is actually a fused-add/multiply, member functions for such fused actions were added. There are many variants for shifting-up where additions/subtractions start since it's faster than doing a separate shifting operation first. And it saves memory, which was an important goal in all the algorithms, along with the strong exception guarantee.
Did you consider using expression templates to
- eliminate temporaries in expressions
- detect temporaries to use in-place operations instead of copying ones, IOW exploiting destructibility of values
- decorate the allocator for stack-based allocation during evaluation
- replacing widening conversion operations with views
?
The last point of the above list can probably be implemented with a "hollow base", CRTP and deduce-from-base instead of ET (similar to polymorphism of parsers in Spirit). The ET approach could also be used to implement interesting syntactic properties. I once wrote a Wiki page about it: http://tinyurl.com/qtyhh
The information in the above section of my post is probably not too properly explained - please ask specific questions in case you are interested in something that seems mysterious.
... What other routines should be added?
Logarithm
Should there be Serialization?
Yes. Especially for Base64 encoding.
Could Spirit and/or Regex be used for the I/O routines?
Given these two alternatives I'd prefer Spirit for header-only code:

    // schematic, untested code
    typedef big_radix_whole<Rx,Al> value_type;
    value_type result;

    uint_parser< value_type, 10 > dec_ap;
    uint_parser< value_type, 16 > hex_ap;
    uint_parser< value_type, 8 > oct_ap;
    uint_parser< value_type, 2 > bin_ap;

    if (! parse(first,last, // or some char* instead
            // this expression could be statically initialized
            // for optimization
            "0x" >> hex_ap [ var(result) = arg1 ]
          | "0b" >> bin_ap [ var(result) = arg1 ]
          | "0"  >> oct_ap [ var(result) = arg1 ]
          |         dec_ap [ var(result) = arg1 ]
        ).full)
        throw some_error;

But it might be faster to just hand-code the whole thing (in particular to avoid copying), I figure. Regards, Tobias

On 11/26/06 8:58 AM, "Tobias Schwinger" <tschwinger@neoscientists.org> wrote:
Daryle Walker wrote:
This is something I've been working on for the past few months. It's in the Sandbox and I just uploaded it to the new Vault as "big_radix_whole.tar.bz2" under the "Math - Numerics" directory, i.e. <http://www.boost-consulting.com/vault/index.php?action=downloadfile&filenam e=big_radix_whole.tar.bz2&directory=Math%20-%20Numerics&>.
Very cool!
Thanks.
The only documentation right now is the Doxygen comments.
But most importantly the code reads nice. It would probably read even better without those comments ;-).
Theoretical purity was a consideration, so I tried to avoid being obscure. It also helps in portability concerns. The comments are there so I could make each function concrete in my mind before writing it. I don't have a rationale right now, so I didn't need separate documentation for the functions themselves if I used Doxygen. Later, I could reserve separate docs, probably with QuickBook, just for the rationale and user guide.
Many of the operations are repeated, each with optimized algorithms for the different sizes of the parameters. Since the traditional method of multiplication is actually a fused-add/multiply, member functions for such fused actions were added. There are many variants for shifting-up where additions/subtractions start since it's faster than doing a separate shifting operation first. And it saves memory, which was an important goal in all the algorithms, along with the strong exception guarantee.
Did you consider using expression templates to
- eliminate temporaries in expressions
- detect temporaries to use in-place operations instead of copying ones, IOW exploiting destructibility of values
- decorate the allocator for stack-based allocation during evaluation
- replacing widening conversion operations with views
?
No. A type aided with expression templates eventually has to forward to some algorithms somewhere. I substituted for that by having member functions that combine operations together. Only operations that can be reasonably combined are done; operations that can interfere with each other aren't combined. That's why division and modulus don't combine with anything besides each other. However, a class like mine can be used as the implementation class for something that does use expression templates! Also, expression templates would interfere with presenting the code clearly (for educational purposes).
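The kind of fused step being described - the inner loop of classical multiplication expressed as an "add a shifted product" operation - can be sketched on plain digit vectors (a hypothetical free function, not the library's actual member-function interface):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch (not the library's actual interface): the fused step
// of classical multiplication, acc += value * digit shifted up by 'offset'
// places, on little-endian digit vectors in the given radix. Doing the
// shift implicitly via indexing avoids a separate shift pass and the
// temporary it would need.
void add_shifted_product(std::vector<unsigned>& acc,
                         const std::vector<unsigned>& value,
                         unsigned digit, std::size_t offset,
                         unsigned radix)
{
    if (acc.size() < value.size() + offset + 1)
        acc.resize(value.size() + offset + 1, 0);

    unsigned long long carry = 0;
    for (std::size_t i = 0; i < value.size(); ++i) {
        unsigned long long t = acc[i + offset]
            + static_cast<unsigned long long>(value[i]) * digit + carry;
        acc[i + offset] = static_cast<unsigned>(t % radix);
        carry = t / radix;
    }
    for (std::size_t i = value.size() + offset; carry; ++i) {
        if (i == acc.size())
            acc.push_back(0);
        unsigned long long t = acc[i] + carry;
        acc[i] = static_cast<unsigned>(t % radix);
        carry = t / radix;
    }
    while (acc.size() > 1 && acc.back() == 0)
        acc.pop_back();  // keep the representation normalized
}
```

A full multiplication is then just this step repeated once per digit of one factor, which is why the fused form is the natural member function to expose.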
The last point of the above list can probably be implemented with a "hollow base", CRTP and deduce-from-base instead of ET (similar to polymorphism of parsers in Spirit).
What's CRTP stand for?
The ET approach could also be used to implement interesting syntactic properties. I once wrote a Wiki page about it:
I've read that page. I got some of my ideas from it (like the memory management at the end).
The information in the above section of my post is probably not too properly explained - please ask specific questions in case you are interested in something that seems mysterious.
... What other routines should be added?
Logarithm
Natural logarithm and/or any base? How can it be done, especially since I would be limited to integer arithmetic and results? I only have access to Knuth's _The Art of Computer Programming_ books, Wikipedia, plus occasional bignum pages I see on the web. I don't have access to the ACM/IEEE stuff, where I suspect a lot of "kewl" algorithms live.
Should there be Serialization?
Yes. Especially for Base64 encoding.
Could Spirit and/or Regex be used for the I/O routines?
Given these two alternatives I'd prefer Spirit for header-only code:
// schematic, untested code
typedef big_radix_whole<Rx,Al> value_type;
value_type result;
    uint_parser< value_type, 10 > dec_ap;
    uint_parser< value_type, 16 > hex_ap;
    uint_parser< value_type, 8 > oct_ap;
    uint_parser< value_type, 2 > bin_ap;

    if (! parse(first,last, // or some char* instead
            // this expression could be statically initialized
            // for optimization
            "0x" >> hex_ap [ var(result) = arg1 ]
          | "0b" >> bin_ap [ var(result) = arg1 ]
          | "0"  >> oct_ap [ var(result) = arg1 ]
          |         dec_ap [ var(result) = arg1 ]
        ).full)
        throw some_error;
But it might be faster to just hand-code the whole thing (in particular to avoid copying), I figure.
One key point of my type is that it specifies the radix at compile-time; to take full advantage of that, I/O is always done in that radix. This code here seems to be fixed at radix-8/10/16, probably using the appropriate I/O flags. -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

On Sunday, 26. November 2006 23:55, Daryle Walker wrote:
On 11/26/06 8:58 AM, "Tobias Schwinger" <tschwinger@neoscientists.org>
The last point of the above list can probably be implemented with a "hollow base", CRTP and deduce-from-base instead of ET (similar to polymorphism of parsers in Spirit).
What's CRTP stand for?
Google helps you (with c++ as an additional search word): http://en.wikipedia.org/wiki/Curiously_Recurring_Template_Pattern -- Ciao, / / /--/ / / ANS

Logarithm
Natural logarithm and/or any base? How can it be done, especially since I would be limited to integer arithmetic and results? I only have access to Knuth's _The Art of Computer Programming_ books, Wikipedia, plus occasional bignum pages I see on the web. I don't have access to the ACM/IEEE stuff, where I suspect a lot of "kewl" algorithms live.
This is somewhat off-topic since log/pow/exp are only really relevant for floating-point types, but basically: if x is a base 2 number in normalised form then split it into significand s in range [1,2) and integer exponent n. If s > 1.5, divide s by two and add one to n. Then we have s in [0.75,1.5) and:

ln(x) = ln(s) + n*ln(2)

ln(s) can be calculated from its Taylor series for small x < 1 (hence the split above); for ln(2) you could either just store lots of digits, or calculate that specific value by some other method.
Note that this only really works for base 2 numbers, since if we have base N, then the significand is in [1,N) and it's harder (though not impossible) to reduce that so that we can use the Taylor expansion. HTH, John.
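John's recipe can be checked numerically with ordinary doubles (a rough sketch of the method, not bignum code; std::frexp does the significand/exponent split):

```cpp
#include <cmath>

// Sketch of the recipe above, on plain doubles: split x into s * 2^n with
// s in [0.75, 1.5), then ln(x) = ln(s) + n*ln(2), where ln(s) comes from
// the Taylor series of ln(1+u) with u = s - 1 (|u| <= 0.5, so it converges).
double ln_by_split(double x)
{
    int n = 0;
    double s = std::frexp(x, &n);   // x = s * 2^n, s in [0.5, 1)
    s *= 2.0; --n;                  // now s in [1, 2)
    if (s > 1.5) { s /= 2.0; ++n; } // now s in [0.75, 1.5)

    const double ln2 = 0.693147180559945309417232121458;
    double u = s - 1.0, term = u, sum = 0.0;
    for (int k = 1; k <= 64; ++k) {  // ln(1+u) = u - u^2/2 + u^3/3 - ...
        sum += ((k & 1) ? term : -term) / k;
        term *= u;
    }
    return sum + n * ln2;
}
```

The same reduction is what makes the base-2 case pleasant; for a general radix the significand lands in [1,Radix) and needs extra work before the series applies, as John notes.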

Daryle Walker wrote:
Did you consider using expression templates to
- eliminate temporaries in expressions
- detect temporaries to use in-place operations instead of copying ones, IOW exploiting destructibility of values
- decorate the allocator for stack-based allocation during evaluation
- replacing widening conversion operations with views
?
No. An type aided with expression templates eventually has to forward to some algorithms somewhere. I substituted for that by having member functions that combine operations together. Only operations that can be reasonably combined are done; operations that can interfere with each other aren't combined. That's why division and modulus don't combine with anything besides each other. However, a class like mine can be used as the implementation class for something that does use expression templates! Also, expression templates would interfere with presenting the code clearly (for educational purposes).
I see.
The last point of the above list can probably be implemented with a "hollow base", CRTP and deduce-from-base instead of ET (similar to polymorphism of parsers in Spirit).
What's CRTP stand for?
It's the Curiously Recurring Template Pattern (some code attached). You can make a base class know its derived class by specializing a base class template with the derived type. This way you can use static_cast to downcast the 'this' pointer and implement compile time polymorphism. I gave a code example in reply to Maarten Kronenburg.
The ET approach could also be used to implement interesting syntactic properties. I once wrote a Wiki page about it:
I've read that page. I got some of my ideas from it (like the memory management at the end).
Natural logarithm and/or any base?
log_Radix is there already and it's called digit_count(), isn't it? Base N where N is an integer would be useful (e.g. to deal with tree structures larger than main memory).
How can it be done, especially since I would be limited to integer arithmetic and results? I only have access to Knuth's _The Art of Computer Programming_ books, Wikipeida, plus occasional bignum pages I see on the web. I don't have access to the ACM/IEEE stuff, where I suspect a lot of "kewl" algorithms live.
Well, I don't have a really cool algorithm in my pocket. What follows is possibly amateurish intuitive guessing - there are probably better algorithms around (probably even in Knuth's TAOCP). Anyway, I'll give it a try.

Let's first look at cases where things are easy. Given Radix = pow(b_r,n), where b_r,n are integers:

    log_b_r(x) = log_Radix(x)*n + log_b_r(most_significant_digit(x))

Example: Radix = 256, b_r = 2, n = 8:

    log_2(550) = 1*8 + 1 = 9

For some Radix/base combinations things can be put to work real smoothly. BTW it seems possible to calculate b_r and n at compile time. Another good thing to check is probably b > x - in this case we can just return 0.

For other cases (and for the single digit, above) we pick a rather lame algorithm:

    int log(int b, int x)
    {
        int r = 0;
        for (int y = 1; y < x; y *= b, ++r);
        return r;
    }

Now the question is: "Can we estimate a value less than the result so we can let (r,y) start at (r_0,pow(b,r_0))?" It might work to use the least base logarithm we have a special case for (called log_b_r, above - that would be log_Radix, worst case) to do the estimation. Basically the log_a(x) = log_b(x) / log_b(a) sort of thing, while we have to make sure we never over-estimate the result.

<snip> [ ... Spirit code ... ]
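For reference, a variant of the loop that returns the floor logarithm for any x >= 1 (a sketch; the version above counts until y reaches or passes x, so it overshoots by one when x is not an exact power of b):

```cpp
// floor(log_b(x)) for b >= 2 and x >= 1, by repeated multiplication.
// (A sketch only; assumes y *= b does not overflow for the inputs used.)
int floor_log(int b, long long x)
{
    int r = 0;
    for (long long y = b; y <= x; y *= b)
        ++r;
    return r;
}
```

This agrees with the worked example in the post: floor_log(2, 550) is 9, and floor_log(256, 550) is 1 (i.e. one less than the digit count in radix 256).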
But it might be faster to just hand-code the whole thing (in particular to avoid copying), I figure.
One key point of my type is that it specifies the radix at compile-time, to take full advantage of that, I/O is always done in that radix. This code here seems to be fixed at radix-8/10/16, probably using the appropriate I/O flags.
It's just a number parser (it matches strings such as "42", "0xcafe" or "077"). It should work with any radix for the destination, but there will be a conversion, of course. Since you mentioned Serialization, I figured that the only IO left would be initialization from human readable form... Regards, Tobias

    template<typename Derived>
    struct base
    {
        void caller()
        {
            static_cast<Derived*>(this)->compile_time_polymorphic();
        }

        void compile_time_polymorphic()
        {
            // default implementation
        }
    };

    struct derived : base<derived>
    {
        void compile_time_polymorphic()
        {
            // override
        }
    };

    #include <iostream>

    template<class Derived>
    struct crtp_base
    {
        Derived & derived()
        { return *static_cast<Derived*>(this); }

        Derived const & derived() const
        { return *static_cast<Derived const *>(this); }
    };

    class concrete_class : public crtp_base<concrete_class>
    {
        int val_state;
    public:
        concrete_class(int v) : val_state(v) { }
        int get_state() const { return val_state; }
    };

    template<class Derived>
    void func(crtp_base<Derived> const & arg)
    {
        std::cout << "value == " << arg.derived().get_state() << std::endl;
    }

    int main()
    {
        func( concrete_class(42) );
        return 0;
    }

Perhaps I'm missing something completely obvious, but what is the use of a non-binary radix integer? Except perhaps faster I/O, as no implicit base conversion is needed.

Ben Strasser wrote:
Perhaps I'm missing something completely obvious, but what is the use of a non-binary radix integer? Except perhaps faster I/O, as no implicit base conversion is needed.
Base ten^n shifts can be practical to implement a "human compatible" float, which is useful to avoid conversion errors in cases where decimal numbers don't have an exact binary representation, such as 0.2 (== 0b0.001100110011... IIRC). Regards, Tobias
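The representation issue Tobias mentions is easy to demonstrate: 0.1, 0.2 and 0.3 have no finite binary expansion, so binary doubles store only rounded approximations, and the roundings don't cancel (a minimal illustration, nothing specific to the proposed library):

```cpp
// The classic decimal-in-binary surprise: each literal below is rounded
// when stored as an IEEE-754 double, so the exact decimal identity fails.
bool decimal_addition_is_exact()
{
    return 0.1 + 0.2 == 0.3;  // false with binary doubles
}
```

A radix-ten bignum (or a decimal fixed/float built on one) represents these values exactly, which is the point being made.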
participants (15)
- Ben Strasser
- Bronek Kozicki
- Daryle Walker
- Deane Yang
- Hans Meine
- Herve Bronnimann
- Jeff Garland
- John Maddock
- Maarten Kronenburg
- Martin Bonner
- Mathias Gaunard
- Oliver Kullmann
- Paul A Bristow
- simon.sebright@ubs.com
- Tobias Schwinger