Name and Namespace for Potential Boost Extended Floating-Point

Hello,

We are seeking a good name for a potential boost extended precision floating-point library. This is a multiple precision floating-point type which *behaves* well in C++.

The original thread announcing this potential library is here: http://lists.boost.org/Archives/boost/2011/06/182334.php

Some very helpful boosters are assisting me with the preparation of this potential library. We've still got a lot to do, but are making good progress. In order to continue, we need a good name for a potential boost multiple-precision floating-point type.

For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.

For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.

This new data type behaves like a drop-in replacement for a POD. It also has a collection of transcendental functions and a complex data type. So we are proposing:

namespace boost
{
  namespace multiprecision
  {
    class mp_real { ... };        // The real data type.
    class mp_complex { ... };     // The complex data type.
    mp_real sin(const mp_real&);  // The sin function for the real data type.

    class mp_uint { };  // A potential future multiple-precision unsigned integer type.
    class mp_int { };   // A potential future multiple-precision signed integer type.
  }
}

We would appreciate comments (negative or positive) and/or alternative suggestions for names.

Thank you. Sincerely, Chris.

I'm leaning towards ...

namespace boost
{
  namespace mpfp  // multi-precision floating point
  {
    class real;
    class complex;
  } // mpfp
} // boost

Also, what does it mean for an integer type to be *multi-precision*? Is it something along the lines of BigInt?

On Tue, Aug 30, 2011 at 3:08 PM, Christopher Kormanyos <e_float@yahoo.com> wrote:
Hello,
We are seeking a good name for a potential boost extended precision floating-point library. This is a multiple precision floating-point type which *behaves* well in C++.
The original thread announcing this potential library is here: http://lists.boost.org/Archives/boost/2011/06/182334.php
Some very helpful boosters are assisting me with the preparation of this potential library. We've still got a lot to do, but are making good progress.
In order to continue, we need a good name for a potential boost multiple-precision floating-point type.
For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.
For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.
This new data type behaves like a drop-in replacement for a POD. It also has a collection of transcendental functions and a complex data type.
So we are proposing:
namespace boost
{
  namespace multiprecision
  {
    class mp_real { ... };        // The real data type.
    class mp_complex { ... };     // The complex data type.
    mp_real sin(const mp_real&);  // The sin function for the real data type.

    class mp_uint { };  // A potential future multiple-precision unsigned integer type.
    class mp_int { };   // A potential future multiple-precision signed integer type.
  }
}
We would appreciate comments (negative or positive) and/or alternative suggestions for names.
Thank you. Sincerely, Chris.

From: Steven Maitlall <m.steven@gmail.com>
To: boost@lists.boost.org
Sent: Tuesday, August 30, 2011 9:18 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

I'm leaning towards ...

namespace boost
{
  namespace mpfp  // multi-precision floating point
  {
    class real;
    class complex;
  } // mpfp
} // boost

That's a good one. We'll put it on the list. Thank you.

Also, what does it mean for an integer type to be *multi-precision*? Is it something along the lines of BigInt?

Yes, it would be a big integer. But we're not working on it now. Maybe *precision* isn't right for integer types.

Sincerely, Chris.

AMDG On 08/30/2011 12:08 PM, Christopher Kormanyos wrote:
We are seeking a good name for a potential boost extended precision floating-point library. This is a multiple precision floating-point type which *behaves* well in C++.
<snip>
For namespaces, we have:
1. boost::mp::mp_real 2. boost::multiprecision::mp_real 3. boost::multiple_precision::mp_real Our current favorite namespace is number 2: boost::multiprecision.
I'm against mp. It's too close to mpl. In Christ, Steven Watanabe

From: Steven Watanabe <watanabesj@gmail.com>
To: boost@lists.boost.org
Sent: Tuesday, August 30, 2011 9:19 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

I'm against mp. It's too close to mpl. In Christ, Steven Watanabe

We also found "mp" to be too terse and non-intuitive. Thank you. You're right. It is too close to mpl. Sincerely, Chris.

On Tue, Aug 30, 2011 at 12:53 PM, Christopher Kormanyos <e_float@yahoo.com>wrote:
From: Steven Watanabe <watanabesj@gmail.com>
To: boost@lists.boost.org Sent: Tuesday, August 30, 2011 9:19 PM Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
I'm against mp. It's too close to mpl.
In Christ, Steven Watanabe
We also found "mp" to be too terse and non-intuitive. Thank you. You're right. It is too close to mpl.
It's too close to mpi too... :) - Jeff

Le 30/08/11 21:08, Christopher Kormanyos a écrit :
Hello,
We are seeking a good name for a potential boost extended precision floating-point library. This is a multiple precision floating-point type which *behaves* well in C++.
Some very helpful boosters are assisting me with the preparation of this potential library. We've still got a lot to do, but are making good progress.
In order to continue, we need a good name for a potential boost multiple-precision floating-point type.
For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.
For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.
This new data type behaves like a drop-in replacement for a POD. It also has a collection of transcendental functions and a complex data type.
Hi,

I guess you know the BigNumber proposal from John Maddock. While this library provides some wrappers to 3rd-party libraries, nothing forbids including an arbitrary-precision real and an integer with an arbitrary number of digits. So, if John accepts, the namespace could be big_number. The classes could just be named real, integer, ...

Best, Vicente

On Tue, Aug 30, 2011 at 4:32 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 30/08/11 21:08, Christopher Kormanyos a écrit :
We are seeking a good name for a potential boost extended precision floating-point library. [...] the namespace could be big_number.
What about more simply

boost::bignum::real
boost::bignum::integer
boost::bignum::complex

or possibly

boost::bignum::big(real|float|fp)
boost::bignum::big(u)int
boost::bignum::bigcomplex

?

From: Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
To: boost@lists.boost.org
Sent: Tuesday, August 30, 2011 11:32 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

I guess you know the BigNumber proposal from John Maddock. While this library provides some wrappers to 3rd-party libraries, nothing forbids including an arbitrary-precision real and an integer with an arbitrary number of digits. So, if John accepts, the namespace could be big_number. The classes could just be named real, integer, ... Best, Vicente

-------------------------------------------------------------------------------------

Yes, I believe we are synchronized with John. (He is assisting us.) In addition to the abstract and generic big_number, we also want to provide the extended floating-point type standalone.

Thank you for your suggestion. Sincerely, Chris.

Yes, I believe we are synchronized with John. (He is assisting us.)
In addition to the abstract and generic big_number, we also want to provide the extended floating-point type standalone.
Ah yes, to clarify:

* Expression-template-enabled code - maybe faster to run, but slow to compile/develop with.
* Simpler non-expression-template code - maybe slower to run, but faster to build/develop with.

In short it's a trade-off between the two depending on your needs. John.
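The trade-off John describes can be made concrete with a minimal expression-template sketch. The types below (num, add_expr, eval) are toy illustrations, not the actual big_number interface: the addition operators build a lightweight expression object instead of computing anything, and only the final assignment walks the expression, so a + b + c is evaluated in a single pass with no intermediate temporaries (carries are ignored for brevity).

```cpp
#include <cstddef>
#include <vector>

// Toy "big number": a vector of limbs (illustration only; carries ignored).
struct num {
    std::vector<unsigned> limbs;
    explicit num(std::size_t n = 4, unsigned v = 0) : limbs(n, v) {}
    unsigned operator[](std::size_t i) const { return limbs[i]; }
};

// Expression node: represents l + r without evaluating it yet.
template <class L, class R>
struct add_expr {
    const L& l;
    const R& r;
    unsigned operator[](std::size_t i) const { return l[i] + r[i]; }
};

// Overloads are restricted to num / add_expr, so built-in types are unaffected.
inline add_expr<num, num> operator+(const num& l, const num& r) {
    return add_expr<num, num>{l, r};
}
template <class L, class R>
add_expr<add_expr<L, R>, num> operator+(const add_expr<L, R>& l, const num& r) {
    return add_expr<add_expr<L, R>, num>{l, r};
}

// Evaluating the expression computes each limb once, in a single pass,
// with no intermediate num temporaries for chains like a + b + c.
template <class E>
num eval(const E& e, std::size_t n) {
    num out(n);
    for (std::size_t i = 0; i < n; ++i) out.limbs[i] = e[i];
    return out;
}
```

The compile-time cost John mentions comes from exactly this machinery: every expression shape is a distinct template instantiation, which is why the non-expression-template variant builds faster.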

Some very helpful boosters are assisting me with the preparation of this potential library. We've still got a lot to do, but are making good progress.
In order to continue, we need a good name for a potential boost multiple-precision floating-point type.
For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.
For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.
This new data type behaves like a drop-in replacement for a POD. It also has a collection of transcendental functions and a complex data type.
I guess you know the BigNumber proposal from John Maddock. While this library provides some wrappers to 3rd-party libraries, nothing forbids including an arbitrary-precision real and an integer with an arbitrary number of digits. So, if John accepts, the namespace could be big_number. The classes could just be named real, integer, ...
Chris and I are working on parallel, but hopefully converging courses (is that a contradiction in terms??), BTW the stuff I'm working on is an expression template enabled front end to arithmetic types: that may just so happen to include a "big int" but it's not the main/only focus. John.

Le 31/08/11 10:20, John Maddock a écrit :
Some very helpful boosters are assisting me with the preparation of this potential library. We've still got a lot to do, but are making good progress.
In order to continue, we need a good name for a potential boost multiple-precision floating-point type.
For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.
For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.
This new data type behaves like a drop-in replacement for a POD. It also has a collection of transcendental functions and a complex data type.
I guess you know the BigNumber proposal from John Maddock. While this library provides some wrappers to 3rd-party libraries, nothing forbids including an arbitrary-precision real and an integer with an arbitrary number of digits. So, if John accepts, the namespace could be big_number. The classes could just be named real, integer, ...
Chris and I are working on parallel, but hopefully converging courses (is that a contradiction in terms??), BTW the stuff I'm working on is an expression template enabled front end to arithmetic types: that may just so happen to include a "big int" but it's not the main/only focus.
John, I'm aware of what you are doing with BigNumbers and the E-Float libraries. I understand that each one is working on different parts. What I was proposing is just that both use the same namespace. Best, Vicente

From: Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
To: boost@lists.boost.org
Sent: Wednesday, August 31, 2011 6:31 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

John, I'm aware of what you are doing with the BigNumber and E-Float libraries. I understand that each one is working on different parts. What I was proposing is just that both use the same namespace. Best, Vicente

------------------------------------------------------------------

Well, that might look something like this:

namespace boost
{
  namespace math
  {
    namespace multiprecision
    {
      class mp_real { };
      class mp_complex { };
    }
  }
}

Or with one less level:

namespace boost
{
  namespace math
  {
    class mp_real { };
    class mp_complex { };
  }
}

(Whereby, I personally would prefer the additional "multiprecision" layer.) Any comments? Sincerely, Chris.

From: Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
To: boost@lists.boost.org
Sent: Wednesday, August 31, 2011 6:31 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

John, I'm aware of what you are doing with the BigNumber and E-Float libraries. I understand that each one is working on different parts. What I was proposing is just that both use the same namespace. Best, Vicente

---------------------------------------------------------

It's a good idea. Something like boost::math::multiprecision::mp_real. My only concern is that it would add too much bulk to the already large Boost.Math. Consider the long-term goal of extending mp_real (or whatever it might be called) to thousands or millions of digits, requiring FFTs and quite a bit of infrastructure bulk. It might be best to keep Boost.Math free of this stuff. Sincerely, Chris.

On Tue, 30 Aug 2011 12:08:46 -0700, Christopher Kormanyos <e_float@yahoo.com> wrote:
Hello,
We are seeking a good name for a potential boost extended precision floating-point library. This is a multiple precision floating-point type which *behaves* well in C++.
<snip>
In order to continue, we need a good name for a potential boost multiple-precision floating-point type.
For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.
For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.
<snip>
Thank you. Sincerely, Chris.
If you're already in namespace mp, multiprecision, etc ..., then why prefix entities in that namespace with "mp_"? Mostafa

Mostafa wrote:
On Tue, 30 Aug 2011 12:08:46 -0700, Christopher Kormanyos <e_float@yahoo.com> wrote:
Hello,
We are seeking a good name for a potential boost extended precision floating-point library. This is a multiple precision floating-point type which *behaves* well in C++.
<snip>
In order to continue, we need a good name for a potential boost multiple-precision floating-point type.
For the multiple-precision real type we have names like:
* multiprecision_real
* mp_real
* extended_float_t
Our current favorite is the second: mp_real.
For namespaces, we have:
1. boost::mp::mp_real
2. boost::multiprecision::mp_real
3. boost::multiple_precision::mp_real
Our current favorite namespace is number 2: boost::multiprecision.
<snip>
Thank you. Sincerely, Chris.
If you're already in namespace mp, multiprecision, etc ..., then why prefix entities in that namespace with "mp_"?
Mostafa
Please don't call it real. An mp float type is not at all a real number, any more than a regular float type is a real number. With an extended precision floating type you cannot represent irrationals, and you cannot represent most rationals exactly either. What it is is a floating point type with a variable, but still finite, number of bits. All computers have finite sized memory and are required to run programs in a finite amount of time. That is why these numerical types are properly called multiple precision and not infinite precision. I'd like to see the name be mp_float. I prefer:

mp_float
mp_int
mp_rational

Obviously we need the mp_ prefix if we use float and int, since these are keywords without it. Also, if the user puts using namespace boost::multiprecision, then they will benefit from the prefix. I prefer multiprecision to mp, since the library name I'm guessing would be multiprecision, and it is conventional to have the namespace be the same as the library name.

For me the rational data type is very important and should wrap the gmp mpq_type. It would be nice if the library could use mp_int with Boost.Rational to implement its own standalone mp_rational datatype, but I would prefer to use the gmp type whenever possible.

By the way, I have put extensive thought into how to build a good expression template system for dealing with multi-precision arithmetic and would like to be involved in some of the discussion of the design of the system. Particularly, the most important thing about providing expression templates for multi-precision is eliminating allocations and deallocations. The allocations can easily dominate the runtime of a multiprecision algorithm. There are many options and considerations for how to optimize the allocations, such as object pools, thread safety, etc.
Anyone who uses multiprecision numerical data types in the same way they would use built-in numerical data types is writing code that runs significantly slower than the equivalent C code that doesn't abstract away allocation and deallocation, and instead forces the programmer to choose where and when to allocate so that they choose the proper place. I saw large speedups in my Polygon library by recycling gmpq_type variables rather than declaring them in the innermost scope in which they were used. Thanks, Luke
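Luke's point about recycling variables rather than declaring them in the innermost scope can be sketched as follows. The big_value type below is a toy stand-in for an allocating multiprecision value such as gmpq_type (the names are illustrative, not from any real library): hoisting the work variable out of the loop means one heap allocation for the whole computation instead of one per iteration.

```cpp
#include <cstddef>
#include <vector>

// Toy stand-in for an allocating multiprecision value (like a limb array).
struct big_value {
    std::vector<long> digits;                       // heap-allocated storage
    explicit big_value(std::size_t n = 64) : digits(n, 0) {}
    void assign_product(long a, long b) { digits[0] = a * b; }  // reuses storage
    long low() const { return digits[0]; }
};

// Slow pattern: the temporary lives in the innermost scope,
// so every iteration pays an allocation and a deallocation.
long sum_of_squares_inner_scope(const std::vector<long>& xs) {
    long sum = 0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        big_value tmp;                              // allocates every pass
        tmp.assign_product(xs[i], xs[i]);
        sum += tmp.low();
    }
    return sum;
}

// Fast pattern: hoist the temporary and recycle its storage across passes.
long sum_of_squares_recycled(const std::vector<long>& xs) {
    long sum = 0;
    big_value tmp;                                  // one allocation, reused
    for (std::size_t i = 0; i < xs.size(); ++i) {
        tmp.assign_product(xs[i], xs[i]);
        sum += tmp.low();
    }
    return sum;
}
```

Both functions compute the same result; the difference is purely in how many times the underlying storage is allocated, which is exactly the cost an expression-template front end tries to hide from the user.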

on Wed Aug 31 2011, "Simonson, Lucanus J" <lucanus.j.simonson-AT-intel.com> wrote:
Please don't call it real. An mp float type is not at all a real number any more than a regular float type is a real number. With extended precision floating type you cannot represent irrationals and you cannot represent most rationals exactly either. What it is is a floating point type with a variable, but still finite number of bits. All computers have finite sized memory and are required to run programs in a finite amount of time. That is why these numerical types are properly called multiple precision and not infinite precision. I'd like to see the name be mp_float. I prefer:
mp_float mp_int mp_rational
+1 -- Dave Abrahams BoostPro Computing http://www.boostpro.com

From: Dave Abrahams <dave@boostpro.com>
To: boost@lists.boost.org
Sent: Wednesday, August 31, 2011 9:12 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

on Wed Aug 31 2011, "Simonson, Lucanus J" <lucanus.j.simonson-AT-intel.com> wrote:
Please don't call it real. An mp float type is not at all a real number any more than a regular float type is a real number. With extended precision floating type you cannot represent irrationals and you cannot represent most rationals exactly either. What it is is a floating point type with a variable, but still finite number of bits. All computers have finite sized memory and are required to run programs in a finite amount of time. That is why these numerical types are properly called multiple precision and not infinite precision. I'd like to see the name be mp_float. I prefer:
mp_float mp_int mp_rational
+1

-- Dave Abrahams
BoostPro Computing
http://www.boostpro.com

------------------------------------------------------------------------------

Yeah, like Ali said, "Fly like a butterfly, and float like a bee!". Or whatever. But, seriously, I see your point. Real has a distinct meaning which drastically differs from float. Thank you for your input. Sincerely, Chris.

From: Matthias Schabel <boost@schabel-family.org>
To: boost@lists.boost.org
Sent: Thursday, September 1, 2011 12:00 AM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
mp_float mp_int mp_rational
+1
If this is going to be in the "multiprecision" namespace, isn't "mp_*" redundant? Why not just

multiprecision::float
multiprecision::int
multiprecision::rational

$0.02

---------------------------------------------------

No, unfortunately not. Please remember that when using a namespace, a name like "float" cannot be distinguished from the built-in type float (and float and int are keywords, so they cannot be used as class names at all). Remember that many developers *use* a namespace to avoid typing. The ugly prefix is needed to guarantee unique naming. Sincerely, Chris.

On Wed, Aug 31, 2011 at 3:16 PM, Christopher Kormanyos <e_float@yahoo.com>wrote:
From: Matthias Schabel <boost@schabel-family.org> To: boost@lists.boost.org Sent: Thursday, September 1, 2011 12:00 AM Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
mp_float mp_int mp_rational
+1
If this is going to be in the "multiprecision" namespace, isn't "mp_*" redundant? Why not just
multiprecision::float multiprecision::int multiprecision::rational
$0.02
---------------------------------------------------
No, unfortunately not. Please remember when using a namespace, a name like "float" can not be distinguished from the POD type float. Remember that many developers *use* a namespace to avoid typing.
The ugly prefix is needed to guarantee unique naming.
Couldn't you just do what "every" other boost library does, and append an underscore? - Jeff

Jeffrey Lee Hellrung, Jr. wrote:
On Wed, Aug 31, 2011 at 3:16 PM, Christopher Kormanyos <e_float@yahoo.com>wrote:
From: Matthias Schabel <boost@schabel-family.org> To: boost@lists.boost.org Sent: Thursday, September 1, 2011 12:00 AM Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
mp_float mp_int mp_rational
+1
If this is going to be in the "multiprecision" namespace, isn't "mp_*" redundant? Why not just
multiprecision::float multiprecision::int multiprecision::rational
$0.02
---------------------------------------------------
No, unfortunately not. Please remember when using a namespace, a name like "float" can not be distinguished from the POD type float. Remember that many developers *use* a namespace to avoid typing.
The ugly prefix is needed to guarantee unique naming.
Couldn't you just do what "every" other boost library does, and append an underscore?
In the interest of clarity I would prefer to see mp_float and not float_ in user code that *uses* the namespace. float_ in no way intuitively means multiprecision floating point data type. Intuitively, I would expect float_ in a boost library to mean a type that wraps a literal floating point value in a type for metaprogramming and might be declared something like:

template <float f> struct float_ : float_c<f>;

Regards, Luke

On Wed, Aug 31, 2011 at 9:22 PM, Simonson, Lucanus J < lucanus.j.simonson@intel.com> wrote:
Jeffrey Lee Hellrung, Jr. wrote:
On Wed, Aug 31, 2011 at 3:16 PM, Christopher Kormanyos <e_float@yahoo.com>wrote:
From: Matthias Schabel <boost@schabel-family.org> To: boost@lists.boost.org Sent: Thursday, September 1, 2011 12:00 AM Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
mp_float mp_int mp_rational
+1
If this is going to be in the "multiprecision" namespace, isn't "mp_*" redundant? Why not just
multiprecision::float multiprecision::int multiprecision::rational
$0.02
---------------------------------------------------
No, unfortunately not. Please remember when using a namespace, a name like "float" can not be distinguished from the POD type float. Remember that many developers *use* a namespace to avoid typing.
The ugly prefix is needed to guarantee unique naming.
Couldn't you just do what "every" other boost library does, and append an underscore?
In the interest of clarity I would prefer to see mp_float and not float_ in user code that *uses* the namespace.
Noted. I actually misread the original comment and inferred an objection to float and int already being C++ keywords. In any case, it is indeed a stylistic preference. If I were really worried about verbosity, I would probably alias boost::multiprecision (or whatever the namespace was) to bmp and then use bmp::float_. Or maybe better is real. Yes, these objects can't cover *all* real numbers, but their purpose is to provide (nearly) arbitrarily precise approximations to real numbers, right? A similar "fudging", if you will, of "real" is used by Boost.Spirit (qi::real_parser and karma::real_generator).

float_ in no way intuitively means multiprecision floating point data type.

Agreed, but I think the "no way" is a bit excessive. The only thing it doesn't really convey is the multiprecision aspect, and that context is given by the enclosing namespace. I see multiprecision::mp_float as having unnecessary lexical redundancy... or something like that :/

Intuitively, I would expect float_ in a boost library would mean a type that wraps a literal floating point value in a type for metaprogramming and might be declared something like: template <float f> struct float_ : float_c<f>;

I'm guessing your expectations are based on Boost.MPL, but keep in mind that there are actually float_ and int_ objects in Boost.Spirit.Qi and Boost.Spirit.Karma which have little to do with wrapping numeric values. So, again, it's difficult to infer much from an identifier without appropriate context. - Jeff

on Wed Aug 31 2011, "Simonson, Lucanus J" <lucanus.j.simonson-AT-intel.com> wrote:
float_ in no way intuitively means multiprecision floating point data type.
Intuitively, I would expect float_ in a boost library would mean a type that wraps a literal floating point value in a type for metaprogramming and might be declared something like:
template <float f> struct float_ : float_c<f>;
float_ in no way intuitively means that, either. We have namespaces because/so-that the same name can have different meanings depending on context. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

on Wed Aug 31 2011, "Jeffrey Lee Hellrung, Jr." <jeffrey.hellrung-AT-gmail.com> wrote:
Couldn't you just do what "every" other boost library does, and append an underscore?
This might be a better answer, yes. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

From: "Jeffrey Lee Hellrung, Jr." <jeffrey.hellrung@gmail.com>
To: boost@lists.boost.org
Sent: Thursday, September 1, 2011 12:29 AM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

Couldn't you just do what "every" other boost library does, and append an underscore? - Jeff

_______________________________________________

float_ seems so plain. I don't really like it, in the sense of liking a color of wall paint. This isn't really a good technical reason, though. Sincerely, Chris.

on Wed Aug 31 2011, Christopher Kormanyos <e_float-AT-yahoo.com> wrote:
________________________________ From: "Jeffrey Lee Hellrung, Jr." <jeffrey.hellrung@gmail.com> To: boost@lists.boost.org Sent: Thursday, September 1, 2011 12:29 AM Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
Couldn't you just do what "every" other boost library does, and append an underscore?
- Jeff _______________________________________________
float_ seems so plain. I don't really like it, in the sense of liking a color of wall-paint.
The kind specifically formulated for bike sheds? couldn't-resist-ly y'rs, -- Dave Abrahams BoostPro Computing http://www.boostpro.com

From: "Simonson, Lucanus J" <lucanus.j.simonson@intel.com>
To: "boost@lists.boost.org" <boost@lists.boost.org>
Sent: Wednesday, August 31, 2011 7:46 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

<snip>

Thank you, Luke. I really appreciate your clear and detailed response.

Please don't call it real. An mp float type is not at all a real number any more than a regular float type is a real number. With extended precision floating type you cannot represent irrationals and you cannot represent most rationals exactly either. What it is is a floating point type with a variable, but still finite number of bits. All computers have finite sized memory and are required to run programs in a finite amount of time. That is why these numerical types are properly called multiple precision and not infinite precision. I'd like to see the name be mp_float. I prefer: mp_float mp_int mp_rational

Yes, I see your point here. In fact, it's quite illuminating. The use of "float" in the name is probably better than "real". Please remember, though, float, int and rational are enough to keep us all busy for now and later and on and on... We are only doing the floating-point type now.

Obviously we need the mp_ prefix if we use float and int since these are keywords without it. Also, if the user puts using namespace boost::multiprecision then they will benefit from the prefix. I prefer multiprecision to mp since the library name I'm guessing would be multiprecision and it is conventional to have the namespace be the same as the library name.

Affirmative.

For me the rational data type is very important and should wrap the gmp mpq_type. It would be nice if the library could use mp_int with Boost.Rational to implement its own standalone mp_rational datatype, but I would prefer to use the gmp type whenever possible.

Again, yes. I personally have not programmed the rational type --- just ints and floats. The long-term idea is to understand the algorithms *and* to program portable code, thereby avoiding restrictive GNU licensing issues (-lgmp,... barf).

By the way, I have put extensive thought into how to build a good expression template system for dealing with multi-precision arithmetic and would like to be involved in some of the discussion of the design of the system.

My stuff is in the sandbox under "e_float". John's stuff is in "big_number". I don't know much and I'm just doing the back-end. But I modestly welcome any assistance, particularly of an architectural nature.

Particularly, the most important thing about providing expression templates for multi-precision is eliminating allocations and deallocations. The allocations can easily dominate the runtime of a multiprecision algorithm. There are many options and considerations for how to optimize the allocations, such as object pools, thread safety etc. Anyone who uses multiprecision numerical data types in the same way they would use built-in numerical data types is writing code that runs significantly slower than the equivalent C code that doesn't abstract away allocation and deallocation and forces the programmer to choose where and when to allocate so that they choose the proper place. I saw large speedups in my Polygon library by recycling gmpq_type variables rather than declaring them in the innermost scope in which they were used.

I hear you. But I beg to augment this critical point. Multiple-precision mathematical algorithms spend roughly 80-90% of their computational time within the multiplication routine. If you can get a fast multiplication algorithm and subsequently apply this to Newton iteration for inverse and for roots, then you've got a good package. If you make it portable and adherent to the C++ semantics, well, then you're finally finished with this mess. If you investigate the MFLOPS of MP-math, even a modest 100 digits is, maybe, 100-200 times slower than double-precision.

The rat-race lies within the multiplication routine, not the allocation mechanisms. Fixed-size arrays and a basic custom allocator above size=N can solve the allocation performance bottleneck (to within certain limits). But fast multiply and C++ adherence ultimately elevate the package from the realm of a hack to a real, specifiable, high-performance type.

Thanks, Luke
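The Newton iteration for the inverse that Chris mentions relies only on multiplication and subtraction, which is why the multiplication routine dominates: the recurrence x_{n+1} = x_n * (2 - a * x_n) converges to 1/a quadratically, doubling the number of correct digits per step. The sketch below uses double as a stand-in for a multiprecision type, purely to show the shape of the algorithm.

```cpp
// Newton iteration for 1/a using only multiply and subtract:
//   x_{n+1} = x_n * (2 - a * x_n)
// Convergence is quadratic (correct digits roughly double per step),
// so the total cost is dominated by the multiplication routine.
double newton_reciprocal(double a, double x0, int steps) {
    double x = x0;
    for (int i = 0; i < steps; ++i) {
        x = x * (2.0 - a * x);   // two multiplies, one subtract, no division
    }
    return x;
}
```

In a real multiprecision package the same loop is run with increasing working precision at each step, so the final full-precision multiplies dominate the cost; the same scheme with a slightly different recurrence yields roots.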

Christopher Kormanyos wrote:
I hear you. But I beg to augment this critical point. Multiple-precision mathematical algorithms spend roughly 80-90% of their computational time within the multiplication routine. If you can get a fast multiplication algorithm and subsequently apply this to Newton iteration for inverse and for roots, then you've got a good package. If you make it portable and adherent to the C++ semantics, well, then you're finally finished with this mess. If you investigate the MFLOPS of MP-math, even a modest 100 digits is, maybe, 100-200 times slower than double-precision. The rat-race lies within the multiplication routine, not the allocation mechanisms. Fixed-size arrays and a basic custom allocator above size=N can solve the allocation performance bottleneck (to within certain limits). But fast multiply and C++ adherence ultimately elevate the package from the realm of a hack to a real, specifiable, high-performance type.
From my perspective though, the common case is numbers with only a small constant factor more bits than the largest built-ins. In my case 65, 96 and 126 bit integers and rationals. You need 65 bits for a 32 bit integer cross product in the worst case. In many cases the probability of a large number occurring goes down as its size increases. Yes, a fixed size array in the datatype would be my preference, but the way I would like to use such a boost multiprecision library is as a wrapper for gmp that defaults to a portable C++ implementation when gmp is not available. From my perspective usage of gmp with modest sized multiprecision values is the common case. As much as we may hate the LGPL, gmp is part of the Linux environment of every machine I use for real work and it is free. Right now I have no fallback and my algorithms run in non-robust mode without gmp. I'd be very happy to have a library in boost to provide the default multiprecision datatype implementation.
Given that xint was recently rejected, which we all have mixed feelings about, I'm sure, we need to be careful about what the scope and intent of this fairly similar library is. Is the scope multiprecision floating point only, with the intent to provide high-performance multiprecision algorithms? I'm afraid that direction would lead to benchmark comparisons with gmp, with predictable results. If you provide a metafunction for looking up the implementation of the multiprecision arithmetic so that people can specify gmp (or a suitable alternative), then the focus will turn toward how well you wrap gmp with your expression templates, and the primary concern about your own algorithms will be their correctness and portability. I think an effort in that direction is realistic and achievable and something that could be accepted as a boost library. Be careful to set yourself a task you can succeed at by limiting the scope of your library, and be careful also to avoid confusion about what it is you are trying to do by being very clear about what those limits are. Building a coalition of contributing authors might also be a good idea. For example, will you provide an extensible framework for adding mp_int and mp_rational later if your initial scope is limited to mp_float? I wish you the best of luck and success. Regards, Luke

________________________________ From: "Simonson, Lucanus J" <lucanus.j.simonson@intel.com> To: "boost@lists.boost.org" <boost@lists.boost.org> Sent: Thursday, September 1, 2011 7:31 AM Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point Christopher Kormanyos wrote:
I hear you. But I beg to augment this critical point. ...
<snip>

---------------------------------------------------------------------------------

Thank you for your detailed comments, Luke. The primary concern right now is multiple-precision floating-point algorithms. You're right, I was not specific. My float type does this:

* boost::multiprecision::mp_real (or whatever it might be called) provides a multiprecision floating-point type with a compile-time width.
* It is tested and specified from 30...300 decimal digits of precision.
* In addition, the expected semantics for C++ such as interaction with PODs and I/O streams are supported.
* A selection of elementary transcendental functions (sin, cos, etc.) and Gamma and Zeta functions is provided for real and complex values.
* Three back-end numeric types are included in boost::multiprecision::mp_real: GMP, MPFR and my own portable base-10 EFX type.
* All three back ends have similar performance, with MPFR slightly higher due to its superior transcendental functions.
* The EFX type compares well with, or even beats, GMP in a bare-bones MFLOPS test at 50 and 100 decimal digits.

This is going to make you guffaw. But people just say GMP is fast. They never really tried to come up with a serious alternative. Although GMP is far superior to the limited digit range that I am preliminarily offering, and we need to do a lot of work to catch up with those guys.

John's big_number templates wrap native GMP, MPFR and my boost::multiprecision::mp_real, thereby augmenting these with a common interface for high-level algorithm design.

Again, you are right. I have no infrastructure for a common backbone for real, int, float. And my vision is probably running away from me.

Dear Christopher Kormanyos, could you please try to follow <http://www.boost.org/community/policy.html#quoting> I'm not a moderator, but I'm really having difficulty reading your mails. My problem is to distinguish the text of your "replies" from the text of the mail you are replying to. Regards, Thomas

Thomas Klimpel wrote:
Dear Christopher Kormanyos, could you please try to follow <http://www.boost.org/community/policy.html#quoting>
Regards, Thomas
----------------------------------------------------------------------------------- Thomas, I'm trying to do the best I can. But I'm still figuring out how you guys do it. I've only ever used those other kinds of visual forums. I will try to improve my adherence to the standards of this forum. Was this post better? Sincerely, Chris.

Christopher Kormanyos wrote:
Thomas Klimpel wrote:
Dear Christopher Kormanyos, could you please try to follow <http://www.boost.org/community/policy.html#quoting>
Regards, Thomas
-----------------------------------------------------------------------------------
Thomas, I'm trying to do the best I can. But I'm still figuring out how you guys do it. I've only ever used those other kinds of visual forums.
I will try to improve my adherence to the standards of this forum.
Was this post better?
We use Outlook-quitefix by Dominik Jain. Google will know how to find it.

on Thu Sep 01 2011, "Simonson, Lucanus J" <lucanus.j.simonson-AT-intel.com> wrote:
Christopher Kormanyos wrote:
Thomas Klimpel wrote:
Dear Christopher Kormanyos, could you please try to follow <http://www.boost.org/community/policy.html#quoting>> Regards, Thomas
-----------------------------------------------------------------------------------
Thomas, I'm trying to do the best I can. But I'm still figuring out how you guys do it. I've only ever used those other kinds of visual forums.
I will try to improve my adherence to the standards of this forum.
Was this post better?
We use Outlook-quitefix by Dominik Jain. Google will know how to find it.
I think the headers in Christopher's post indicate he's using YahooMailWebService, not Outlook. And that's "outlook-quotefix", should anyone decide to Google it. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Aug 31, 2011, at 10:46 AM, Simonson, Lucanus J wrote:
I'd like to see the name be mp_float. I prefer:
mp_float mp_int mp_rational
Obviously we need the mp_ prefix if we use float and int since these are keywords without it.
How about floating integer rational
Also, if the user puts using namespace boost::multiprecision then they will benefit from the prefix.
If the user writes "namespace mp = boost::multiprecision;", then the mp_ prefix is redundant and ugly. If you really prefer to write mp_float to mp::floating, you can still "typedef boost::multiprecision::floating mp_float;".
I prefer multiprecision to mp since the library name I'm guessing would be multiprecision and it is conventional to have the namespace be the same as the library name.
I agree here. But the user is free to alias the namespace to something more succinct and convenient. Josh

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Joshua Juran Sent: Thursday, September 01, 2011 9:12 AM To: boost@lists.boost.org Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
On Aug 31, 2011, at 10:46 AM, Simonson, Lucanus J wrote:
I'd like to see the name be mp_float. I prefer:
mp_float mp_int mp_rational
Obviously we need the mp_ prefix if we use float and int since these are keywords without it.
How about
floating integer rational
Also, if the user puts using namespace boost::multiprecision then they will benefit from the prefix.
If the user writes "namespace mp = boost::multiprecision;", then the mp_ prefix is redundant and ugly. If you really prefer to write mp_float to mp::floating, you can still "typedef boost::multiprecision::floating mp_float;".
I prefer multiprecision to mp since the library name I'm guessing would be multiprecision and it is conventional to have the namespace be the same as the library name.
Having given this matter some thought before Chris laid ideas for your input, perhaps I can give my twopennyworth.

multiprecision really is the right name for both a library Boost.Multiprecision, and thus for the enclosing namespace boost::multiprecision.

All the 'Big' variant names seem passé, and not quite right - for example, we *could* include floating-point types with smaller precision than float - for embedded systems. Who needs 6 decimal digits of precision for a toaster? ;-)

Abbreviating the library name to MP is just acronymitis (and raises the hackles of those who have already 'claimed' the letters M and P for various other purposes).

Adding a trailing _ is ugly, and is confused with similar nasty conventions like member functions/data, so this is an 'over my dead body' proposal.

The shortness of real is neat, though I agree it is not strictly mathematically right. I could live with floating, integer (integral?) and rational. (And perhaps also decimal? Where do decimal types - proposed, and implemented - fit into this scheme?)

So FWIW, I'll go with boost::multiprecision::floating, integer and rational.

Paul

--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Paul A. Bristow wrote:
Having given this matter some thought before Chris laid ideas for your input, perhaps I can give my twopennyworth.
multiprecision really is the right name for both a library Boost.Multiprecision, and thus for the enclosing namespace boost::multiprecision.
All the 'Big' variant names seem passé, and not quite right - for example, we *could* include floating-point types with smaller precision than float - for embedded systems. Who needs 6 decimal digits of precision for a toaster? ;-)
Abbreviating the library name to MP is just acronymitis (and raises the hackles of those who have already 'claimed' the letters M and P for various other purposes).
Adding a trailing _ is ugly, and confused with similar nasty conventions like member functions/data, so this is an 'over my dead body' proposal.
The shortness of real is neat, though I agree it is not strictly mathematically right. I could live with floating, integer (integral?) and rational.
So FWIW, I'll go with boost::multiprecision::floating, integer and rational
Paul
---------------------------------------------------------------------------------------

Well said. And allow me to throw out a big thanks to everyone who contributed. We need to wrap this one up for the weekend.

* I believe we received more positive feedback for "a variation of float...", neither real nor decimal.
* The consensus seems to be that the namespace mp is terse and confusing.
* Some like the trailing underscore. Most, however, do not favor it.

Let's go with this.

namespace boost { namespace multiprecision { class floating { }; } }

So this is the last call for anyone who can't live with it. After this weekend or next, I expect to have refactored the e_float code accordingly. Sincerely, Chris.

So this is the last call for anyone who can't live with it.
Since you're still asking... :-)
* I believe we received more positive feedback for "a variation of float...", neither real nor decimal. * The consensus seems to be that the namespace mp is terse and confusing. * Some like the trailing underscore. Most, however, do not favor it.
+1 all from me.
namespace boost { namespace multiprecision {
OK, so far...
class floating { }; } }
Please no! I'd expect a class name like this to be a noun (possibly preceded by adjectives). Am I alone? My vote goes to the following, already mentioned earlier:

namespace boost { namespace multiprecision { class mp_float; } }

£0.02, Gareth

Sylvester-Bradley, Gareth wrote:
So this is the last call for anyone who can't live with it.
Since you're still asking... :-)
* I believe we received more positive feedback for "a variation of float...", neither real nor decimal. * The consensus seems to be that the namespace mp is terse and confusing. * Some like the trailing underscore. Most, however, do not favor it.
+1 all from me.
namespace boost { namespace multiprecision {
OK, so far...
class floating { }; } }
Please no! I'd expect a class name like this to be a noun (possibly preceded by adjectives). Am I alone?
My vote goes to the following, already mentioned earlier:
namespace boost { namespace multiprecision { class mp_float; } }
I'm OK with multiprecision::floating. I'm guessing that floating would actually be a template with compile-time precision, based on previous statements by the author.

template <int precision> // number of bits
class floating {};

So if the user typed

using namespace multiprecision;
typedef floating<128> f128;
f128 my_val;

it would be pretty clear, I think, what his intent was. Also, an instance of such a template isn't multi-precision anymore, it is fixed precision, so mp_floating<128> is somewhat self-contradictory. If floating isn't a template and has runtime precision then I'd rather see it named mp_float.

I'm only half joking when I say that if it were my library I'd very likely name it multiprecision_floating_point_type. I use epic-length identifiers instead of comments. A function name is often a whole sentence explaining what the function does, e.g. rectangularize_negative_by_leaps. I do try to keep the sentence short and to the point, though. I'm actually getting worse about this as I get older. The switch to wide aspect ratio monitors has only encouraged this behavior and I now consider 160 characters to be a reasonable line length limit. Perhaps my opinion on naming things should be taken with a grain of salt. Regards, Luke

Sylvester-Bradley, Gareth wrote:
So this is the last call for anyone who can't live with it.
Since you're still asking... :-) My vote goes to the following, already mentioned earlier:
<snip>
namespace boost { namespace multiprecision { class mp_float; } }
£0.02, Gareth
----------------------------------------------------------------------------

Either way is fine. Some did not like the "mp_" prefix. I don't care. It will be either "floating" or "mp_float". I'll see how it looks in the code and tell y'all which one it is when I'm done with it in a week or two. Thanks again for all the contributions. Sincerely, Chris.

<snip>
It will be either "floating" or "mp_float". I'll see how it looks on the code... Thanks again for all the contributions.
<snip>

Boost.Multiprecision is available in the sandbox. It has:

namespace boost { namespace multiprecision {
  class mp_float { };
  class mp_complex { };
  mp_float sin(const mp_float&);
  mp_complex sin(const mp_complex&);
  // and many more functions, etc...
} }

The original work "e_float" remains in the sandbox, but will eventually be removed. Anyone using, testing or benchmarking the old "e_float" can now migrate to Boost.Multiprecision. I replaced my own stuff like "my_lexical_cast" with boost::lexical_cast, used boost::array instead of tr1::array or std::array, etc.

Before I document and adapt the tests and build to the boost way, may I request that some experienced designers look at my architecture? I am *not* the best C++ architect in the world. It would be nice if any *significant* architectural suggestions came before I document it and wrap it up for potential submission.

If you want to suggest architectural adaptations, though, please ensure that the change does not negatively impact the run-time performance of the *timed* test suite. This is high-performance mathematical software, offering world-class performance in its target digit range (< 200 decimal digits). Any architectural suggestions should check performance before and after the change.

I will be unavailable from Sunday, 25-October until the last week of October, 2011. Sincerely, Chris

Well said.
And allow me to throw out a big thanks to everyone who contributed.
We need to wrap this one up for the weekend.
* I believe we received more positive feedback for "a variation of float...", neither real nor decimal. * The consensus seems to be that the namespace mp is terse and confusing. * Some like the trailing underscore. Most, however, do not favor it. Let's go with this.
namespace boost { namespace multiprecision { class floating { }; } }
So this is the last call for anyone who can't live with it. After this weekend or next, I expect to have refactored the e_float code accordingly.
+1

On Sep 1, 2011, at 4:39 AM, Paul A. Bristow wrote:
I could live with floating, integer (integral?) and rational.
Please not 'integral' -- that's too easily associated with antiderivatives rather than classes of numbers. If you want to make the parts of speech consistent, try 'ratio'. (As for 'floating': Yes, it's an adjective, but it describes the fraction point (as in "floating point number"), not the number itself (as in "rational number"), so using all adjectives wouldn't be consistent anyway. And a 'float' is part of either a pier or a parade.) Josh

For me the rational data type is very important and should wrap the gmp mpq_type. It would be nice if the library could use mp_int with Boost.Rational to implement its own standalone mp_rational datatype, but I would prefer to use the gmp type whenever possible.
Support for mpq_t is on my (long) todo list.
By the way, I have put extensive thought into how to build a good expression template system for dealing with multi-precision arithmetic and would like to be involved in some of the discussion of the design of the system. Particularly, the most important thing about providing expression templates for multi-precision is eliminating allocations and deallocations. The allocations can easily dominate the runtime of a multiprecision algorithm.
I suspect this may depend on the data type - for real-valued types, I've done some experimenting with a port of the LINPACK benchmark to C++ that shows *almost* no difference between expression template enabled code (whether mine, or existing wrappers such as mpf_class) and more traditional class abstractions. I need to do some experimenting and profiling, but it seems like the runtime is completely dominated by the cost of multiplication/division, and even though my code generates fewer temporaries than either mpf_class or traditional non-ET wrappers, that simply doesn't take much off the runtime. Of course LINPACK is an old Fortran program written in a very idiomatic style that probably doesn't really stretch the expression template code at all. So arguably we need better test cases - the special functions in Boost.Math would make good candidates I guess.
There are many options and considerations for how to optimize the allocations, such as object pools, thread safety, etc. Anyone who uses multiprecision numerical data types in the same way they would use built-in numerical data types is writing code that runs significantly slower than the equivalent C code, which doesn't abstract away allocation and deallocation and forces the programmer to choose where and when to allocate. I saw large speedups in my Polygon library by recycling gmpq_type variables rather than declaring them in the innermost scope in which they were used.
I take it you mean that within the "big number" class type, initialized mpq_t variables get cached and recycled? That's an interesting idea, and not something I've investigated so far - not least because of the issues you highlight. One thing I do want to investigate for fixed-precision real-valued types (with mpfr or mpf backends) is to eliminate the allocation altogether by placing the storage directly in the class (if it makes sense for not-too-large precisions).

I have a couple of questions relating to the polygon library if that's OK?

* Is there a concept check program anywhere that I can plug my types into and verify they meet the library's conceptual requirements?
* Is there a good program for testing "big number" performance within your library (for either large integers or rationals)?

And yes, we would very much like your input ;-) With regard to expression templates, can I direct you to the "big_number" directory of the sandbox and the wildly out of date docs at http://svn.boost.org/svn/boost/sandbox/big_number/libs/math/doc/html/index.h... In particular, the conceptual requirements for backend types are constantly evolving - both as the needs of new backends are evaluated and as I come up with new "good ideas" ;-) Nonetheless I would much welcome your input, and will try and add a rational type backend ASAP for you to experiment with. Regards, John.

John Maddock wrote:
For me the rational data type is very important and should wrap the gmp mpq_type. It would be nice if the library could use mp_int with Boost.Rational to implement its own standalone mp_rational datatype, but I would prefer to use the gmp type whenever possible.
Support for mpq_t is on my (long) todo list.
I'm glad to hear. In theory a fixed precision floating point type with 128 bits or more in the mantissa would also satisfy my needs, but I have only tested with multi-precision rationals.
By the way, I have put extensive thought into how to build a good expression template system for dealing with multi-precision arithmetic and would like to be involved in some of the discussion of the design of the system. Particularly, the most important thing about providing expression templates for multi-precision is eliminating allocations and deallocations. The allocations can easily dominate the runtime of a multiprecision algorithm.
I suspect this may depend on the data type - for real-valued types, I've done some experimenting with a port of the LINPACK benchmark to C++ that shows *almost* no difference between expression template enabled code (whether mine, or existing wrappers such as mpf_class) and more traditional class abstractions. I need to do some experimenting and profiling, but it seems like the runtime is completely dominated by the cost of multiplication/division, and even though my code generates fewer temporaries than either mpf_class or traditional non-ET wrappers, that simply doesn't take much off the runtime. Of course LINPACK is an old Fortran program written in a very idiomatic style that probably doesn't really stretch the expression template code at all. So arguably we need better test cases - the special functions in Boost.Math would make good candidates I guess.
Gmp mpq_class expressions allocate for temporaries in compound expressions. If you declare your numerical types outside the loop and write only single-operation expressions inside the loop, with no implicit temporaries created, then mpq_class won't allocate at all in the loop (except when the max size of the buffer in the numerical types needs to be occasionally increased). The result can be allocation-free execution in the common case. And I have seen significant speedups due to rewriting code to avoid allocations in both the line segment intersection and voronoi diagram use cases. I would like to get the same benefits without needing to rewrite arithmetic code, so that replacing a built-in numerical datatype with a multiprecision type in a templated context would produce optimal code even if the author followed the normal style of declaring variables in the innermost scope possible and writing large compound expressions with many temporaries.
There are many options and considerations for how to optimize the allocations, such as object pools, thread safety, etc. Anyone who uses multiprecision numerical data types in the same way they would use built-in numerical data types is writing code that runs significantly slower than the equivalent C code, which doesn't abstract away allocation and deallocation and forces the programmer to choose where and when to allocate. I saw large speedups in my Polygon library by recycling gmpq_type variables rather than declaring them in the innermost scope in which they were used.
I take it you mean that within the "big number" class type initialized mpq_t variables get cached and recycled?
Yes. One simple way is to cache a temporary along with the primary value in each wrapped numerical object and return a reference to the temporary as the result of an operator. That way, if the numerical object stays in scope through many iterations of a loop, the temporary gets recycled also. However, this still requires that numerical objects be declared at the outer rather than inner scope, which is not normal style.
That's an interesting idea, and not something I've investigated so far - not least because of the issues you highlight.
One thing I do want to investigate for fixed-precision real-valued types (with mpfr or mpf backends) is to eliminate the allocation altogether by placing the storage directly in the class (if it makes sense for not-too large precisions).
I like the idea of fixed sized array directly in the class. It would be a huge win and beat typical usage of gmp in C++ with even a slower arithmetic algorithm because allocation dominates arithmetic for modest sized values.
I have a couple of questions relating to the polygon library if that's OK?
* Is there a concept check program anywhere that I can plug my types into and verify they meet the libraries conceptual requirements?
Not exactly. I considered writing one, but basically what you need to do is make sure the traits for your type are defined and that it is registered as the right concept type, then write a test that exercises the traits through the free function of the same name that expects the concept type. For example, the point concept has get and set free functions that exercise the get and set traits. If those two functions work correctly (and construct()), then your point is a model of the concept. If you copy-paste the example code for mapping a user-defined type to the concept, replace my type with yours and compile, the test is part of the example code, because it exercises the traits.
* Is there a good program for testing "big number" performance within your library (for either large integers or rationals)?
If you look at gmp_override.hpp you will see how I abstract away the gmpq_class as a high_precision_type<> metafunction lookup. You can replace gmpq_class with anything you want and benchmark. However, you will find that the performance of the high precision type is almost irrelevant to the performance of the larger algorithm, because lazy-exact arithmetic implies that it is very rarely, if ever, used.

What you can do instead is dig into my polygon_arbitrary_formation.hpp in details and find the function that computes the intersection point of two line segments using the high precision type. You can then write a benchmark that generates random line segments and intersects them. Once you have that working you can swap in different high precision types by replacing gmp_override.hpp with your own override header file that you include just after polygon.hpp and before your own benchmark code.

I have a "fitness" test for intersection points, so you can also use the benchmark code as a stress test for the numerical data type chosen, to see if it would provide robust execution of my algorithm. You will see the fitness test applied at the end of the lazy version of line segment intersection, just before it conditionally calls the exact version.
With regard to expression templates, can I direct you to the "big_number" directory of the sandbox and the wildly out of date docs at http://svn.boost.org/svn/boost/sandbox/big_number/libs/math/doc/html/index.h... In particular the conceptual requirements required for backend types are constantly evolving - both as the needs of new backends are evaluated - and as I come up with new "good ideas" ;-) Nonetheless I would much welcome your input, and will try and add a rationale type backend ASAP for you to experiment with.
I'll try to look at it as time permits. Thanks, Luke

For me the rational data type is very important and should wrap the gmp mpq_type. It would be nice if the library could use mp_int with Boost.Rational to implement its own standalone mp_rational datatype, but I would prefer to use the gmp type whenever possible.
Support for mpq_t is on my (long) todo list.
I'm glad to hear. In theory a fixed precision floating point type with 128 bits or more in the mantissa would also satisfy my needs, but I have only tested with multi-precision rationals.
Nod. In theory it should be possible to compose the low level GMP functions to create a fixed-precision (modulus arithmetic) integer type... but that's one for the future. I do have a rational/mpq backend for big_number (or whatever we call it) now, but I'm away for a week now, so it might not get committed for a while :-(
Gmp mpq_class expressions allocate for temporaries in compound expressions.
Nod, one thing I've worked hard on is eliminating temporaries inside expressions whenever possible, so if you write say:

a = b + c + d + e + f;

Then you get no temporaries at all - everything gets accumulated inside variable a. However, something like:

a = b*c + d*e;

does require one temporary for *one* of the multiplications (the other uses "a" as working space). One thing I've thought of is walking the expression tree at compile time to calculate the largest number of temporaries in the longest "branch", then creating these as an array at the outermost level of the expression evaluation and passing them down, so that:

a = b*c + b*d + b*e + b*f;

still only needs one temporary - which gets reused for each branch. This needs some more thought though - to make sure I don't shoot myself in the foot and reuse something that shouldn't be!
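The accumulate-into-the-target trick John describes can be illustrated with a deliberately tiny sketch (all names here are hypothetical illustration, not John's big_number code): expression objects capture references to the operands, and assignment evaluates the whole chain directly into the target's storage, so no intermediate number object is ever constructed:

```cpp
#include <cassert>

// Hypothetical miniature "bignum": a global counter tracks constructions
// so we can observe that evaluating a chained sum creates no temporaries
// of the number type itself.
static int constructions = 0;

struct num {
    long v;
    num(long x = 0) : v(x) { ++constructions; }
    num& operator+=(const num& o) { v += o.v; return *this; }
};

// Lightweight expression nodes holding only references to their operands.
struct sum2 { const num& l; const num& r; };
struct sum3 { sum2 lhs; const num& r; };

sum2 operator+(const num& l, const num& r) { return sum2{l, r}; }
sum3 operator+(const sum2& e, const num& r) { return sum3{e, r}; }

// Assignment walks the expression and accumulates directly into the
// target, reusing its storage as the working space.
void assign(num& target, const sum3& e) {
    target.v = e.lhs.l.v;
    target += e.lhs.r;
    target += e.r;
}
```

A real implementation handles arbitrary expression shapes generically; this fixed two-node version only shows why `a = b + c + d` needs no temporaries.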
* Is there a concept check program anywhere that I can plug my types into and verify they meet the libraries conceptual requirements?
Not exactly. I considered writing one, but basically what you need to do is make sure the traits for your type are defined and that it is registered as the right concept type, then write a test that exercises the traits through the free function of the same name that expects the concept type. For example, the point concept has get and set free functions that exercise the get and set traits. If those two functions work correctly (along with construct()) then your point is a model of the concept. If you copy-paste the example code for mapping a user-defined type to the concept, replace my type with yours, and compile, the test is part of the example code because I exercise the traits there.
* Is there a good program for testing "big number" performance within your library (for either large integers or rationals)?
If you look at the gmp_override.hpp you will see how I abstract away the gmpq_class as a high_precision_type<> metafunction lookup. You can replace gmpq_class with anything you want and benchmark. However, you will find that the performance of the high precision type is almost irrelevant to the performance of the larger algorithm, because lazy-exact arithmetic implies that it is very rarely, if ever, used. What you can do instead is dig into my polygon_arbitrary_formation.hpp in detail and find the function that computes the intersection point of two line segments using the high precision type. You can then write a benchmark that generates random line segments and intersects them. Once you have that working you can swap in different high precision types by replacing gmp_override.hpp with your own override header file that you include just after polygon.hpp and before your own benchmark code. I have a "fitness" test for intersection points so you can also use the benchmark code as a stress test for the numerical data type chosen to see if it would provide robust execution of my algorithm. You will see the fitness test applied at the end of the lazy version of line segment intersection just before it conditionally calls the exact version.
OK sounds like that might not be as simple as I'd hoped, will investigate as I get time, Regards, John.

John Maddock wrote:
Gmp mpq_class expressions allocate for temporaries in compound expressions.
Nod, one thing I've worked hard on is eliminating temporaries inside expressions whenever possible, so if you write say:
a = b + c + d + e + f;
Then you get no temporaries at all - everything gets accumulated inside variable a.
However, something like:
a = b*c + d*e;
does require one temporary for *one* of the multiplications (the other uses "a" as working space).
I really like this, and I think it may be about the best we can do. If you could somehow abstract out the compile time logic of doing this with an expression template system from your application of it to bignum then it could become a boost library extension that might find a home in proto (assuming you use proto). I'd imagine Joel F. would like to discuss this more with you. You've already pushed the expression templates about as far as they can go to accomplish this, and farther than I ever have.
One thing I've thought of is walking the expression tree at compile time to calculate the largest number of temporaries in the longest "branch", then creating these as an array at the outermost level of the expression evaluation and passing them down, so that:
a = b*c + b*d + b*e + b*f;
still only needs one temporary - which gets reused for each branch.
This needs some more thought though - to make sure I don't shoot myself in the foot and reuse something that shouldn't be.
You could cache the array of temporaries in "a" for reuse when "a" is reused.

bignum a, b, c, d, e, f;
for (int i = 0; i < 1000; ++i) {
  b = B[i]; c = C[i]; d = D[i]; e = E[i]; f = F[i];
  a = b*c + b*d + b*e + b*f;
  foo(a);
}

Allocates a, b, c, d, e, f and two temporaries for a total of 8 allocations instead of 6006 with gmpq_class. The only problem is that it is more natural to declare a-f inside the loop or just write:

for (int i = 0; i < 1000; ++i) {
  foo( b[i]*c[i] + b[i]*d[i] + b[i]*e[i] + b[i]*f[i]);
}

We could make the array mutable and cache it in any of the arguments of the expression, but that wouldn't help in this case, and wouldn't help if they were all rvalues.

for (int i = 0; i < 1000; ++i) {
  foo( b(i)*c(i) + b(i)*d(i) + b(i)*e(i) + b(i)*f(i));
}

If we try to fall back to a custom allocator we have all the problems with thread safety and have only really accomplished improving on normal heap allocation by applying a pool. The problem with creating an arena of gmpq_class objects is that what we really want is to cache the object that has the right *size* buffer internally to store the result of the operation. In general that is different from one temporary to the next in the code. If you use an arena you get an arbitrary object and you end up growing all their buffers to store the largest temporary, which is a lot more allocations and a lot more memory consumed. Also, with the arena you can't release the memory without putting that burden on the library user.

In the end there are problems with any approach that goes beyond what you've already done and I'm not entirely sure what our best option is. I'm afraid we can never abstract away the management of memory from the user of the code and give them something that can be as efficient as they could be if they managed their allocations explicitly. Regards, Luke

On 02/09/11 20:34, Simonson, Lucanus J wrote:
John Maddock wrote:
Gmp mpq_class expressions allocate for temporaries in compound expressions. Nod, one thing I've worked hard on is eliminating temporaries inside expressions whenever possible, so if you write say:
a = b + c + d + e + f;
Then you get no temporaries at all - everything gets accumulated inside variable a.
However, something like:
a = b*c + d*e;
does require one temporary for *one* of the multiplications (the other uses "a" as working space). I really like this, and I think it may be about the best we can do. If you could somehow abstract out the compile time logic of doing this with an expression template system from your application of it to bignum then it could become a boost library extension that might find a home in proto (assuming you use proto). I'd imagine Joel F. would like to discuss this more with you. You've already pushed the expression templates about as far as they can go to accomplish this, and farther than I ever have.
One thing I've thought of is walking the expression tree at compile time to calculate the largest number of temporaries in the longest "branch", then creating these as an array at the outermost level of the expression evaluation and passing them down, so that:
a = b*c + b*d + b*e + b*f;
still only needs one temporary - which gets reused for each branch.
This needs some more thought though - to make sure I don't shoot myself in the foot and reuse something that shouldn't be. You could cache the array of temporaries in "a" for reuse when "a" is reused.
bignum a, b, c, d, e, f;
for (int i = 0; i < 1000; ++i) {
  b = B[i]; c = C[i]; d = D[i]; e = E[i]; f = F[i];
  a = b*c + b*d + b*e + b*f;
  foo(a);
}
Allocates a,b,c,d,e,f and two temporaries for a total of 8 allocations instead of 6006 with gmpq_class.
The only problem is that it is more natural to declare a-f inside the loop or just write:
for (int i = 0; i < 1000; ++i) { foo( b[i]*c[i] + b[i]*d[i] + b[i]*e[i] + b[i]*f[i]); }
We could make the array mutable and cache it in any of the arguments of the expression, but that wouldn't help in this case, and wouldn't help if they were all rvalues.
for (int i = 0; i < 1000; ++i) { foo( b(i)*c(i) + b(i)*d(i) + b(i)*e(i) + b(i)*f(i)); }
If we try to fall back to a custom allocator we have all the problems with thread safety and have only really accomplished improving on normal heap allocation by applying a pool. The problem with creating an arena of gmpq_class objects is that what we really want is to cache the object that has the right *size* buffer internally to store the result of the operation. In general that is different from one temporary to the next in the code. If you use an arena you get an arbitrary object and you end up growing all their buffers to store the largest temporary, which is a lot more allocations and a lot more memory consumed. Also, with the arena you can't release the memory without putting that burden on the library user.
In the end there are problems with any approach that goes beyond what you've already done and I'm not entirely sure what our best option is. I'm afraid we can never abstract away the management of memory from the user of the code and give them something that can be as efficient as they could be if they managed their allocations explicitly.
Hi, I don't know if this could help. The library could provide a class that caches temporary objects built and used while evaluating an expression assigned to it. This cache would provide an explicit bignum conversion operator. The user could declare this cache outside the loop to avoid the temporary allocations, for example:

bignum_cache tmp;
for (int i = 0; i < 1000; ++i) {
  tmp = b[i]*c[i] + b[i]*d[i] + b[i]*e[i] + b[i]*f[i];
  foo( bignum(tmp) ); // explicit conversion
}

We could see this cache as a smart collection of registers. Best, Vicente
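Vicente's idea can be sketched minimally as follows (everything here is hypothetical illustration, not a proposed interface): the cache owns a scratch number whose buffer survives across loop iterations, and only the explicit conversion produces a fresh value:

```cpp
#include <cassert>
#include <vector>

// Stand-in bignum: a vector-backed value, so buffer reuse is meaningful.
struct bignum {
    std::vector<unsigned> limbs;
    explicit bignum(unsigned v = 0) : limbs(1, v) {}
    unsigned value() const { return limbs[0]; }
};

// Sketch of the proposed bignum_cache: assignment writes the expression
// result into reusable scratch storage (here a plain unsigned stands in
// for an evaluated expression); an explicit conversion operator hands
// out a bignum copy only when the user asks for one.
struct bignum_cache {
    bignum scratch;
    bignum_cache& operator=(unsigned expr_result) {
        scratch.limbs[0] = expr_result;   // buffer reused, no reallocation
        return *this;
    }
    explicit operator bignum() const { return scratch; }
};
```

In a real library the right-hand side of the assignment would be an expression template rather than a built-in value, but the ownership pattern is the same.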

Vicente J. Botet Escriba wrote:
On 02/09/11 20:34, Simonson, Lucanus J wrote:
Gmp mpq_class expressions allocate for temporaries in compound expressions. Nod, one thing I've worked hard on is eliminating temporaries inside expressions whenever possible, so if you write say:
a = b + c + d + e + f;
Then you get no temporaries at all - everything gets accumulated inside variable a.
However, something like:
a = b*c + d*e;
does require one temporary for *one* of the multiplications (the other uses "a" as working space). I really like this, and I think it may be about the best we can do. If you could somehow abstract out the compile time logic of doing this with an expression template system from your application of it to bignum then it could become a boost library extension that might find a home in proto (assuming you use proto). I'd imagine Joel F. would like to discuss this more with you. You've already pushed the expression templates about as far as they can go to accomplish this, and farther than I ever have.
One thing I've thought of is walking the expression tree at compile time to calculate the largest number of temporaries in the longest "branch", then creating these as an array at the outermost level of the expression evaluation and passing them down, so that:
a = b*c + b*d + b*e + b*f;
still only needs one temporary - which gets reused for each branch.
This needs some more thought though - to make sure I don't shoot myself in the foot and reuse something that shouldn't be. You could cache the array of temporaries in "a" for reuse when "a" is reused.
bignum a, b, c, d, e, f;
for (int i = 0; i < 1000; ++i) {
  b = B[i]; c = C[i]; d = D[i]; e = E[i]; f = F[i];
  a = b*c + b*d + b*e + b*f;
  foo(a);
}
Allocates a,b,c,d,e,f and two temporaries for a total of 8 allocations instead of 6006 with gmpq_class.
The only problem is that it is more natural to declare a-f inside the loop or just write:
for (int i = 0; i < 1000; ++i) { foo( b[i]*c[i] + b[i]*d[i] + b[i]*e[i] + b[i]*f[i]); }
We could make the array mutable and cache it in any of the arguments of the expression, but that wouldn't help in this case, and wouldn't help if they were all rvalues.
for (int i = 0; i < 1000; ++i) { foo( b(i)*c(i) + b(i)*d(i) + b(i)*e(i) + b(i)*f(i)); }
If we try to fall back to a custom allocator we have all the problems with thread safety and have only really accomplished improving on normal heap allocation by applying a pool. The problem with creating an arena of gmpq_class objects is that what we really want is to cache the object that has the right *size* buffer internally to store the result of the operation. In general that is different from one temporary to the next in the code. If you use an arena you get an arbitrary object and you end up growing all their buffers to store the largest temporary, which is a lot more allocations and a lot more memory consumed. Also, with the arena you can't release the memory without putting that burden on the library user.
In the end there are problems with any approach that goes beyond what you've already done and I'm not entirely sure what our best option is. I'm afraid we can never abstract away the management of memory from the user of the code and give them something that can be as efficient as they could be if they managed their allocations explicitly.
Hi,
I don't know if this could help.
The library could provide a class that caches temporary objects built and used while evaluating an expression assigned to it. This cache would provide an explicit bignum conversion operator. The user could declare this cache outside the loop to avoid the temporary allocations, for example:
bignum_cache tmp;
for (int i = 0; i < 1000; ++i) {
  tmp = b[i]*c[i] + b[i]*d[i] + b[i]*e[i] + b[i]*f[i];
  foo( bignum(tmp) ); // explicit conversion
}
We could see this cache as a smart collection of registers.
Yes, however, the ideal solution would allow a person who wrote code in a templated context with numerical data type T assumed to be a builtin floating point type to substitute the bignum datatype for T and get optimal code. One of my suggestions was to put the bignum_cache inside the bignum datatype itself and make its use implicit. If you look in the implementation details of Polygon I have what I call a

struct compute_intersection_pack {
  typedef typename high_precision_type<Unit>::type high_precision;
  high_precision y_high, dx1, dy1, dx2, dy2, x11, x21, y11, y21,
                 x_num, y_num, x_den, y_den, x, y;
  ...implementation of line intersection operation...
};

That is a struct that has registers for all of the variables I use in the long expressions for computing the line segment pair intersection point. These are like registers, and I make the pack a member of the object that encapsulates the larger all-pairs line segment intersection algorithm to recycle the high_precision values it caches. There are almost as many temporaries as explicit variables in my expression, but even so, doing this caching rather than reallocating these dozen or so variables each time I performed the computation made a significant performance improvement before I applied lazy-exact arithmetic and relegated high precision arithmetic to Amdahl's prison of code that could be faster but isn't worth the effort.

Similarly my colleague Andrii used a static array of gmp values as an arena and treated them like registers; unfortunately that is not thread safe, so it is an optional feature of his implementation. We would like to see a general solution in a boost multiprecision library, but it is important that it not impose a burden on the user of the library, or use of it in a templated context would be prohibitive. Multiprecision::mp_float needs to behave in all ways as much like a regular float as reasonably possible to make its substitution into other boost libraries feasible.
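The register-pack pattern Luke describes can be sketched in miniature; this toy reconstruction reuses his published x-coordinate formula, with `hp` standing in for the real high-precision type (the names and the simplified single-coordinate interface are illustrative, not Polygon's actual API):

```cpp
#include <cassert>

using hp = long double;  // placeholder for a real multiprecision type

// A struct owning its high-precision scratch variables, so repeated
// calls reuse their storage instead of reallocating each time.
struct intersection_pack {
    hp dx1, dy1, dx2, dy2, x_num, x_den;  // scratch "registers"

    // Compute the x coordinate of the intersection of two line segments
    // given by their endpoints (toy version; the real code computes both
    // coordinates and handles rounding).
    bool compute_x(hp& x, hp x11, hp y11, hp x12, hp y12,
                          hp x21, hp y21, hp x22, hp y22) {
        dx1 = x12 - x11; dy1 = y12 - y11;
        dx2 = x22 - x21; dy2 = y22 - y21;
        x_den = dy1 * dx2 - dy2 * dx1;
        if (x_den == 0) return false;     // parallel segments
        x_num = x11 * dy1 * dx2 - x21 * dy2 * dx1
              + y21 * dx1 * dx2 - y11 * dx1 * dx2;
        x = x_num / x_den;
        return true;
    }
};
```

Making the pack a long-lived member of the enclosing algorithm object is what turns its fields into reusable registers when `hp` is a heap-allocating type.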
The design of the multi-precision library will necessarily involve some trade-offs. It looks like the current bignum expression templates will improve the performance of my own code (with respect to the number of allocations) if I substitute it for gmpq_class, and I'm pretty happy with that overall. I don't think a boost multiprecision library needs to be perfect; I'll be satisfied that it is an improvement on gmp's own expression templates. Regards, Luke

Simonson, Lucanus J wrote <snip>
Multiprecision::mp_float needs to behave in all ways as much like a regular float as reasonably possible to make its substitution into other boost libraries feasible.
boost::multiprecision::mp_float will behave like a POD. Interaction with other PODs and the proper behavior with I/O streams are all implemented and test routines exist for these. Paul Bristow has dedicated a great deal of time to the standardization of boost::multiprecision::mp_float. With his patient cooperation, we are reaching a very high level of floating point standardization including some of the most minute levels of floating point behavior (infinities, NaNs, true equality, precision with o-streams, div by zero, overflow, underflow, etc.). ...And it's still got high performance to boot. The name change from boost::e_float to boost::multiprecision::mp_float has not yet taken place. My work is still called boost::e_float. If you want to get a preview, you can take a look at boost::e_float in the sandbox. The software is ripe, but not yet boost-ready. boost::multiprecision::mp_float will include wrappers for my own portable EFX, as well as for GMP and MPFR. This is all in the sandbox, available as compiler switch options or via project selection with VS2010. Native plug-in compatibility with boost::math as well as with John's future big_number (or whatever it will be called) without the need for special bindings is planned (not quite finished, but close).
The design of the multi-precision library will necessarily involve some trade-offs. I looks like the current bignum expression templates will improve the performance of my own code (wrt number of allocations) if I substitute it for gmpq_class, and I'm pretty happy with that overall.
John needs to correct me if I get this wrong, but... Remember that John is working on a higher-level interface for big number math involving expression templates. He will be making wrappers for GMP, MPFR, NTL and boost::multiprecision::mp_float. He has *big* plans extending all the way from integer to float to rational and beyond. The big number back-end to John's templates are selected via template parameters.
I don't think a boost multiprecision library needs to be perfect, I'll be satisfied that it is an improvement on gmp's own expression templates.
Regards, Luke
Nothing is ever perfect. But when we are finished, we will be one tiny step closer to high-precision math that *behaves* in the C++ way. Sincerely, Chris.

on Sun Sep 04 2011, Christopher Kormanyos <e_float-AT-yahoo.com> wrote:
boost::multiprecision::mp_float will behave like a POD.
That would mean you could copy it with memcpy. But that's not what you intend to say, is it? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Christopher Kormanyos wrote:
boost::multiprecision::mp_float will behave like a POD.
Dave Abrahams wrote:
That would mean you could copy it with memcpy. But that's not what you intend to say, is it?
No, just normal stuff such as instances of boost::multiprecision::mp_float constructed and copied from other PODs like a regular float, double or long double. For example:

boost::multiprecision::mp_float x = 4.4;
boost::multiprecision::mp_float y = x / 3;
std::cout << std::setprecision(std::numeric_limits<boost::multiprecision::mp_float>::digits10)
          << std::scientific
          << boost::multiprecision::sqrt(x / y)
          << std::endl;

... and so on... But if you have a sequence of them, they can be copied with std::copy or set with std::fill, and the lot. Sincerely, Chris.

on Sun Sep 04 2011, Christopher Kormanyos <e_float-AT-yahoo.com> wrote:
Christopher Kormanyos wrote:
boost::multiprecision::mp_float will behave like a POD.
Dave Abrahams wrote:
That would mean you could copy it with memcpy. But that's not what you intend to say, is it?
No, just normal stuff such as instances of boost::multiprecision::mp_float constructed and copied from other PODs like a regular float, double or long double.
This has little to do with POD-ness, FWIW. You should find some other way to describe it (e.g. "they behave like the built-in numeric types").
For example:
boost::multiprecision::mp_float x = 4.4; boost::multiprecision::mp_float y = x / 3;
std::cout << std::setprecision(std::numeric_limits<boost::multiprecision::mp_float>::digits10) << std::scientific << boost::multiprecision::sqrt(x / y) << std::endl;
... and so on...
But if you have a sequence of them, they can be copied with std::copy or set with std::fill, and the lot.
Sincerely, Chris.
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

Christopher Kormanyos wrote
No, just normal stuff such as instances of boost::multiprecision::mp_float constructed and copied from other PODs like a regular float, double or long double.
Dave Abrahams wrote
This has little to do with POD-ness, FWIW. You should find some other way to describe it (e.g. "they behave like the built-in numeric types").
Your suggestion is valuable, and I will adhere to it. boost::multiprecision::mp_float will behave like a built-in numeric type. Thank you. Sincerely, Chris.

John Maddock wrote:
For me the rational data type is very important and should wrap the gmp mpq_type. It would be nice if the library could use mp_int with Boost.Rational to implement its own standalone mp_rational datatype, but I would prefer to use the gmp type whenever possible.
Support for mpq_t is on my (long) todo list.
I'm glad to hear. In theory a fixed precision floating point type with 128 bits or more in the mantissa would also satisfy my needs, but I have only tested with multi-precision rationals.
Nod. In theory it should be possible to compose the low level GMP functions to create a fixed-precision (modulus arithmetic) integer type... but that's one for the future.
I do have a rational/mpq backend for big_number (or whatever we call it) now, but I'm away for a week now, so it might not get committed for a while :-(
Gmp mpq_class expressions allocate for temporaries in compound expressions.
Nod, one thing I've worked hard on is eliminating temporaries inside expressions whenever possible, so if you write say:
a = b + c + d + e + f;
Then you get no temporaries at all - everything gets accumulated inside variable a.
However, something like:
a = b*c + d*e;
does require one temporary for *one* of the multiplications (the other uses "a" as working space).
One thing I've thought of is walking the expression tree at compile time to calculate the largest number of temporaries in the longest "branch", then creating these as an array at the outermost level of the expression evaluation and passing them down, so that:
a = b*c + b*d + b*e + b*f;
still only needs one temporary - which gets reused for each branch.
This needs some more thought though - to make sure I don't shoot myself in the foot and reuse something that shouldn't be!
* Is there a concept check program anywhere that I can plug my types into and verify they meet the libraries conceptual requirements?
Not exactly. I considered writing one, but basically what you need to do is make sure the traits for your type are defined and that it is registered as the right concept type, then write a test that exercises the traits through the free function of the same name that expects the concept type. For example, the point concept has get and set free functions that exercise the get and set traits. If those two functions work correctly (along with construct()) then your point is a model of the concept. If you copy-paste the example code for mapping a user-defined type to the concept, replace my type with yours, and compile, the test is part of the example code because I exercise the traits there.
* Is there a good program for testing "big number" performance within your library (for either large integers or rationals)?
If you look at the gmp_override.hpp you will see how I abstract away the gmpq_class as a high_precision_type<> metafunction lookup. You can replace gmpq_class with anything you want and benchmark. However, you will find that the performance of the high precision type is almost irrelevant to the performance of the larger algorithm, because lazy-exact arithmetic implies that it is very rarely, if ever, used. What you can do instead is dig into my polygon_arbitrary_formation.hpp in detail and find the function that computes the intersection point of two line segments using the high precision type. You can then write a benchmark that generates random line segments and intersects them. Once you have that working you can swap in different high precision types by replacing gmp_override.hpp with your own override header file that you include just after polygon.hpp and before your own benchmark code. I have a "fitness" test for intersection points so you can also use the benchmark code as a stress test for the numerical data type chosen to see if it would provide robust execution of my algorithm. You will see the fitness test applied at the end of the lazy version of line segment intersection just before it conditionally calls the exact version.
OK sounds like that might not be as simple as I'd hoped, will investigate as I get time,
Actually, it turns out it is simple if you use the trunk instead of the release version of Polygon. I have directed_line_segment_concept in the trunk with generic function:

// set point to the intersection of segment and b
template <typename point_type, typename segment_type, typename segment_type_2>
typename enable_if<
  typename gtl_and_4<y_dls_intersect,
    typename is_mutable_point_concept<typename geometry_concept<point_type>::type>::type,
    typename is_directed_line_segment_concept<typename geometry_concept<segment_type>::type>::type,
    typename is_directed_line_segment_concept<typename geometry_concept<segment_type_2>::type>::type>::type,
  bool>::type
intersection(point_type& intersection, const segment_type& segment, const segment_type_2& b,
             bool projected = false, bool round_closest = false) {
  typedef polygon_arbitrary_formation<typename directed_line_segment_traits<segment_type>::coordinate_type> paf;
  typename paf::Point pt;
  typename paf::Point l, h, l2, h2;
  assign(l, low(segment));
  assign(h, high(segment));
  assign(l2, low(b));
  assign(h2, high(b));
  typename paf::half_edge he1(l, h), he2(l2, h2);
  typename paf::compute_intersection_pack pack;
  if(pack.compute_intersection(pt, he1, he2, projected, round_closest)) {
    assign(intersection, pt);
    return true;
  }
  return false;
}

That computes the intersection point. If you replace pack.compute_intersection with pack.compute_exact_intersection it will not do the lazy intersection computation and you will get maximal usage of your bignum type. You still need to #include gmp_override.hpp and provide your own for other types.
The only relevant code in gmp_override.hpp is:

template <>
struct high_precision_type<int> {
  typedef mpq_class type;
};

template <>
int convert_high_precision_type<int>(const mpq_class& v) {
  mpz_class num = v.get_num();
  mpz_class den = v.get_den();
  num /= den;
  return num.get_si();
};

Just specify your own bignum as the specialization of high_precision_type for coordinate type int and your own conversion specialization to convert it to int. Here are the expressions I use for computing the intersection point:

x_num = (x11 * dy1 * dx2 - x21 * dy2 * dx1 + y21 * dx1 * dx2 - y11 * dx1 * dx2);
x_den = (dy1 * dx2 - dy2 * dx1);
y_num = (y11 * dx1 * dy2 - y21 * dx2 * dy1 + x21 * dy1 * dy2 - x11 * dy1 * dy2);
y_den = (dx1 * dy2 - dx2 * dy1);
x = x_num / x_den;
y = y_num / y_den;

As you can see, I'm not caching my temporaries, despite being concerned about that sort of thing. The reason is that lazy exact evaluation pushes the use of the high precision type down to one-in-a-thousand or lower odds, so I don't really care about the cost of getting the right answer in the unlikely event that long double gave me the wrong answer; it just needs to be right. My concern about efficiency is more about getting the best bignum library, not for my use case. Regards, Luke
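To mirror those two specializations with a user-defined type, the plumbing looks roughly like this. Note that `my_rational` and the local re-declarations of the trait templates are stand-ins for illustration only; in real use you would include Polygon's headers and specialize its templates rather than redefine them:

```cpp
#include <cassert>

// Hypothetical user-defined high-precision rational (stand-in for a
// real bignum type such as an mpq wrapper).
struct my_rational {
    long num, den;
    my_rational(long n = 0, long d = 1) : num(n), den(d) {}
};

// Local stand-ins for Polygon's customization points.
template <typename T> struct high_precision_type {};
template <typename T>
T convert_high_precision_type(const typename high_precision_type<T>::type& v);

// The user's two specializations, following the gmp_override.hpp pattern:
// pick the high-precision type for coordinate type int...
template <> struct high_precision_type<int> { typedef my_rational type; };

// ...and say how to convert it back to int (truncating, as in the mpq
// version above).
template <>
int convert_high_precision_type<int>(const my_rational& v) {
    return static_cast<int>(v.num / v.den);
}
```

With these in place of gmp_override.hpp, the intersection code's `high_precision_type<Unit>::type` lookup resolves to the user's type.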

OK sounds like that might not be as simple as I'd hoped, will investigate as I get time,
Actually, it turns out it is simple if you use the trunk instead of release version of Polygon. I have directed_line_segment_concept in the trunk with generic function:
I've started investigating this - although I got the code for this working, I couldn't find any documentation on directed_line_segments?
If you replace pack.compute_intersection with pack.compute_exact_intersection it will not do the lazy intersection computation and you will get maximal usage of your bignum type. You still need to #include gmp_override.hpp and provide your own for other types.
Nod.
Just specify your own bignum as the specialization of high_precision_type for coordinate type int and your own conversion specialization to convert it to int.
Here are the expressions I use for computing the intersection point:

x_num = (x11 * dy1 * dx2 - x21 * dy2 * dx1 + y21 * dx1 * dx2 - y11 * dx1 * dx2);
x_den = (dy1 * dx2 - dy2 * dx1);
y_num = (y11 * dx1 * dy2 - y21 * dx2 * dy1 + x21 * dy1 * dy2 - x11 * dy1 * dy2);
y_den = (dx1 * dy2 - dx2 * dy1);
x = x_num / x_den;
y = y_num / y_den;
As you can see, I'm not caching my temporaries, despite being concerned about that sort of thing. The reason is that lazy exact evaluation pushes the use of high precision type down to one in a thousand or lower odds, so I don't really care about the cost of getting the right answer in the unlikely event that long double gave me the wrong answer, it just needs to be right. My concern about efficiency is more about getting the best bignum library, not for my use case.
I tried this, as well as instantiating my version of the intersection function directly on the big-number-rational type... but I saw pretty much no difference between mpq_class and my expression-template-enabled version. Investigating further, it seems that there are a lot of superfluous temporaries being created which completely swamp anything gained in evaluating complex expressions. For example in compute_exact_intersection:

dy2 = (high_precision)(he2.second.get(VERTICAL)) - (high_precision)(he2.first.get(VERTICAL));

What benefit are the typecasts here? They create 2 additional temporaries (and there are quite a few of these statements), but add no further precision to the calculated result - unless, possibly, the arguments are floating point types, and even then it's questionable. Then another bunch of typecasts:

x11 = (high_precision)(he1.first.get(HORIZONTAL));

which all create an extra temporary. Presumably if type high_precision can be copy-constructed from the argument, it can have that argument assigned to it as well? A clearer conceptual model would sort that one out I guess. Then:

x = x + (high_precision)0.5;
y = y + (high_precision)0.5;

Only one temporary with value 0.5 is required, and maybe none at all if type high_precision supports mixed arithmetic with type double (that can be checked with the operator traits extension to type traits that should be in Trunk fairly soon - I hope!). Then:

if(x < (high_precision)x_unit) --x_unit;
if(y < (high_precision)y_unit) --y_unit;

Again, if mixed comparisons are supported they're likely to be more efficient than a temporary creation. Of course arguably internal caching could fix all this... HTH, John.
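John's point about mixed arithmetic can be demonstrated with a toy construction counter (the `hp` type and its operators are hypothetical, not from any of the libraries discussed): an overload taking the built-in operand directly constructs only the result, while casting the operand first costs an extra temporary:

```cpp
#include <cassert>

// Toy high-precision stand-in that counts constructions, so the cost of
// each extra temporary is visible.
static int constructions = 0;

struct hp {
    double v;
    hp(double x = 0) : v(x) { ++constructions; }
};

// Homogeneous subtraction: both operands must already be hp, so a
// built-in right-hand side forces a converting temporary at the call site.
hp operator-(const hp& a, const hp& b) { return hp(a.v - b.v); }

// Mixed-mode subtraction: the double operand is used directly, so no hp
// temporary is needed for the right-hand side.
hp operator-(const hp& a, double b) { return hp(a.v - b); }
```

The same reasoning applies to the mixed comparisons John mentions: an `operator<(const hp&, long)` avoids materializing `(high_precision)x_unit` just to throw it away.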

From: Mostafa <mostafa_working_away@yahoo.com>
To: boost@lists.boost.org
Sent: Wednesday, August 31, 2011 7:02 AM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
For namespaces, we have:
1. boost::mp::mp_real 2. boost::multiprecision::mp_real 3. boost::multiple_precision::mp_real Our current favorite namespace is number 2: boost::multiprecision.
<snip> If you're already in namespace mp, multiprecision, etc., then why prefix entities in that namespace with "mp_"? Mostafa

That's a good question. I personally feel that names such as "real", "complex" and "integer", although intuitive and eloquent, can easily lead to non-uniqueness when *using* namespaces. One example that immediately comes to mind is std::complex<...>. Therefore, I think that the prefix "mp_" adds just enough to avoid a variety of unwanted pitfalls. Sincerely, Chris.
participants (15)
- Christopher Kormanyos
- Dave Abrahams
- Dominique Devienne
- Jeffrey Lee Hellrung, Jr.
- John Maddock
- Joshua Juran
- Matthias Schabel
- Mostafa
- Paul A. Bristow
- Simonson, Lucanus J
- Steven Maitlall
- Steven Watanabe
- Sylvester-Bradley, Gareth
- Thomas Klimpel
- Vicente J. Botet Escriba