[complex] Feedback and Potential Review Manager

Hi Matthieu, I would like to provide a detailed feedback on the proposed Boost.Complex. I am terribly sorry about the late response on this, as I was not doing well in winter when this topic was more active. First of all, great work with this, Matthieu. I believe the community desperately requires this facility. Boost.Complex appears to have two main goals: A) Provide acceleration for pure imaginary numbers. B) Allow complex to be used with user-defined types. For goal B), I have briefly tested parts of the Boost.Complex code with the cpp_dec_float multiprecision data type proposed for Boost.Multiprecision by John Maddock. My Tests include: * Test some elementary transcendental functions. * Test the overall package via computation of orthogonal polynomials.* My test file (complex.cpp) is included in this e-mail. * Visual Studio 2010, SP1. * Proposed Boost.Multiprecision and boost trunk. Remarks: 1) Various functions use macros like M_PI_2. These macros do not provide enough precision for a multiprecision type. And they will be inefficient for some fixed-point types. Perhaps you need to "bite the bullet" and use boost::math::constants. (Is M_PI_2 even part of C++, or is it only lurking around in <math.h> of GCC?) 2) Ditto for the LN_10 and LN_2 constants. Does boost::math::constants support ln_ten()? I do know that it has ln_two(). 3) I disagree with explicit complex(const T& re = T(0.), const T& im = T(0.)) I believe this constructor should be non-explicit. See ISO/IEC 14882:2011, Section 26.4.2. 4) I would really like to see global operators for seamless interaction of complex<T> with integer PODs. The proposed code of boost::complex forces explicit construction from integer "all the way" to boost::complex<T> just to, for example, multiply with 2. This is extremely costly as most multiprecision or fixed-point types have specially optimized routines for multiplication with integer PODs. Perhaps something like this would be good template<typename T> complex<T> operator*(const complex<T>& z, int n) { T re(z.real()); T im(z.imag()); re *= n; im *= n; return complex<T>(re, im); } 5) In the exp() function, polar() can not be resolved without explicit template parameter. Instead of this: template<typename T> inline complex<T> exp(const complex<T>& x) { return polar(exp(x.real()), x.imag()); } I needed this: template<typename T> inline complex<T> exp(const complex<T>& x) { return polar<T>(exp(x.real()), x.imag()); } I don't know who's right here, your code or the compiler. 6) The tanh() function and some other functions compute the exponent twice. By that I mean both exp(x) as well as exp(-x). I believe that division would be faster if exp(-x) were computed with (1 / exp(x)). Or is there an issue with branch cuts that mandates the potentially redundant calculation? 7) I hope to find time to test this code further with a tiny fixed-point class. But I might be too busy for that extra work. REVIEW PROGRESS: I may potentially be qualified for review manager. I do know the math and the code. I am also well-versed with all aspects of software review, assessment and audit situations. BUTTT!!! I'm not as good at C++ as some of you folks and I'm a real beginner at boost. Nonetheless, If nobody better steps forward, I could potentially manage the review of Boost.Complex. Best regards, Chris.

Another comment to add to Chris's: you have code like this:

template<typename T> inline complex<T> polar(const T& r, const T& theta) { return complex<T>(r*cos(theta), r*sin(theta)); }

which will either:
* not compile if the compiler strictly adheres to the standard and doesn't inject cos and sin into the global namespace, or
* silently do the wrong thing when T is a long double - truncate r and theta to double and call ::cos(double) etc.

You can't just add a std:: prefix to all the calls either, as then it won't work with user-defined types whose functions aren't in namespace std. So instead you need:

template<typename T>
inline complex<T> polar(const T& r, const T& theta)
{
   using std::cos;
   using std::sin;
   return complex<T>(r*cos(theta), r*sin(theta));
}

In Boost.Math we found this trap so easy to slip into that:
* we have a macro "BOOST_MATH_STD_USING" which adds all the possible using declarations to the start of a function, and
* we have a concept-checking type std_real_concept which, when passed to a function like the above, won't compile unless it's done right.

HTH, John.
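PS A minimal sketch of why the two using-declarations are enough (user_math and ud_float are hypothetical, purely for illustration): with them in scope, an unqualified call picks std::cos for the built-in types and finds a user-defined type's overload via argument-dependent lookup.

#include <cmath>

namespace user_math
{
   // Hypothetical user-defined type living in its own namespace.
   struct ud_float { double v; };

   inline ud_float cos(const ud_float& x)
   {
      ud_float r = { std::cos(x.v) };
      return r;
   }
}

template<typename T>
T cosine_of(const T& x)
{
   using std::cos;   // makes std::cos(float/double/long double) a candidate
   return cos(x);    // unqualified call: ADL also finds user_math::cos for ud_float
}

// cosine_of(0.5L) calls std::cos(long double); passing a user_math::ud_float calls user_math::cos.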

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Christopher Kormanyos Sent: Sunday, April 29, 2012 10:47 PM To: boost@lists.boost.org Subject: [boost] [complex] Feedback and Potential Review Manager
Hi Matthieu,
Boost.Complex appears to have two main goals: A) Provide acceleration for pure imaginary numbers. B) Allow complex to be used with user-defined types.
For goal B), I have briefly tested parts of the Boost.Complex code with the cpp_dec_float multiprecision data type proposed for Boost.Multiprecision by John Maddock.
My tests include: * Test some elementary transcendental functions. * Test the overall package via computation of orthogonal polynomials. * My test file (complex.cpp) is included in this e-mail. * Visual Studio 2010, SP1. * Proposed Boost.Multiprecision and boost trunk.
Remarks: 1) Various functions use macros like M_PI_2. These macros do not provide enough precision for a multiprecision type. And they will be inefficient for some fixed-point types. Perhaps you need to "bite the bullet" and use boost::math::constants. (Is M_PI_2 even part of C++, or is it only lurking around in <math.h> of GCC?)

Which is a reason why Boost.Math constants were produced.
2) Ditto for the LN_10 and LN_2 constants. Does boost::math::constants support ln_ten()? I do know that it has ln_two().
The obvious constant LN_10 = ln(10) = 2.30258509 seems to have slipped through the Boost.Math constants net. But we do have the equal valued one_div_log10_e = "2.30258509299404568401799145468436420760110148862877297603332790096757260967735248023599720508959829834196778404e+00". But we should probably add ln(10) as people are more likely to be looking for that?

Paul

PS Might Matthieu be qualified as a review manager for Boost.Multiprecision? Boost badly needs this reviewed soon (and preferably accepted!)

--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

2) Ditto for the LN_10 and LN_2 constants. Does boost::math::constants support ln_ten()? I do know that it has ln_two().
The obvious constant LN_10 = ln(10) = 2.30258509 seems to have slipped through the Boost.Math constants net.
But we should probably add ln(10) as people are more likely to be looking for that? Paul
Yes, even though I am somewhat familiar with the constants, I did not find what I was looking for at a cursory glance. We should probably add ln_ten() to the collection. Thanks, Paul. Best regards, Chris.

The obvious constant LN_10 = ln(10) = 2.30258509 seems to have slipped through the Boost.Math constants net.
But we do have the equal valued one_div_log10_e = "2.30258509299404568401799145468436420760110148862877297603332790096757260967735248023599720508959829834196778404e+00".
But we should probably add ln(10) as people are more likely to be looking for that?
Definitely, do you want to do the honors and add that one? John.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock Sent: Monday, April 30, 2012 5:10 PM To: boost@lists.boost.org Subject: Re: [boost] [complex] Feedback and Potential Review Manager
The obvious constant LN_10 = ln(10) = 2.30258509 seems to have slipped through the Boost.Math constants net.
But we do have the equal valued one_div_log10_e = "2.30258509299404568401799145468436420760110148862877297603332790096757260967735248023599720508959829834196778404e+00".
But we should probably add ln(10) as people are more likely to be looking for that?
Definitely, do you want to do the honors and add that one?
Ok - I'll do it. And I'll provide Matthieu Schaller with an updated version of constants.hpp - and any assistance needed in using Boost.Math constants. Paul

3) I disagree with
explicit complex(const T& re = T(0.), const T& im = T(0.)) I believe this constructor should be non-explicit. See ISO/IEC 14882:2011, Section 26.4.2.
With which I, myself, disagree. It's funny, because even my suggested correction is wrong---again! I believe the form of the constructor is actually:

complex(const T& re = T(), const T& im = T());

The default parameters are not initialized to zero, but rather with the default constructor of the template type T. There is a subtle difference, and one which masks user error in a "too-forgiving" fashion. It's a real pitfall because it makes the wrong code seemingly work right and then break when complex is fixed. A long time ago, the man known as STL brought this to my attention. But I re-made the same mistake again, and again... In a potential review, we should go through the constructors meticulously and make sure they are as they should be. Best regards, Chris.

Date: Mon, 30 Apr 2012 17:07:49 +0100 From: Chris
3) I disagree with
explicit complex(const T& re = T(0.), const T& im = T(0.)) I believe this constructor should be non-explicit. See ISO/IEC 14882:2011, Section 26.4.2.
With which I, myself, disagree. It's funny, because even my suggested correction is wrong---again!
Complex numbers are a (strict) super-set of reals, so a real should be transparently convertible to complex. Explicit constructors are for constructor input (and, since C++11, input tuples) that are configuration parameters instead of equivalent values.
I believe the form of the constructor is actually: complex(const T& re = T(), const T& im = T());
The default parameters are not initialized to zero, but rather with the default constructor of the template type T. There is a subtle difference, and one which masks user-error in a "too-forgiving" fashion. It's a real pitfall because it makes the wrong code seemingly work right and then break when complex is fixed. A long time ago, the man known as STL brought this to my attention.
I think we need someone to get a rule-book here, because I think you're wrong. The "T()" syntax should provide value-initialization, which will be zero for the built-in numeric types. Asking any mathematical type that is default-constructible to have said default state be equivalent to zero (or empty set, or some other analog) should be a requirement. (And for such math types, they should have a Boolean conversion where said zero/empty state is the only regular one that maps to False. Math types that do not map to (a subset of) real numbers should have said conversion be explicit.)
But I re-made the same mistake again, and again...
Just researching this, I found out some (all?) versions of MSVC don't do this right. They ignore value-semantics and always apply default-semantics here, leaving built-in numeric types and POD types as random bit buckets. If you're using MSVC, then that may be your problem, which is a compiler bug.
In a potential review, we should go through the constructors meticulously and make sure they are as they should be.
Daryle W.

<snip>
I believe the form of the constructor is actually:
complex(const T& re = T(), const T& im = T());
I think we need someone to get a rule-book here, because I think you're wrong.
Well, it won't be the first time or the last that I'm wrong.
The "T()" syntax should provide value-initialization, which will be zero for the built-in numeric types. Asking any mathematical type that isdefault-constructible to have said default state be equivalent to zero (or empty set, or some other analog) should be a requirement. (And for such math types, they should have a Boolean conversion where said zero/empty state is the only regular one that maps to False. Math types that do not map to (a subset of) real numbers should have said conversion be explicit.) <snip>
Daryle W.
Daryle, this issue has puzzled me for some time now. I guess there are two choices here for a potential complex<T> in boost:

1. complex(const T& re = T(), const T& im = T());
2. complex(const T& re = T(0), const T& im = T(0));

Let's keep in mind that we are talking about a template parameter that is *not* required to be one of the built-in types float, double or long double. It's hard to interpret the right thing to do here. ISO/IEC 14882:2011 specifies a generalized template for complex<T> with the first form of the constructor (the one with T() default parameters). But ISO/IEC 14882:2011 goes on to explicitly specify the forms of each template specialization for float, double and long double. In addition, ISO/IEC 14882:2011 clearly states that complex<T> for any type other than float, double or long double is not specified.

According to ISO/IEC 14882:2011, the constructor for the specialization of complex<float>, for example, is:

complex(float re = 0.0F, float im = 0.0F);

The input parameters are non-constant and non-references. Furthermore, the default parameters are specified to be 0.0F, not float().

So should a potential Boost.Complex use the default parameter T() or rather T(0)? It may be more "in the spirit" of ISO/IEC 14882:2011 to use T(). On the other hand, writers and users of a specialized type might be confused when all of a sudden complex<user_type> might potentially be creatable with non-zero junk. In the end, there is no right or wrong. It's just up to boost to set a policy here and document it. In my opinion, this should be sorted out in a potential review.

Best regards, Chris.
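PS To make the difference concrete, here is a minimal sketch (user_decimal, complex_choice_1 and complex_choice_2 are made-up toy types, not the proposed Boost.Complex): with choice 1 a default-constructed complex simply inherits whatever T() produces, with choice 2 it is forced to zero but requires T to be constructible from an integer.

#include <iostream>

// Hypothetical user-defined type whose default constructor does NOT produce zero.
struct user_decimal
{
   double value;
   user_decimal() : value(42.0) { }               // deliberately non-zero default
   explicit user_decimal(int n) : value(n) { }
};

template<typename T>
struct complex_choice_1
{
   complex_choice_1(const T& r = T(), const T& i = T()) : re(r), im(i) { }
   T re, im;
};

template<typename T>
struct complex_choice_2
{
   complex_choice_2(const T& r = T(0), const T& i = T(0)) : re(r), im(i) { }
   T re, im;
};

int main()
{
   complex_choice_1<user_decimal> a;   // a.re.value == 42.0 : whatever T() gives
   complex_choice_2<user_decimal> b;   // b.re.value ==  0.0 : forced to zero
   std::cout << a.re.value << " " << b.re.value << std::endl;   // prints: 42 0
}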

Hi Chris,

First of all, great work with this, Matthieu. I believe the community desperately requires this facility.

Thanks for your support.

Boost.Complex appears to have two main goals: A) Provide acceleration for pure imaginary numbers. B) Allow complex to be used with user-defined types. For goal B), I have briefly tested parts of the Boost.Complex code with the cpp_dec_float multiprecision data type proposed for Boost.Multiprecision by John Maddock. My tests include: * Test some elementary transcendental functions. * Test the overall package via computation of orthogonal polynomials. * My test file (complex.cpp) is included in this e-mail. * Visual Studio 2010, SP1. * Proposed Boost.Multiprecision and boost trunk.

Interesting tests. I am glad you could run the code without obvious problems.

1) Various functions use macros like M_PI_2. These macros do not provide enough precision for a multiprecision type. And they will be inefficient for some fixed-point types. Perhaps you need to "bite the bullet" and use boost::math::constants. (Is M_PI_2 even part of C++, or is it only lurking around in <math.h> of GCC?) 2) Ditto for the LN_10 and LN_2 constants. Does boost::math::constants support ln_ten()? I do know that it has ln_two().

You are right, M_PI_2 is not standard although present in gcc, icc and Visual C++. I will move the constants to boost::math::constants. LN_10 exists under the name one_div_log10_e as Paul Bristow mentioned.

3) I disagree with explicit complex(const T& re = T(0.), const T& im = T(0.)). I believe this constructor should be non-explicit. See ISO/IEC 14882:2011, Section 26.4.2.

Agreed, I should stick to the standard. However, I will keep the imaginary constructor explicit. I do not want spurious Real -> Imaginary conversions.

4) I would really like to see global operators for seamless interaction of complex<T> with integer PODs. The proposed code of boost::complex forces explicit construction from integer "all the way" to boost::complex<T> just to, for example, multiply with 2. This is extremely costly as most multiprecision or fixed-point types have specially optimized routines for multiplication with integer PODs.

Great idea. I have done it for pow() but it is true that other operators may benefit from this. This optimization will only be available to POD types but T==double is probably the primary concern.

5) In the exp() function, polar() can not be resolved without an explicit template parameter. Instead of this:
template<typename T> inline complex<T> exp(const complex<T>& x) { return polar(exp(x.real()), x.imag()); }
I needed this:
template<typename T> inline complex<T> exp(const complex<T>& x) { return polar<T>(exp(x.real()), x.imag()); }
I don't know who's right here, your code or the compiler.
I didn't get any error message but I will explicitly add the type for compatibility.
6) The tanh() function and some other functions compute the exponent twice. By that I mean both exp(x) as well as exp(-x). I believe that division would be faster if exp(-x) were computed with (1 / exp(x)). Or is there an issue with branch cuts that mandates the potentially redundant calculation?
I will have a look. Division is faster than exponentiation but the compiler may do smart things behind our back. I will try both versions. Thanks for this suggestion.

7) I hope to find time to test this code further with a tiny fixed-point class. But I might be too busy for that extra work.

This would be interesting to test. For what it is worth, I have tested the class with boost::rational but this does not allow testing the more complicated functions.
Paul Bristow: PS Might Matthieu be qualified as a review manager for Boost.Multiprecision? Boost badly needs this reviewed soon (and preferably accepted!)

I am afraid I am not qualified for software assessment. Furthermore, I have never taken part in a boost review.
Regards, Matthieu -- Matthieu Schaller

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Matthieu Schaller Sent: Tuesday, May 01, 2012 10:49 AM To: boost@lists.boost.org Subject: Re: [boost] [complex] Feedback and Potential Review Manager
1) Various functions use macros like M_PI_2. These macros do not provide enough precision for a multiprecision type. And they will be inefficient for some fixed-point types. Perhaps you need to "bite the bullet" and use boost::math::constants. (Is M_PI_2 even part of C++, or is it only lurking around in <math.h> of GCC?) 2) Ditto for the LN_10 and LN_2 constants. Does boost::math::constants support ln_ten()? I do know that it has ln_two().

You are right, M_PI_2 is not standard although present in gcc, icc and Visual C++. I will move the constants to boost::math::constants. LN_10 exists under the name one_div_log10_e as Paul Bristow mentioned.
Now added ln_10 as a constant.

Command: Commit
Modified: I:\boost-trunk\boost\math\constants\calculate_constants.hpp
Modified: I:\boost-trunk\boost\math\constants\constants.hpp
Sending content: I:\boost-trunk\boost\math\constants\calculate_constants.hpp
Sending content: I:\boost-trunk\boost\math\constants\constants.hpp
Completed: At revision: 78288

You can get this from boost-trunk by SVN, but please tell me if you would prefer constants.hpp direct by private mail. Tell me also if there are other constants that you need.
Paul Bristow: PS Might Matthieu be qualified as a review manager for Boost.Multiprecision? Boost badly needs this reviewed soon (and preferably accepted!) I am afraid I am not qualified for software assessment. Furthermore, I have never taken part to a boost review.
:-( But I hope you will participate in the review of Boost.Math nonetheless. Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

3) I disagree with
explicit complex(const T& re = T(0.), const T& im = T(0.)) I believe this constructor should be non-explicit. See ISO/IEC 14882:2011, Section 26.4.2.
Agreed, I should stick to the standard. However, I will keep the imaginary constructor explicit. I do not want spurious Real -> Imaginary conversions.
Thank you for your detailed response, Matthieu. Did you also notice in my other post that my suggested correction was, in fact, wrong? I am terribly sorry about that. You actually need this: explicit complex(const T& re = T(), const T& im = T());
4) I would really like to see global operators for seamless interaction of complex<T> with integer PODs. The proposed code of boost::complex forces explicit construction from integer "all the way" to boost::complex<T> just to, for example, multiply with 2. This is extremely costly as most multiprecision or fixed-point types have specially optimized routines for multiplication with integer PODs.
Great idea. I have done it for pow() but it is true that other operators may benefit from this. This optimization will only be available to POD types but T==double is probably the primary concern.
You know, it's counter-intuitive, but perhaps the global binary arithmetic operators (complex * integral) and (complex / integral) will be most beneficial. The heavyweight *users* of a potential boost::math::complex may include multiple-precision types and fixed-point types. Both of these data types typically have optimized multiplication and division routines that are linear O(N) in the number of words N composing the type, instead of O(N^2) or O(N * log(N)). It's a crazy world with those specialized types.

I briefly glanced at your pow-N routines. Perhaps I'm wrong, but I believe you are not doing binary splitting of the exponent here. Perhaps you might want to optimize this. Or maybe I'm misunderstanding your code. Please get back to me if this is unclear.

@ Experienced Boosters: I fear lots of code here! Can Matthieu write global ops for binary arithmetic with PODs in a clever way combining boost::enable_if with is_integral, etc.? Or does the developer really need to diligently write all those global ops (with PODs on both RHS and LHS) for unequivocal resolution with char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, float, double, long double, and maybe even const char*?
6) The tanh() function and some other functions compute the exponent twice. By that I mean both exp(x) as well as exp(-x). I believe that division would be faster if exp(-x) were computed with (1 / exp(x)). Or is there an issue with branch cuts that mandates the potentially redundant calculation?
I will have a look. Division is faster than exponentiation but the compiler may do smart things behind our back. I will try both versions. Thanks for this suggestion
You are right for compilers only dealing with built-ins. Here again, considering my expectation that the big users of boost::math::complex might be multiple-precision and fixed-point types, division will almost always be faster. I actually insist on this, unless there is an issue with branch cuts.
7) I hope to find time to test this code further with a tiny fixed-point class. But I might be too busy for that extra work.
This would be interesting to test. For what it is worth, I have tested the class with boost::rational but this does not allow to test the more complicated functions.
OK. I will try very hard to make the time for it next weekend or sooner. Keep up the good work, Matthieu! Best regards, Chris.

Did you also notice in my other post that my suggested correction was, in fact, wrong? I am terribly sorry about that. You actually need this: explicit complex(const T& re = T(), const T& im = T());
Geez! How many times can I get it wrong? I'm making a fool of myself here. It's non-explicit and the default params are not zero, but rather the default ctor of the template type. complex(const T& re = T(), const T& im = T()); It's time for me to take a break... Best regards, Chris.

Christopher Kormanyos wrote:
Did you also notice in my other post that my suggested correction was, in fact, wrong? I am terribly sorry about that. You actually need this: explicit complex(const T& re = T(), const T& im = T());
Geez! How many times can I get it wrong? I'm making a fool of myself here.
It's non-explicit and the default params are not zero, but rather the default ctor of the template type.
complex(const T& re = T(), const T& im = T());
It's time for me to take a break...
Best regards, Chris.
Do I understand correctly that the default constructor does not set real and imag parts to zero? Hurray!! I consider it a serious defect in std::complex that its default constructor performs non-trivial work. Often, large arrays of complex need to be constructed in high-performance computing applications.

<snip>
but rather the default ctor of the template type.
complex(const T& re = T(), const T& im = T());
Do I understand correctly that the default constructor does not set real and imag parts to zero? <snip>
Well, it appears to be more subtle than we might want it to be. ISO/IEC 14882:2011 specifies the T() form for the generalized complex template. The language specification goes on, however, to specify that only float, double and long double are valid template parameters for complex. And the subsequent template specializations for the built-in floating-point numbers have default values of 0.0F, 0.0 and 0.0L. For example, as in my other post, the specified constructor for float is:

complex(float re = 0.0F, float im = 0.0F);

I also believe that the value of, say, the default constructor of float is zero. In particular:

void do_something()
{
   // f1 is initialized to zero.
   float f1 = float();
   ...
}

So ISO/IEC 14882:2011 ensures that all valid uses of complex are default initialized to zero. Without judgement, this is the spec. What a potential Boost.Complex should do for user-defined types that may potentially default construct to non-zero is an open issue. Best regards, Chris.

but rather the default ctor of the template type. complex(const T& re = T(), const T& im = T());
Do I understand correctly that the default constructor does not set real and imag parts to zero?

Well, it appears to be more subtle than we might want it to be. ISO/IEC 14882:2011 specifies the T() form for the generalized complex template. The language specification goes on, however, to specify that only float, double and long double are valid template parameters for complex. And the subsequent template specializations for the built-in floating-point numbers have default values of 0.0F, 0.0 and 0.0L.
For example, as in my other post, the specified constructor for float is:
complex(float re = 0.0F, float im = 0.0F);
I also believe that the value of, say, the default constructor of float is zero. In particular
void do_something()
{
   // f1 is initialized to zero.
   float f1 = float();
   ...
}
So ISO/IEC 14882:2011 ensures that all valid uses of complex are default initialized to zero. Without judgement, this is the spec.
What a potential Boost.Complex should do for user-defined types that may potentially default construct to non-zero is an open issue.
In my opinion, any sensible number library (aiming at extending real/integer numbers) should provide a default constructor that initializes the value of the number to 0. It is the behavior that the end-user will most probably expect. There are three options:

(a) Stick to T() and document that the type T should have a default constructor building a zero-valued T.
(b) Change to T(0.) and not respect the standard. This also implies that T must have a constructor taking a double as an argument.
(c) Do not initialize the number when the default constructor is called. This would allow the allocation of an array of complex<T> without having to actually copy numbers. A zero-valued complex number could still be built using one of the other constructors.

I would prefer (a) in order to stick to the standard, or (c) to allow optimized allocations. For what it's worth, boost::rational does not rely on the default constructor of its template type and explicitly calls the constructor of the numerator with the value 0 in its default constructor. By the way, the constructors of boost::rational do not use constant references for their arguments. There might be some useless costly copies when the type used is not a POD.

Regards, Matthieu -- Matthieu Schaller
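PS A minimal sketch of option (c), purely illustrative and not my actual code (complex_nozero is a made-up name): the default constructor leaves the members default-initialized, so allocating an array does no per-element work, while zero is still available through the two-argument constructor.

template<typename T>
class complex_nozero
{
public:
   complex_nozero() { }   // members are default-initialized: no work for built-in T
   complex_nozero(const T& re, const T& im) : re_(re), im_(im) { }

   const T& real() const { return re_; }
   const T& imag() const { return im_; }

private:
   T re_;
   T im_;
};

// Usage sketch:
//   complex_nozero<double>* buf = new complex_nozero<double>[1000000];  // no initialization pass
//   complex_nozero<double> z(0.0, 0.0);                                 // explicit zero when wanted
//   delete [] buf;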

AMDG On 05/07/2012 01:35 PM, Christopher Kormanyos wrote:
I also believe that the value of, say, the default constructor of float is zero. In particular
void do_something()
{
   // f1 is initialized to zero.
   float f1 = float();
   ...
}
This is /not/ the default constructor. f1 is copy constructed from a /value-initialized/ temporary. For scalar types, value initialization is the same as zero initialization.
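For example (pod_pair is just an illustrative aggregate):

struct pod_pair { int a; int b; };   // POD with no user-provided constructor

void demo()
{
   float f1;                    // default-initialized: indeterminate value
   float f2 = float();          // copy-initialized from a value-initialized temporary: 0.0F
   pod_pair p1;                 // default-initialized: p1.a and p1.b are indeterminate
   pod_pair p2 = pod_pair();    // value-initialized: p2.a == 0 and p2.b == 0
   (void)f1; (void)f2; (void)p1; (void)p2;
}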
So ISO/IEC 14882:2011 ensures that all valid uses of complex are default initialized to zero. Without judgement, this is the spec.
In Christ, Steven Watanabe

@ Experienced Boosters: I fear lots of code here! Can Matthieu write global ops for binary arithmetic with PODs in a clever way combining boost::enable_if with is_integral, etc.? Or does the developer really need to diligently write all those global ops (with PODs on both RHS and LHS) for unequivocal resolution with char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, float, double, long double, and maybe even const char*.
For sure, off the top of my head and untested with a real compiler, but how about:

template <class Float, class Arithmetic>
typename enable_if<is_arithmetic<Arithmetic>, complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> >::type
operator *(const complex<Float>& a, const Arithmetic& val)
{
   typedef complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> result_type;
   return result_type(a.real() * val, a.imag() * val);
}

So basically this is only selected as a possible overload if the second argument is a builtin arithmetic type; then the type of the result is calculated using a helper template from Boost.Math which effectively promotes integers to double and then computes typeof(Float * promoted(Arithmetic)). Basically promote_args implements the logic in 26.8 para 11 of C++11. So that:

complex<float> * double -> complex<double>
complex<float> * int -> complex<double> (because integers get promoted to double)
complex<double> * long long -> complex<double>

etc etc.

HTH, John.

PS of course we could argue that ints longer than 53 bits should be promoted to long double to try to avoid loss of digits.... but that's a whole other ball game....

@ Experienced Boosters: I fear lots of code here! Can Matthieu write global ops for binary arithmetic with PODs in a clever way combining boost::enable_if with is_integral, etc.? Or does the developer really need to diligently write all those global ops (with PODs on both RHS and LHS) for unequivocal resolution with char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, float, double, long double, and maybe even const char*.
For sure, off the top of my head and untested with a real compiler, but how about:
template <class Float, class Arithmetic>
typename enable_if<is_arithmetic<Arithmetic>, complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> >::type
operator *(const complex<Float>& a, const Arithmetic& val)
{
   typedef complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> result_type;
   return result_type(a.real() * val, a.imag() * val);
}
So basically this is only selected as a possible overload if the second argument is a builtin arithmetic type, then the type of the result is calculated using a helper template from Boost.Math which effectively promotes integers to double and then computes typeof(Float*promoted(Arithmetic)). Basically promote_args implements the logic in 26.8 para 11 of C++11.
So that:
complex<float> * double -> complex<double>
complex<float> * int -> complex<double> (because integers get promoted to double)
complex<double> * long long -> complex<double>
etc etc.
HTH, John.
PS of course we could argue that ints longer than 53 bits should be promoted to long double to try to avoid loss of digits.... but that's a whole other ball game....
But then aren't you removing the optimization complex<double> * int that Chris originally proposed, as your int is promoted to a double? Or are you talking about non-POD integer-like types? Bests, Matthieu -- Matthieu Schaller

But then aren't you removing the optimization complex<double> * int that Chris originally proposed ? As your int is promoted to a double. Or are you talking about non-POD integer-like types ?
No, the promotion is only in calculating the *type of the result*, if you look at the actual arithmetic in the example, then the real and imaginary components are directly multiplied by the arithmetic type. John.

But then aren't you removing the optimization complex<double> * int that Chris originally proposed ? As your int is promoted to a double. Or are you talking about non-POD integer-like types ?
No, the promotion is only in calculating the *type of the result*, if you look at the actual arithmetic in the example, then the real and imaginary components are directly multiplied by the arithmetic type. John.
Thanks for the suggestion, John.

Matthieu, I checked John's suggestion with both (complex * unsigned int) as well as (signed int * complex). The desired optimization was achieved and numerical integrity was retained with the multiprecision type cpp_dec_float.

Matthieu, your code is already looking good! I don't know if you would like to go with this suggestion and I can not pressure you to act on it. I do believe, however, that the suggestion to optimize binary ops with POD will arise in a potential review. In my opinion, this optimization will improve your code. So you might want to look into it. But the decision is yours. I can assist you with the testing of our multiprecision type if this would help your progress.

So if you want to look into this, you may want to start with:
1. Binary add, sub, mul, div (complex op POD) and (POD op complex) = 8 template functions
2. Binary equality/inequality (complex op POD) and (POD op complex) = 4 template functions
3. Binary add, sub, mul, div (imaginary op POD) and (POD op imaginary) = 8 template functions

The sample below shows both (complex * POD) and (POD * complex). You need to be very careful with stuff like (POD / complex) because the math is a bit more tricky. But you know all that tricky stuff with the norm, etc.

Best regards, Chris.

/// Returns the value of x times built-in arithmetic type val
template <typename Float, typename Arithmetic>
inline typename enable_if<is_arithmetic<Arithmetic>, complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> >::type
operator*(const complex<Float>& x, const Arithmetic& val)
{
   typedef complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> result_type;
   return result_type(x.real() * val, x.imag() * val);
}

/// Returns the value of built-in arithmetic type val times x
template <typename Arithmetic, typename Float>
inline typename enable_if<is_arithmetic<Arithmetic>, complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> >::type
operator*(const Arithmetic& val, const complex<Float>& x)
{
   typedef complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> result_type;
   return result_type(x.real() * val, x.imag() * val);
}
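And, just to sketch the tricky (POD / complex) case I mentioned (untested and purely illustrative, following the same enable_if pattern; note that this naive formula can overflow or underflow for extreme component values, where a Smith-style scaled division is more robust):

/// Returns built-in arithmetic type val divided by x (illustrative sketch only)
template <typename Arithmetic, typename Float>
inline typename enable_if<is_arithmetic<Arithmetic>, complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> >::type
operator/(const Arithmetic& val, const complex<Float>& x)
{
   typedef complex<typename boost::math::tools::promote_args<Float, Arithmetic>::type> result_type;

   // val / x = val * conj(x) / norm(x), where norm(x) = real(x)^2 + imag(x)^2.
   const Float the_norm = (x.real() * x.real()) + (x.imag() * x.imag());

   return result_type(  (x.real() * val) / the_norm,
                      -((x.imag() * val) / the_norm));
}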

Matthieu, I checked John's suggestion with both (complex * unsigned int) as well as (signed int * complex) The desired optimization was achieved and numerical integrity was retained with the multiprecision type cpp_dec_float.
Thanks for trying this out.
Matthieu, your code is already looking good! I don't know if you would like to go with this suggestion and I can not pressure you to act on it. I do believe, however, that the suggestion to optimize binary ops with POD will arise in a potential review. In my opinion, this optimization will improve your code. So you might want to look into it. But the decision is yours. I can assist you with the testing of our multiprecision type if this would help your progress.
It will be done. It is a non-negligible optimization. I will also modify the code to take into account your other suggestions:

a- Use the right signature for the complex constructor.
b- Use the boost::math::constants instead of the gcc extensions.
c- Use the template parameters in the function calls.
d- Implement binary operators with the integer optimization.
e- Improve some functions that do too many computations (e.g. tanh computing exp() twice).

(a) has already been done and pushed to my repository. (b) and (c) will be done soon, although I will probably stick to one_div_log10_e instead of ln_10 as long as this modification has not been pushed into the boost release. This will allow for a larger test base. (d) and (e) require more work but will probably be done over the week-end.

Cheers, Matthieu -- Matthieu Schaller

(a) has already been done and pushed in my repository. (b) and (c) will be done soon, although I will probably stick to one_div_log10_e instead of ln_10 as long as this modification has not been pushed in the boost release.
Just so you know, neither are in the last Boost release - very few constants were. John.

It will be done. It is a non-neglectable optimization. I will also modify the code to take into account your other suggestions:
a- Use the right signature for the complex constructor.
b- Use the boost::math::constants instead of the gcc extensions.
c- Use the template parameters in the function calls.
d- Implement binary operators with the integer optimization.
e- Improve some functions that do too many computations (e.g. tanh computing exp() twice).
(a) has already been done and pushed in my repository. (b) and (c) will be done soon, although I will probably stick to one_div_log10_e instead of ln_10 as long as this modification has not been pushed in the boost release. This will allow for a larger test base. (d) and (e) require more work but will be done over the week-end probably.
Cheers, Matthieu
Thank you, Matthieu. I believe that this will improve your code. Could you please give us a heads-up when you are finished? Then I will try to check the code with both a multiprecision type and a fixed-point type. I remain at your (and boost's) service to potentially moderate the review, if needed. Best regards, Chris.

Dear all, The modifications you suggested led to a set of questions.
b- Use the boost::math::constants instead of the gcc extensions.
I implemented this but it seems to add limitations on the usable types. They are now required to have an overloaded istream extraction operator and a specialization of std::numeric_limits. Those two limitations are reasonable but I am not certain that we want to impose such conditions on the template types.
c- Use the template parameters in the function calls.
Christopher Kormanyos, could you indicate which compiler you are using? I could not reproduce the bug you mentioned. Instead of:

template<typename T> inline complex<T> exp(const complex<T>& x) { return polar(exp(x.real()), x.imag()); }

I needed this:

template<typename T> inline complex<T> exp(const complex<T>& x) { return polar<T>(exp(x.real()), x.imag()); }

This code seems to work fine for gcc 4.4->4.7 and icc 12.1.
I remain at your (and boost's) service to potentially moderate the review, if needed.
I don't know if it is up to me to decide but I am definitely interested. Regards, Matthieu -- Matthieu Schaller

The modifications you suggested led to a set of questions.
b- Use the boost::math::constants instead of the gcc extensions.
I implemented this but it seems to add limitations on the types usable. They now require to have an overloaded istream extraction operator and a specialization of std::numeric_limits. Those two limitations are reasonable but I am not certain that we want to impose such conditions on the template types.
In my opinion, the requirement on numeric_limits<T> is fair. After all, it's easier on potential users to mandate limits rather than to require complicated implementations of the constants. Perhaps this is an issue for a potential review. Are you sure about that requirement for istream extraction? I don't believe that it is due to boost::math::constants, but rather that it originates from your own istream extraction code (but only if one chooses to use it). I have, for example, successfully used your code with my fixed-point type that does not have istream extraction. I simply extracted the results to std::cout manually. I have included my new test code in another message.
c- Use the template parameters in the function calls.
Christopher Kormanyos, could you indicate which compiler you are using ? I could not reproduce the bug you mentioned. <snip> template<typename T> inline complex<T> exp(const complex<T>& x) { return polar<T>(exp(x.real()), x.imag()); }
I use Microsoft Visual Studio 2010, SP1. Matthieu, I only get the error when using the proposed boost::multiprecision::cpp_dec_float type. I am beginning to understand this, but perhaps not fully. The reason for this seems not to be the type being used, but rather that there also exists boost::math::polar<T>(const T&, const T&). And since your polar() function is also contained within the namespace boost::math, the compiler can not resolve which one to use when I include all those headers. You probably can not reproduce the error because you are not including <boost/math/complex/complex.hpp>. I suppose that's why you do not get the potentially ambiguous function error. The error is this:

1> complex.cpp
1> C:\boost_1_49\boost/math/complex/complex.hpp(1271): error C2782: 'boost::math::complex<T> boost::math::polar(const T &,const T &)' : template parameter 'T' is ambiguous
1> C:\boost_1_49\boost/math/complex/complex.hpp(1035) : see declaration of 'boost::math::polar'
1> could be 'boost::multiprecision::cpp_dec_float_50'
1> or 'boost::multiprecision::detail::mp_exp<tag,Arg1,Arg2,Arg3>'
1> with
1> [
1>   tag=boost::multiprecision::detail::function,
1>   Arg1=boost::multiprecision::detail::exp_funct<boost::multiprecision::backends::cpp_dec_float<50>>,
1>   Arg2=boost::multiprecision::mp_number<boost::multiprecision::backends::cpp_dec_float<50>>,
1>   Arg3=void
1> ]
1> complex.cpp(269) : see reference to function template instantiation 'boost::math::complex<T> boost::math::exp<boost::multiprecision::cpp_dec_float_50>(const boost::math::complex<T> &)' being compiled
1> with
1> [
1>   T=boost::multiprecision::cpp_dec_float_50
1> ]
I remain at your (and boost's) service to potentially moderate the review, if needed.
I don't know if it is up to me to decide but I am definitely interested. Regards, Matthieu
It's not really up to me either. I believe I can do it and we really need to make progress on this project. It would be my first boost review as well. Will someone who knows tell us how to proceed? Thanks for your efforts, Matthieu. I'm looking forward to potential progress with this project. Best regards, Chris.

I implemented this but it seems to add limitations on the types usable. They now require to have an overloaded istream extraction operator and a specialization of std::numeric_limits. Those two limitations are reasonable but I am not certain that we want to impose such conditions on the template types.
The requirements are more subtle than that; the logic goes something like this:

* If there's a numeric_limits specialization:
  * If the precision is less than a float/double/long double, and T is constructible from those types, then construct from float/double/long double.
  * If the precision is < 100 decimal places:
    * If T is constructible from a const char* then construct the constant that way.
    * Use the iostream operator to stream the constant in (we could arguably use a trait to check whether this is supported before using it as well).
  * Precision > 100 digits - calculate the constant and cache the result.
* No numeric_limits - use boost::math::tools::digits<T>() to check runtime precision and pick either construct-from-string or calculate the constant.

There's also a trait class that can be specialized for some type T to override the above logic. So you don't have to have either numeric_limits or an iostream operator, but you have to have *something* that allows us to get the constant in there. HTH, John.

I implemented this but it seems to add limitations on the
types usable. They now require to have an overloaded istream extraction operator and a specialization of std::numeric_limits. Those two limitations are reasonable but I am not certain that we want to impose such conditions on the template types.
The requirements are more subtle than that, the logic goes something like this:
* If there's a numeric_limits specialization:
  * If the precision is less than a float/double/long double, and T is constructible from those types, then construct from float/double/long double.
  * If the precision is < 100 decimal places:
    * If T is constructible from a const char* then construct the constant that way.
    * Use the iostream operator to stream the constant in (we could arguably use a trait to check whether this is supported before using it as well).
  * Precision > 100 digits - calculate the constant and cache the result.
* No numeric_limits - use boost::math::tools::digits<T>() to check runtime precision and pick either construct-from-string or calculate the constant.
There's also a trait class that can be specialized for some type T to override the above logic. So you don't have to have either numeric_limits or an iostream operator, but you have to have *something* that allows us to get the constant in there. HTH, John.
Thank you, John. @Matthieu: But I believe that this refinement of boost::math::constants is only available in the trunk. So if you would like to take advantage of this policy on mathematical constants, you have to test your <boost/math/complex/complex.hpp> with respect to the trunk. Otherwise, you need to work around this for now by providing your own template specializations of pi(), ln2(), ln_10(). This is what I did in my code from yesterday for my fixed-point class. Best regards, Chris.
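PS A minimal sketch of the kind of workaround I mean (local_constants and fixed_q15_16 are hypothetical and purely illustrative; this is not the Boost.Math mechanism): a small local constants shim with a generic version for the built-ins and full specializations for the fixed-point type, which cannot be constructed from a long double literal with full fidelity.

// Hypothetical minimal Q15.16 fixed-point type, for illustration only.
struct fixed_q15_16
{
   long raw;   // value scaled by 2^16

   static fixed_q15_16 from_raw(long r)
   {
      fixed_q15_16 f;
      f.raw = r;
      return f;
   }
};

namespace local_constants
{
   // Generic versions, adequate for the built-in floating-point types.
   template <typename T> inline T pi()     { return T(3.141592653589793238462643383279502884L); }
   template <typename T> inline T ln_two() { return T(0.693147180559945309417232121458176568L); }
   template <typename T> inline T ln_ten() { return T(2.302585092994045684017991454684364208L); }

   // Full specializations for the fixed-point type.
   template <> inline fixed_q15_16 pi<fixed_q15_16>()     { return fixed_q15_16::from_raw(205887); } // round(pi     * 2^16)
   template <> inline fixed_q15_16 ln_two<fixed_q15_16>() { return fixed_q15_16::from_raw(45426);  } // round(ln(2)  * 2^16)
   template <> inline fixed_q15_16 ln_ten<fixed_q15_16>() { return fixed_q15_16::from_raw(150902); } // round(ln(10) * 2^16)
}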

I remain at your (and boost's) service to potentially moderate the review, if needed.
I don't know if it is up to me to decide but I am definitely interested. Regards, Matthieu
I will not be able to manage the review of the potential Boost.Complex. But I will try to participate in the review. Since I can not manage the review, another one of the few, the brave, the numerically oriented boosters may want to step forward... Matthieu has continued to work on several key improvements in the past weeks. @Matthieu: I remain available for private conversations on implementation and design details. Best regards, Chris.

I briefly glanced at your pow-N routines. Perhaps I'm wrong, but I believe you are not doing binary splitting of the exponent here.
My mistake. I apologize. You are doing binary splitting of the exponent. But it looks like you still have a redundant multiplication step that could be removed with a recursive call. In particular, I believe your routine would need 14 multiplications to compute x^255, whereas only 7 would be needed with a recursive call. Basically, one needs to ask if the overhead of a recursive call exceeds that of a multiply. I'll leave this one for the potential reviewers. If you look into our preliminary work on Boost.Multiprecision, you will find the recursive pow-n algorithm in the implementation of pow_imp() in pow.hpp. It's a bit tricky with the eval_... functions, but the recursive binary splitting is clearly visible in the code. It can be viewed here. https://svn.boost.org/svn/boost/sandbox/big_number/boost/multiprecision/deta... Best regards, Chris.
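PS For reference, a generic sketch of the recursive binary splitting I mean (a stand-in written from scratch, not the actual pow_imp() from the sandbox):

// Computes x^n with O(log n) multiplications: one recursive call on n / 2,
// whose result is reused, plus at most one extra multiply for odd n.
template <typename T>
T pow_n(const T& x, unsigned n)
{
   if(n == 0U) { return T(1); }
   if(n == 1U) { return x; }

   const T half = pow_n(x, n / 2U);

   return ((n % 2U) == 0U) ? (half * half)        // even exponent
                           : (half * half * x);   // odd exponent
}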

AMDG On 05/01/2012 02:52 PM, Christopher Kormanyos wrote:
I briefly glanced at your pow-N routines. Perhaps I'm wrong, but I believe you are not doing binary splitting of the exponent here.
My mistake. I apologize.
You are doing binary splitting of the exponent. But it looks like you still have a redundant multiplication step that could be removed with a recursive call. In particular, I believe your routine would need 14 multiplications to compute x^255, whereas only 7 would be needed with a recursive call. Basically, one needs to ask if the overhead of recursive call exceeds that of a multiply. I'll leave this one for the potential reviewers.
I don't see how it's possible to compute x^255 in 7 multiplies. The best I can come up with is 10:

x^2 = x * x
x^4 = x^2 * x^2
x^8 = x^4 * x^4
x^16 = x^8 * x^8
x^17 = x^16 * x
x^34 = x^17 * x^17
x^51 = x^34 * x^17
x^102 = x^51 * x^51
x^204 = x^102 * x^102
x^255 = x^204 * x^51

In Christ, Steven Watanabe

On 02/05/12 08:12, Steven Watanabe wrote:
AMDG
On 05/01/2012 02:52 PM, Christopher Kormanyos wrote:
I briefly glanced at your pow-N routines. Perhaps I'm wrong, but I believe you are not doing binary splitting of the exponent here. My mistake. I apologize.
You are doing binary splitting of the exponent. But it looks like you still have a redundant multiplication step that could be removed with a recursive call. In particular, I believe your routine would need 14 multiplications to compute x^255, whereas only 7 would be needed with a recursive call. Basically, one needs to ask if the overhead of recursive call exceeds that of a multiply. I'll leave this one for the potential reviewers.
I don't see how it's possible to compute x^255 in 7 multiplies. The best I can come up with is 10:
x^2 = x * x
x^4 = x^2 * x^2
x^8 = x^4 * x^4
x^16 = x^8 * x^8
x^17 = x^16 * x
x^34 = x^17 * x^17
x^51 = x^34 * x^17
x^102 = x^51 * x^51
x^204 = x^102 * x^102
x^255 = x^204 * x^51
You are right. When we decompose 255 as a product of prime numbers we get x^255 = x^(3*5*17) = ((x^17)^5)^3. In order to compute x^p where p is prime we can use its power-of-2 decomposition:

3 = 1 + 2^1
5 = 1 + 2^2
17 = 1 + 2^4

x^16 = x^(2^4) needs 4 multiplications.
x^17 = x * x^16, so x^17 needs 5 = 1 + 4 multiplications.
(x^17)^5 = x^85 = (x^17) * (x^17)^4, so x^85 needs 8 = 5 + 1 + 2 multiplications.
((x^17)^5)^3 = x^255 = x^85 * x^85 * x^85, so x^255 needs 10 = 8 + 2 multiplications.

x^2 = x * x
x^4 = x^2 * x^2
x^8 = x^4 * x^4
x^16 = x^8 * x^8
x^17 = x^16 * x
x^34 = x^17 * x^17
x^68 = x^34 * x^34
x^85 = x^68 * x^17
x^255 = x^85 * x^85 * x^85

Of course, in order to achieve this we need to know the prime factorization at compile time.

Best, Vicente

You are doing binary splitting of the exponent.
But it looks like you still have a redundant multiplication step that could be removed with a recursive call. In particular, I believe your routine would need 14 multiplications to compute x^255, whereas only 7 would be needed with a recursive call.
I don't see how it's possible to compute x^255 in 7 multiplies. The best I can come up with is 10:

x^2 = x * x
x^4 = x^2 * x^2
x^8 = x^4 * x^4
x^16 = x^8 * x^8
x^17 = x^16 * x
x^34 = x^17 * x^17
x^51 = x^34 * x^17
x^102 = x^51 * x^51
x^204 = x^102 * x^102
x^255 = x^204 * x^51

In Christ, Steven Watanabe
Thanks. It looks like I was wrong again. I was mostly interested in the author's potential redundant multiplication---which we did successfully identify. I'll try to do better in the future. Best regards, Chris.

7) I hope to find time to test this code further with
a tiny fixed-point class.
This would be interesting to test.
OK. I will try very hard to make the time for it next weekend or sooner.
Matthieu, I have successfully tested your <complex.hpp> code with a fixed-point template class. I used my own Q15.16 fixed-point representation. Everything works fine, assuming we modify your header file according to the suggestions we are currently discussing.

It was a very fun experience. I actually have complex calculations with 50 decimal digits of precision and 4 decimal digits of precision in the same file. Wow! There is absolutely fascinating potential for generic numeric programming here! Your complex<T> code also helped me find weaknesses in my fixed-point code.

My test code is included in this e-mail. I encourage you to try to compile and run this test code. If you have any problems, don't hesitate to contact me. You need to simply call do_complex() and do_complex_fixed() from main(). You need to go to the boost sandbox and get John's <boost/multiprecision/...> stuff from the "big_number" directory. My tests run OK with boost 1.49 patched with our proposed Boost.Multiprecision. My fixed_point.h, two util*.h header files and a modified version of your <complex.hpp> are included. My fixed-point stuff is not boost stuff.

Best regards, Chris.
participants (8)

- Christopher Kormanyos
- Daryle Walker
- John Maddock
- Matthieu Schaller
- Neal Becker
- Paul A. Bristow
- Steven Watanabe
- Vicente J. Botet Escriba