Review request: extended complex number library

Dear all,

Following the comments from V. Escriba, I formally propose my complex number library for review. The library is an extension of the std::complex class addressing two issues:

- The standard does not guarantee the behaviour of the complex class if instantiated with types other than float/double/long double.
- Some calculations in which pure imaginary numbers (i.e. multiples of sqrt(-1)) appear are unnecessarily slowed down due to the lack of support for these numbers.

The code I submit contains two interleaved classes, boost::complex and boost::imaginary, which can be instantiated with any type T provided T overloads the usual arithmetic operators and some basic (real) mathematical functions, depending on which complex functions will be used. It is thus an extended version of Thorsten Ottosen's n1869 proposal (http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1869.html).

Performance tests show non-negligible speed-ups compared to std::complex for calculations where pure imaginary numbers are involved. A speed-up of 25% has been observed when solving the Schroedinger equation explicitly on a regular mesh, and comparable figures can be observed when computing the Mandelbrot set (the two example snippets provided in the archive). Furthermore, the functions (sin(), exp(), log(), ...) involving boost::imaginary numbers are more precise than their equivalents using a std::complex with the real part set to 0.

The code and (doxygen) documentation are available in the repository http://code.google.com/p/cpp-imaginary-numbers/ A comprehensive zip archive can be found in the "download" section and the code can also be checked out via the SVN repository. The archive contains the class header, two examples, a comprehensive precision test, a brute-force performance test and the documentation.

I'd be happy to answer any questions from your side and to provide more detailed information if required.

Regards,
Matthieu

--
Matthieu Schaller
PhD student - Durham University
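
For illustration, here is a minimal sketch (not the submitted library's code; the type shown is hypothetical) of where the claimed speed-up comes from: multiplying a complex by a pure imaginary needs only two multiplications, versus four multiplications and two additions for a general complex*complex product.

    #include <complex>

    // Hypothetical minimal imaginary type: represents value_ * sqrt(-1).
    template <typename T>
    struct imaginary {
        T value_;
    };

    // (b*i) * (c + d*i) = -b*d + b*c*i : two multiplications, no additions.
    template <typename T>
    std::complex<T> operator*(imaginary<T> lhs, const std::complex<T>& rhs)
    {
        return std::complex<T>(-lhs.value_ * rhs.imag(),
                                lhs.value_ * rhs.real());
    }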

On 03/05/2012 10:41 PM, Matthieu Schaller wrote:
Dear all,
[...] The code I submit contains two interleaved classes boost::complex and boost::imaginary which can be instantiated with any type T provided T overloads the usual arithmetic operators and some basic (real) mathematical functions. [...]
How about having a boost::real<T> (or whatever you want to name it), that references a complex number with an imaginary part set to 0? The difference would be that sqrt(boost::real<double>(-1)) would return a boost::complex<T> rather than a NaN.
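For illustration, a minimal sketch of this suggestion, with hypothetical names throughout (real_t stands in for the proposed boost::real<T>): sqrt() of a negative real widens to a complex instead of producing NaN.

    #include <cmath>
    #include <complex>

    // Hypothetical stand-in for the suggested boost::real<T>.
    template <typename T>
    class real_t {
        T v_;
    public:
        explicit real_t(T v) : v_(v) {}
        T value() const { return v_; }
    };

    // sqrt() widens to a complex instead of returning NaN for negative inputs.
    template <typename T>
    std::complex<T> sqrt(real_t<T> x)
    {
        if (x.value() < T(0))
            return std::complex<T>(T(0), std::sqrt(-x.value())); // purely imaginary root
        return std::complex<T>(std::sqrt(x.value()), T(0));
    }

    // Example: sqrt(real_t<double>(-1.0)) yields (0, 1) rather than NaN.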

Dear all,

Thanks for your comments.

> How about having a boost::real<T> (or whatever you want to name it), that references a complex number with an imaginary part set to 0?

This could in principle be done but I don't think that it is a good thing to do. boost::real<> would be a wrapper around, say, the double type. This would require rewriting all the mathematical functions for this type. Furthermore, people doing numerical sciences will probably not be keen to drop the use of native POD variables for simple calculations. It is true that some functions could be modified to return complex values instead of NaNs, but this would come at a performance cost when performing real-only computations.

> Seems quite interesting. One issue I have though: as a user of gcc/libstdc++, I see that many std::complex operations seem to be optimized in terms of gcc builtins. For example:
>
>     #if _GLIBCXX_USE_C99_COMPLEX
>       inline float
>       __complex_abs(__complex__ float __z) { return __builtin_cabsf(__z); }
>
>       inline double
>       __complex_abs(__complex__ double __z) { return __builtin_cabs(__z); }
>
> So if I switched to boost::complex, I'd lose these optimizations.

I am actually using the std::complex<> versions for all non-trivial operations. This means that built-in functions can be used by the compiler. Now, you found one example (abs()) where this is not done. Measurements showed that this was not necessary for this particular operation but, for consistency, I could do this for all operations.

> I'm not sure implicit conversions are desirable, but what about explicit conversions to/from std::complex? And a make_complex function?

I don't think that implicit conversions to/from std::complex should be used. But you are right, I should provide an explicit conversion constructor and a make_complex function. I could then write

    template<>
    inline boost::complex<float> cos(const boost::complex<float>& x)
    {
        return boost::complex<float>(std::cos(make_std_complex<float>(x)));
    }

Cheers,
Matthieu

--
Matthieu Schaller

On 03/07/2012 11:42 AM, Matthieu Schaller wrote:
Dear all,
Thanks for your comments.
How about having a boost::real<T> (or whatever you want to name it), that references a complex number with an imaginary part set to 0?
This could in principle be done but I don't think that it is a good thing to do. boost::real<> would be a wrapper around, say, the double type. This would require rewriting all the mathematical functions for this type. Furthermore, people doing numerical sciences will probably not be keen to drop the use of native POD variables for simple calculations. It is true that some functions could be modified to return complex values instead of NaNs, but this would come at a performance cost when performing real-only computations.
How does that not apply to boost::imaginary<T>? A complex has a real and an imaginary part. boost::imaginary<T> is a complex with its real part set to 0. The counterpart for the real part should also exist.

Matthieu Schaller wrote:
Dear all,
[... announcement quoted in full above ...]
Seems quite interesting. One issue I have though: as a user of gcc/libstdc++, I see that many std::complex operations seem to be optimized in terms of gcc builtins. For example:

    #if _GLIBCXX_USE_C99_COMPLEX
      inline float
      __complex_abs(__complex__ float __z) { return __builtin_cabsf(__z); }

      inline double
      __complex_abs(__complex__ double __z) { return __builtin_cabs(__z); }

So if I switched to boost::complex, I'd lose these optimizations. Is it useful to have an imaginary library that complements std::complex, rather than replaces it?

Le 06/03/12 13:28, Neal Becker a écrit :
Matthieu Schaller wrote:

[... announcement quoted in full above ...]

Seems quite interesting. One issue I have though: as a user of gcc/libstdc++, I see that many std::complex operations seem to be optimized in terms of gcc builtins. [...] So if I switched to boost::complex, I'd lose these optimizations. Is it useful to have an imaginary library that complements std::complex, rather than replaces it?
Hi,

you are right. Since the goal is to provide a faster library, the Boost library should use the standard library where the standard is more efficient, and we can expect the standard to be faster for the scope it covers. This means that an overload such as

    template<>
    inline float abs(const complex<float>& x)
    { return std::sqrt(x.real() * x.real() + x.imag() * x.imag()); }

should be replaced by

    template<>
    inline float abs(const complex<float>& x)
    { return std::abs(x); }

Best,
Vicente

On 06/03/12 19:49, Vicente J. Botet Escriba wrote:
[...] This means that an overload such as

    template<>
    inline float abs(const complex<float>& x)
    { return std::sqrt(x.real() * x.real() + x.imag() * x.imag()); }

should be replaced by

    template<>
    inline float abs(const complex<float>& x)
    { return std::abs(x); }
BTW, the preceding code would work only if the conversion to std::complex is implicit. I'm not sure implicit conversions are desirable, but what about explicit conversions to/from std::complex? And a make_complex function? This could allow replacing e.g. the code

    template<>
    inline complex<float> cos(const complex<float>& x)
    {
        const std::complex<float> temp(x.real(), x.imag());
        const std::complex<float> ret = cos(temp);
        return complex<float>(ret.real(), ret.imag());
    }

by

    template<>
    inline complex<float> cos(const complex<float>& x)
    {
        return make_complex(std::cos(std::complex<float>(x)));
    }

An alternative that could perform better would be to specialize boost::math::complex for float so that it contains a std::complex<float> member; let me call it "underlying".

    template<>
    inline complex<float> cos(const complex<float>& x)
    {
        return make_complex(std::cos(x.get_underlying()));
    }

Of course, inspecting the generated code will be needed to see if the compiler is able to optimize this better.

Best,
Vicente
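
For illustration, a self-contained sketch of the specialization idea above, with hypothetical names (the boost_sketch namespace, get_underlying() and make_complex() are not part of the submitted library): the float specialization delegates storage to std::complex<float> so that transcendental functions forward straight to the (builtin-backed) std versions.

    #include <complex>

    namespace boost_sketch {

    template <typename T> class complex; // generic template defined elsewhere

    // Specialization storing a std::complex<float> internally.
    template <>
    class complex<float> {
        std::complex<float> underlying_;
    public:
        complex(float re = 0.0f, float im = 0.0f) : underlying_(re, im) {}
        const std::complex<float>& get_underlying() const { return underlying_; }
        float real() const { return underlying_.real(); }
        float imag() const { return underlying_.imag(); }
    };

    inline complex<float> make_complex(const std::complex<float>& z)
    {
        return complex<float>(z.real(), z.imag());
    }

    // Forwards to std::cos, which libstdc++ may map to a builtin.
    inline complex<float> cos(const complex<float>& x)
    {
        return make_complex(std::cos(x.get_underlying()));
    }

    } // namespace boost_sketch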

On 03/06/2012 07:49 PM, Vicente J. Botet Escriba wrote:
you are right. Since the goal is to provide a faster library, the Boost library should use the standard library where the standard is more efficient, and we can expect the standard to be faster for the scope it covers.
What about correctness? Most implementations of standard library functions on complex numbers are not correct from a numerical point of view.

On 7 Mar 2012, at 22:12, Mathias Gaunard wrote:
On 03/06/2012 07:49 PM, Vicente J. Botet Escriba wrote:
you are right. Since the goal is to provide a faster library, the Boost library should use the standard library where the standard is more efficient, and we can expect the standard to be faster for the scope it covers.
What about correctness?
Most implementations of standard library functions on complex numbers are not correct from a numerical point of view.
Really? Can you give some examples? Have you reported these issues to the various compiler designers?

Chris

On 03/07/2012 11:20 PM, Christopher Jefferson wrote:
[...]
Most implementations of standard library functions on complex numbers are not correct from a numerical point of view.
Really? Can you give some examples? Have you reported these issues to the various compiler designers?
Verbatim from the libstdc++ headers:

    // 26.2.5/13
    // XXX: This is a grammar school implementation.
    template<typename _Tp>
    template<typename _Up>
    complex<_Tp>&
    complex<_Tp>::operator*=(const complex<_Up>& __z)
    {
        const _Tp __r = _M_real * __z.real() - _M_imag * __z.imag();
        _M_imag = _M_real * __z.imag() + _M_imag * __z.real();
        _M_real = __r;
        return *this;
    }

This is incorrect for infinite values and causes undue overflow or underflow. See the C99 or C11 standards, annex G. It also comes with a possible implementation. C++ has no equivalent to this annex AFAIK.

On 07/03/12 23:36, Mathias Gaunard wrote:
[...]
This is incorrect for infinite values and causes undue overflow or underflow.
IIUC the standard covers just complex on builtin types, doesn't it?

> See the C99 or C11 standards, annex G. It also comes with a possible implementation.

I don't know what the correct implementation for C++ complex on builtin types would be without using arbitrary precision. I have no access to this standard. Could you please add it here?

Best,
Vicente

On 08/03/2012 19:40, Vicente J. Botet Escriba wrote:
[...] I don't know what the correct implementation for C++ complex on builtin types would be without using arbitrary precision. I have no access to this standard. Could you please add it here?
Look for instance at n1124.pdf, which is a C99 draft. The aim is not to have full precision but to minimize overflows/underflows and to get good behaviour with inf and nan. For example:

1) (0+0i)*(inf+0i) must return nan+0i, not nan+nani.
2) If modulus(a)² overflows, this must not necessarily imply that 1/a, which is mathematically conj(a)/modulus(a)², underflows.

Best,
jt lapresté
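
For reference, the multiplication algorithm given as an example in C99 annex G (section G.5.1 of n1124.pdf), transcribed here to C++ using the C++11 <cmath> classification functions; this is a sketch of the annex's example, not the normative requirement itself. It computes the naive product first and only enters the slower recovery path when both parts come out NaN.

    #include <cmath>
    #include <complex>
    #include <limits>

    std::complex<double> annex_g_mul(std::complex<double> z, std::complex<double> w)
    {
        double a = z.real(), b = z.imag();
        double c = w.real(), d = w.imag();
        double ac = a * c, bd = b * d, ad = a * d, bc = b * c;
        double x = ac - bd, y = ad + bc;

        if (std::isnan(x) && std::isnan(y)) {
            bool recalc = false;
            if (std::isinf(a) || std::isinf(b)) {                // z is infinite
                a = std::copysign(std::isinf(a) ? 1.0 : 0.0, a); // "box" the infinity
                b = std::copysign(std::isinf(b) ? 1.0 : 0.0, b);
                if (std::isnan(c)) c = std::copysign(0.0, c);
                if (std::isnan(d)) d = std::copysign(0.0, d);
                recalc = true;
            }
            if (std::isinf(c) || std::isinf(d)) {                // w is infinite
                c = std::copysign(std::isinf(c) ? 1.0 : 0.0, c);
                d = std::copysign(std::isinf(d) ? 1.0 : 0.0, d);
                if (std::isnan(a)) a = std::copysign(0.0, a);
                if (std::isnan(b)) b = std::copysign(0.0, b);
                recalc = true;
            }
            if (!recalc && (std::isinf(ac) || std::isinf(bd) ||
                            std::isinf(ad) || std::isinf(bc))) {
                // Recover infinities from overflow by changing NaNs to 0.
                if (std::isnan(a)) a = std::copysign(0.0, a);
                if (std::isnan(b)) b = std::copysign(0.0, b);
                if (std::isnan(c)) c = std::copysign(0.0, c);
                if (std::isnan(d)) d = std::copysign(0.0, d);
                recalc = true;
            }
            if (recalc) {
                const double inf = std::numeric_limits<double>::infinity();
                x = inf * (a * c - b * d);
                y = inf * (a * d + b * c);
            }
        }
        return std::complex<double>(x, y);
    }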

On 09/03/12 00:50, jtl wrote:
[...]

Look for instance at n1124.pdf, which is a C99 draft. The aim is not to have full precision but to minimize overflows/underflows and to get good behaviour with inf and nan. [...]
Thanks. I see now what you mean. I agree, correctness is important.

Vicente

[...] The aim is not to have full precision but to minimize overflows/underflows and to get good behaviour with inf and nan. [...] I agree, correctness is important.
The question is thus: do we (you) rather want a library that reproduces the std::complex class (i.e. the gcc and icc ones at least) both in terms of performance and accuracy, or do we want a library that is more accurate when denormalized numbers are involved but that would be slower than the standard one? I am not sure people would easily give up the speed of the standard one just to be able to handle numbers that should never enter everyday calculations anyway. What is your opinion?

--
Matthieu Schaller

Matthieu Schaller wrote:
[...]
The question is thus: do we (you) rather want a library that reproduces the std::complex class (i.e. the gcc and icc ones at least) both in terms of performance and accuracy, or do we want a library that is more accurate when denormalized numbers are involved but that would be slower than the standard one?
I am not sure people would easily give up the speed of the standard one just to be able to handle numbers that should never enter everyday calculations anyway. What is your opinion?
Can we have both? With gcc, -ffast-math will allow faster/less correct code.

Interestingly, there is a discussion of accuracy of the standard gcc libm here: http://permalink.gmane.org/gmane.comp.lib.glibc.alpha/18040

On 09/03/12 17:54, Matthieu Schaller wrote:
[...] The question is thus: do we rather want a library that reproduces the std::complex class both in terms of performance and accuracy, or do we want a library that is more accurate when denormalized numbers are involved but slower than the standard one? [...]
You can always provide two variants of the arithmetic operations: one working with finite numbers (not inf or nan) that is fast, and another that allows inf or nan as operands but is slower. The question is which variant is chosen for the C++ arithmetic operators? I suspect that the default choice must follow the standard.

An alternative is to have a specific type for finite numbers (which ensures that the value is neither nan nor inf). The library could specialize the behaviour of the needed operations for this specific type. The main problem of course is readability:

    z = x + y;

becomes

    z = finite(x) + finite(y);

The user is saying here that she knows the variables x and y contain finite numbers, so that the + implementation can take this information into account. I don't like it too much, but it allows the user to get the best performance when she knows the possible contents.

Just my 2cts,
Vicente

I am not sure people would easily give up the speed of the standard one just to be able to handle numbers that should never enter everyday calculations anyway.
The underflow/overflow problem occurs with finite values.
True. But are you willing to give up speed for this? It is a question to everyone. I don't know what boost-members think. Some of the boost::math functions are implemented in a very conservative way which ensures a correct result in any case.

Looking at the standard (the n3242 draft to be exact), the following elements are, in my opinion, important:

26.4.0.3> If the result of a function is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.

De-normalized numbers are apparently not supported.

26.4.0.4> If z is an lvalue expression of type cv std::complex<T> then:
— the expression reinterpret_cast<cv T(&)[2]>(z) shall be well-formed,
— reinterpret_cast<cv T(&)[2]>(z)[0] shall designate the real part of z, and
— reinterpret_cast<cv T(&)[2]>(z)[1] shall designate the imaginary part of z.
Moreover, if a is an expression of type cv std::complex<T>* and the expression a[i] is well-defined for an integer expression i, then:
— reinterpret_cast<cv T*>(a)[2*i] shall designate the real part of a[i], and
— reinterpret_cast<cv T*>(a)[2*i + 1] shall designate the imaginary part of a[i].

The implementation must thus contain a real and an imaginary part. 26.4.8 states that the transcendental functions should behave like the equivalent C functions. Nothing else is said about the precision of the functions and operators.

So, if I'm correct, any implementation compliant with the standard must contain a real and an imaginary part, but no support for de-normalized numbers is required. Apparently, the implementers (at least GCC and ICC) have chosen the simplest solution: do nothing about these special cases.

--
Matthieu Schaller
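
For illustration, a small program exercising the layout guarantee quoted above: a std::complex<T> lvalue may be reinterpreted as an array of two T.

    #include <cassert>
    #include <complex>

    int main()
    {
        std::complex<double> z(1.0, 2.0);
        double (&parts)[2] = reinterpret_cast<double(&)[2]>(z);
        assert(parts[0] == 1.0);  // designates the real part
        assert(parts[1] == 2.0);  // designates the imaginary part
    }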

On 11/03/12 13:16, Matthieu Schaller wrote:
26.4.0.3> If the result of a function is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
This is not one of those cases, since the result is indeed mathematically defined and is representable as a perfectly normal value.
De-normalized numbers are apparently not supported.
I don't see how denormalized numbers are related to this.

26.4.0.3> If the result of a function is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
This is not one of those cases, since the result is indeed mathematically defined and is representable as a perfectly normal value.
De-normalized numbers are apparently not supported.
I don't see how denormalized numbers are related to this.
I do totally agree with you. I am not arguing against you. I am just providing the extracts of the standard more or less (rather less in this case) related to the question of precision that was raised earlier in the discussion.

The original question still remains: should a boost::complex class rely on the std::complex implementation, no matter how (im-)precise, or should Boost provide a truly precise complex class?

Regards,
M.

--
Matthieu Schaller

On 12/03/12 22:50, Matthieu Schaller wrote:
[...]
The original question still remains: should a boost::complex class rely on the std::complex implementation, no matter how (im-)precise, or should Boost provide a truly precise complex class?
Hi,

boost::complex should define the semantics of its operations. It will be difficult to rely on std::complex if different std library implementations use different semantics. For example, libc++ from clang takes care of nan and infinity. So, once you have defined the semantics of the operations boost::complex supports, you can use the provided std::complex operation if the library provides the same semantics and is faster than your default implementation.

From reading this thread, it seems that some Boosters require a std-compliant implementation, while others could rely on faster and less compliant implementations. I guess you need to see how to provide both. Have you explored the idea of defining a finite_real wrapper that asserts its value is neither nan nor infinity?

    using namespace boost;
    complex<double> a, b, c;
    c = finite(a) * finite(b);

Or defining specific non-accurate functions

    c = boost::complex::times(a, b);

that assume their arguments are neither nan nor infinity? What do others think?

Best,
Vicente
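
For illustration, a minimal sketch of the finite() idea under discussion, with hypothetical names throughout (finite_ref, finite(), the boost_sketch namespace): the wrapper only records the caller's promise that a value holds no NaN/infinity, so operator* can take the fast "grammar school" product instead of an annex-G-style checked one.

    #include <complex>

    namespace boost_sketch {

    // Lightweight tag: "this complex is promised to be finite".
    template <typename T>
    class finite_ref {
        const std::complex<T>& z_;
    public:
        explicit finite_ref(const std::complex<T>& z) : z_(z) {}
        const std::complex<T>& get() const { return z_; }
    };

    template <typename T>
    finite_ref<T> finite(const std::complex<T>& z)
    {
        // A debug-build assert on std::isfinite(z.real()) &&
        // std::isfinite(z.imag()) could check the caller's promise.
        return finite_ref<T>(z);
    }

    // Fast path: no NaN/infinity recovery needed.
    template <typename T>
    std::complex<T> operator*(const finite_ref<T>& x, const finite_ref<T>& y)
    {
        const std::complex<T>& a = x.get();
        const std::complex<T>& b = y.get();
        return std::complex<T>(a.real() * b.real() - a.imag() * b.imag(),
                               a.real() * b.imag() + a.imag() * b.real());
    }

    } // namespace boost_sketch

    // Usage, mirroring the message above:
    //     std::complex<double> a, b, c;
    //     c = boost_sketch::finite(a) * boost_sketch::finite(b);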

On Monday, March 12, 2012, Matthieu Schaller wrote:
The original question still remains: should a boost::complex class rely on the std::complex implementation, no matter how (im-)precise, or should Boost provide a truly precise complex class?
Believing that 'if a little does it well, a lot will do it better', I would say go for a truly precise complex class. But if both behaviours can not-too-painfully be provided using a macro, that might please everyone.

Paul

---
Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbristow@hetp.u-net.com

Hello Matthieu,

I have received your request and will add it to the review schedule.

Best,
Ron

On Mar 5, 2012, at 1:41 PM, Matthieu Schaller wrote:
[... announcement quoted in full above ...]
participants (8)
- Christopher Jefferson
- jtl
- Mathias Gaunard
- Matthieu Schaller
- Neal Becker
- Paul A. Bristow
- Ronald Garcia
- Vicente J. Botet Escriba