
>> I am not sure people would easily give up the speed of the standard
>> std::complex just to be able to handle numbers that should never enter
>> everyday calculations anyway.
>
> The underflow/overflow problem occurs with finite values.
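To make the quoted point concrete, here is a minimal sketch in which both operands are finite and the exact quotient, 1 + 0i, is perfectly representable, yet the naive textbook formula for complex division overflows its intermediates. Whether a given library's std::complex operator/ returns NaN here is implementation-dependent (some rescale internally); the hand-expanded naive formula always fails:

    #include <complex>
    #include <iostream>

    int main() {
        // Finite operands; the exact quotient is 1 + 0i.
        std::complex<double> z(1e200, 1e200);
        std::complex<double> w(1e200, 1e200);

        // Library result: implementation-dependent. A library using the
        // naive formula prints (nan,nan); one that rescales prints (1,0).
        std::cout << z / w << '\n';

        // The naive formula written out by hand. Every input is finite,
        // but c*c + d*d = 2e400 overflows double to +inf, so the finite
        // answer is lost.
        double a = z.real(), b = z.imag();
        double c = w.real(), d = w.imag();
        double denom = c * c + d * d;         // +inf
        double re = (a * c + b * d) / denom;  // inf/inf       -> nan
        double im = (b * c - a * d) / denom;  // (inf-inf)/inf -> nan
        std::cout << re << ' ' << im << '\n'; // nan nan
    }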
True. But are you willing to give up speed for this? It is a question for everyone; I don't know what the Boost members think. Some of the boost::math functions are implemented in a very conservative way which ensures a correct result in all cases.

Looking at the standard (the n3242 draft, to be exact), the following elements are, in my opinion, important.

26.4/3:

> If the result of a function is not mathematically defined or not in
> the range of representable values for its type, the behavior is
> undefined.

De-normalized numbers thus apparently do not have to be supported.

26.4/4:

> If z is an lvalue expression of type cv std::complex<T> then:
> — the expression reinterpret_cast<cv T(&)[2]>(z) shall be well-formed,
> — reinterpret_cast<cv T(&)[2]>(z)[0] shall designate the real part of z, and
> — reinterpret_cast<cv T(&)[2]>(z)[1] shall designate the imaginary part of z.
>
> Moreover, if a is an expression of type cv std::complex<T>* and the
> expression a[i] is well-defined for an integer expression i, then:
> — reinterpret_cast<cv T*>(a)[2*i] shall designate the real part of a[i], and
> — reinterpret_cast<cv T*>(a)[2*i + 1] shall designate the imaginary part of a[i].

An implementation must thus store a real and an imaginary part.

26.4.8 states that the transcendental functions should behave like the equivalent C functions. Nothing else is said about the precision of the functions and operators.

So, if I am correct, any implementation compliant with the standard must contain a real and an imaginary part, but support for de-normalized numbers is not required. Apparently, the implementations (at least GCC and ICC) have chosen the simplest solution for the constructors: do nothing about these special cases.

-- Matthieu Schaller
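For completeness, a minimal sketch (assuming a C++11 compiler) that exercises the layout guarantee quoted above from 26.4/4:

    #include <cassert>
    #include <complex>

    int main() {
        std::complex<double> z(3.0, 4.0);

        // A complex<T> lvalue may be reinterpreted as an array of two T.
        double (&parts)[2] = reinterpret_cast<double (&)[2]>(z);
        assert(parts[0] == z.real());  // real part first
        assert(parts[1] == z.imag());  // imaginary part second

        // The same guarantee extends to arrays of complex<T>.
        std::complex<double> a[2] = { {1.0, 2.0}, {3.0, 4.0} };
        double* flat = reinterpret_cast<double*>(a);
        assert(flat[2 * 1] == a[1].real());      // 3.0
        assert(flat[2 * 1 + 1] == a[1].imag());  // 4.0
    }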