
On 12/03/12 at 22:50, Matthieu Schaller wrote:
26.4.0.3> If the result of a function is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
This is not one of those cases, since the result is indeed mathematically defined and is representable as a perfectly normal value.
De-normalized numbers are apparently not supported.
I don't see how denormalized numbers are related to this.
I do totally agree with you. I am not arguing against you. I am just providing the extracts from the standard more or less (rather less in this case) related to the question of precision that was raised earlier in the discussion.
The original question still remains: should a boost::complex class rely on the std::complex implementation, no matter how (im)precise, or should Boost provide a truly precise complex class?
Hi,

boost::complex should define the semantics of its operations. It will be difficult to rely on std::complex if different standard library implementations use different semantics. For example, clang's libc++ takes care of NaN and infinity.

So, once you have defined the semantics of the operations boost::complex supports, you could use the provided std::complex operation whenever the library implements the same semantics and is faster than your default implementation.

From reading this thread, it seems that some Boosters require a standard-compliant implementation, while others could rely on faster, less compliant ones. I guess you need to see how to provide both.

Have you explored the idea of defining a finite_real wrapper that asserts its value is neither NaN nor infinity?

using namespace boost;
complex<double> a, b, c;
c = finite(a) * finite(b);

Or of defining specific non-accurate functions

c = boost::complex::times(a, b);

that assume their arguments are neither NaN nor infinity?

What do others think?

Best,
Vicente