Boost.Test feature request

I have a couple of annoying issues with Boost.Test's output on failed tests:

1) The printing code doesn't restore the iostream's original precision after it has changed it for formatted output. It's a minor issue but very annoying at times :-) It looks trivial to patch in print_log_value as well?

2) If the type has no numeric_limits support, say if it's a composite type like std::complex<double>, then you don't get enough digits to tell what the problem is. Could the code default to whatever digits long double has? Again a minor change to print_log_value (in my experience std libs ignore requests for more digits than a type really has anyway, so it should be harmless?).

3) If a BOOST_CHECK_CLOSE fails, you get the two values, but not what the % difference between them was. And it's the % difference that matters :-) I know this has come up before, but it's still bugging me. Here's the problem: if the type is, say, a 128-bit long double, then you get two very long 35-digit numbers printed out, which have too many digits for the average calculator, so you can't even calculate the % difference manually to see how far off you were!

Many thanks, John.
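
A minimal sketch of the save-and-restore idea behind John's first point; the print_value helper here is hypothetical and is not the actual Boost.Test print_log_value:

#include <iostream>
#include <limits>

// Hypothetical helper illustrating issue 1: remember the stream's precision,
// raise it for the diagnostic output, then restore the caller's setting.
template <typename T>
void print_value(std::ostream& ostr, const T& value)
{
    std::streamsize old_precision = ostr.precision();      // remember caller's setting
    ostr.precision(std::numeric_limits<T>::digits10 + 2);  // enough digits for T
    ostr << value;
    ostr.precision(old_precision);                         // restore, so later output is unaffected
}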

"John Maddock" <john@johnmaddock.co.uk> wrote in message news:000701c70357$ae886180$c9541b56@fuji...
I have a couple of annoying issues with Boost.Test's output on failed tests:
1) The printing code doesn't restore the iostream's original precision after it's changed it for formatted output. It's a minor issue but very annoying at times :-) Looks trivial to patch in print_log_value as well?
Ok.
2) If the type has no numeric_limits support, say if it's a composite type like std::complex<double>, then you don't get enough digits to tell what the problem is. Could the code default to whatever digits long double has? Again a minor change to print_log_value (in my experience std libs ignore requests for more digits than a type really has anyway, so it should be harmless?).
Could you provide the code?
3) If a BOOST_CHECK_CLOSE fails, you get the two values, but not what the % difference between them was. And it's the % difference that matters :-) I know this has come up before, but it's still bugging me. Here's the problem: if the type is, say, a 128-bit long double, then you get two very long 35-digit numbers printed out, which have too many digits for the average calculator, so you can't even calculate the % difference manually to see how far off you were!
OK I will look again once I have a moment. Gennadiy

I'd like to strongly add my vote for these too. They are a continuing nuisance, of which I have recent and relevant experience from testing the statistical distributions - plug ;-). I'd also like to see additional macros provided to exactly duplicate BOOST_CHECK_CLOSE (and warn and require of course) but with names BOOST_CHECK_CLOSE_PERCENT ... to reduce continuing confusion with percent and fractions. Thanks Paul PS Boost.Test remains an extremely valuable test tool - I vow never to start coding again without using it as I go along!

| -----Original Message-----
| From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental
| Sent: 08 November 2006 18:48
| To: boost@lists.boost.org
| Subject: Re: [boost] Boost.Test feature request
|
| "John Maddock" <john@johnmaddock.co.uk> wrote in message news:000701c70357$ae886180$c9541b56@fuji...
| > I have a couple of annoying issues with Boost.Test's output on failed tests:
| > 2) If the type has no numeric_limits support, say if it's a composite type, like std::complex<double> then you don't get enough digits to tell what the problem is. Could the code default to whatever digits long-double has? Again a minor change to print_log_value (in my experience std lib's ignore requests for more digits than a type really has anyway, so it should be harmless?).
|
| Could you provide the code?

Perhaps?

void set_precision( std::ostream& ostr, mpl::false_ )
{
  if(std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 10)
  { // All decimal types, perhaps including NTL::RR,
    // where digits10 == runtime set precision
    // (but where digits (significand bits) may not be meaningful).
    ostr.precision(std::numeric_limits<T>::digits10);
  }
  else if (std::numeric_limits<T>::digits == 0)
  { // User-defined type or composite type like std::complex<double>
    // for which the digits (significand bits) value has not been assigned,
    // even if is_specialized, perhaps assigning other values.
    ostr.precision(std::numeric_limits<long double>::digits10 + 2);
    // Default to the longest built-in type, plus a couple of noisy digits.
  }
  else if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
  { // && std::numeric_limits<T>::digits != 0
    // Ensure all potentially significant, if noisy, digits are shown.
    ostr.precision( 2 + std::numeric_limits<T>::digits * 301/1000 );
    // std::numeric_limits<T>::max_digits10; for C++0x
  }
} // void set_precision

Untested! Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com
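
For reference, the 301/1000 factor in the sketch above is an integer approximation of log10(2) ≈ 0.30103, so 2 + digits * 301/1000 is a common way of computing the decimal digits needed to round-trip a binary floating-point value (what C++0x names max_digits10). A small stand-alone check of that arithmetic, independent of Boost.Test:

#include <iostream>
#include <limits>

// Stand-alone check of the precision formula used above:
// 2 + (number of significand bits) * 301/1000 decimal digits.
int main()
{
    std::cout << "float:  " << 2 + std::numeric_limits<float>::digits  * 301/1000 << '\n'  // 24 bits -> 9
              << "double: " << 2 + std::numeric_limits<double>::digits * 301/1000 << '\n'; // 53 bits -> 17
    return 0;
}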

Hi list, I read in some previous post about some unique identifier facility in boost (even two different ways IIRC). But I can't find it on the website. Is this library yet to be released, or is it included and hidden within some other ? Cheers, SeskaPeel

SeskaPeel wrote:
Hi list,
I read in some previous post about some unique identifier facility in boost (even two different ways IIRC). But I can't find it on the website. Is this library yet to be released, or is it included and hidden within some other ?
It's in the file vault, as uuid.zip. Sebastian Redl

Thanks Sebastian, The file I downloaded was GUIDv3.zip, as I couldn't find any other (did I go to the wrong place: boost-consulting.com/vault?). And meanwhile I found an Adobe version (zuid) too. Why isn't this lib already included in Boost? Will it be reviewed someday? Thanks, SeskaPeel. Sebastian Redl wrote:
SeskaPeel wrote:
Hi list,
I read in some previous post about some unique identifier facility in boost (even two different ways IIRC). But I can't find it on the website. Is this library yet to be released, or is it included and hidden within some other ?
It's in the file vault, as uuid.zip.
Sebastian Redl

Thanks Sebastian,
The file I downloaded was GUIDv3.zip, as I couldn't find any other (did I go to the wrong place: boost-consulting.com/vault?).
That is the correct file.
And meanwhile I found an Adobe version (zuid) too.
Why isn't this lib already included in Boost? Will it be reviewed someday?
<snip> First, I must apologize for neglecting the guid library and the posts about it. I have recently been working on it again; I just want to finish up some documentation, and then I will put a new version in the vault. I hope to request a review for it soon after that. Andy

Andy said: (by the date of Thu, 9 Nov 2006 15:57:31 +0000 (UTC))
Thanks Sebastian,
The file I downloaded was GUIDv3.zip, as I couldn't find any other (did I go to the wrong place: boost-consulting.com/vault?).
GUID, which stands for Globally Unique IDentifier, is already in Boost. Why include a second one? If you want to work on this, it would be better to improve the existing one. To begin with, it can be separated from the boost::serialization library (it is an extended_type_info): http://boost.org/libs/serialization/doc/extended_type_info.html In fact it is already a separate module (because it makes testing much simpler), so the separation will just make things more formal, and involve just changing namespaces. -- Janek Kozicki |

That's not exactly correct. From the link cited: "permits association of an arbitrary key with a type. Often this key would be the class name - but it doesn't have to be. This key is referred to as a GUID - Globally Unique IDentifier. Presumably it should be unique in the universe. Typically this GUID would be in header files and be used to match type across applications."

So the extended_type_info is a USER rather than a SUPPLIER of a GUID. One of the main functions of BOOST_CLASS_EXPORT_GUID is to associate a type with a portable GUID. The specific GUID is left up to the user. The macro BOOST_CLASS_EXPORT(class name) just uses the class name as the GUID. So the existence of an open, portable GUID would complement extended_type_info rather than conflict with it.

This let me isolate serialization from having to address a number of issues related to coming up with a GUID. Among those are:

a) It would seem attractive to derive a GUID from the typeinfo provided by RTTI. In practice this isn't attractive, since there is no standardized typeinfo format.

b) Namespaces: the same type might appear in different namespaces in different programs. In fact, one of the uses of namespaces is to "wrap" a library so that it can be guaranteed not to accidentally conflict.

c) Having the GUID look like a class name is attractive - but then it runs into character set issues.

d) Having the GUID be derived from the class declaration might seem attractive, but what about when the class changes - we don't want to obsolete all the programs which depend upon it - or do we?

and others. So what starts out seeming like a simple idea starts to get pretty complicated as one tries to realize it in a specific way. So the extended_type_info - and by implication the serialization library - takes a pass: it presumes you've got your own GUID or are happy with using the class name. And this shows up as a problem from time to time, as users want to use class names including a namespace as the GUID, which conflicts with HTML. Robert Ramey

Janek Kozicki wrote:
Andy said: (by the date of Thu, 9 Nov 2006 15:57:31 +0000 (UTC))
Thanks Sebastian,
The file I downloaded was GUIDv3.zip, as I couldn't find any other (did I go to the wrong place: boost-consulting.com/vault?).
GUID, which stands for Globally Unique IDentifier, is already in Boost. Why include a second one? If you want to work on this, it would be better to improve the existing one.
To begin with, it can be separated from the boost::serialization library (it is an extended_type_info):
http://boost.org/libs/serialization/doc/extended_type_info.html
In fact it is already a separate module (because it makes testing much simpler), so the separation will just make things more formal, and involve just changing namespaces.
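
To make Robert's distinction concrete, a minimal sketch of the two export macros he mentions; the class and key names (my_lib::circle, "circle-v1") are invented for illustration, and the archive headers are included before the export macro in the usual way:

#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/export.hpp>
#include <boost/serialization/base_object.hpp>

namespace my_lib {                  // hypothetical library namespace
  struct shape {                    // polymorphic base, serialized through pointers
    virtual ~shape() {}
    template<class Archive>
    void serialize(Archive& /*ar*/, const unsigned int /*version*/) {}
  };
  struct circle : shape {
    double radius;
    circle() : radius(0) {}
    template<class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
      ar & boost::serialization::base_object<shape>(*this);
      ar & radius;
    }
  };
}

// Register the derived type with an explicit, user-chosen portable GUID,
// independent of the namespace and of the class name:
BOOST_CLASS_EXPORT_GUID(my_lib::circle, "circle-v1")

// The shorter form would simply use the class name as the GUID:
// BOOST_CLASS_EXPORT(my_lib::circle)

Because the key is chosen by the user, it can stay stable even if the class is later renamed or moved to another namespace, which is exactly the property an external GUID facility would supply.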

Robert Ramey said: (by the date of Thu, 9 Nov 2006 08:42:22 -0800)
So the existence of an open, portable GUID would complement extended_type_info rather than conflict with it.
oops, of course you are right. I drew conclusions too fast :) -- Janek Kozicki |

SeskaPeel <seskapeel@gmail.com> writes:
Hi list,
I read in some previous post about some unique identifier facility in boost (even two different ways IIRC). But I can't find it on the website. Is this library yet to be released, or is it included and hidden within some other ?
Please don't start new threads by replying to existing messages. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
SeskaPeel <seskapeel@gmail.com> writes:
Hi list,
I read in some previous post about some unique identifier facility in boost (even two different ways IIRC). But I can't find it on the website. Is this library yet to be released, or is it included and hidden within some other ?
Please don't start new threads by replying to existing messages.
How do you know I did this? It doesn't make any difference for me on Thunderbird. SeskaPeel.

SeskaPeel <seskapeel@gmail.com> writes:
How do you know I did this?
From your message headers:
References: <E1Gi70F-0007ak-LX@he304war.uk.vianw.net>
It doesn't make any difference for me on Thunderbird.
Apparently Thunderbird thinks it does, and rightfully so. -- Steven E. Harris

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1Gi70F-0007ak-LX@he304war.uk.vianw.net...
I'd also like to see additional macros provided to exactly duplicate BOOST_CHECK_CLOSE (and warn and require of course) but with names
BOOST_CHECK_CLOSE_PERCENT ...
to reduce continuing confusion with percent and fractions.
There are already BOOST_CHECK_CLOSE and BOOST_CHECK_CLOSE_FRACTION. The first is based on a percent tolerance; the second is based on a fraction tolerance. Gennadiy
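
To illustrate the difference Gennadiy describes, both of the following checks accept the same pair of values; the test-module boilerplate is just typical stand-alone Boost.Test usage, not part of the thread:

#define BOOST_TEST_MODULE close_tolerance_example
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE( percent_vs_fraction )
{
    double observed = 100.05;
    double expected = 100.0;

    // Tolerance given as a PERCENTAGE: accept up to 0.1% relative difference.
    BOOST_CHECK_CLOSE( observed, expected, 0.1 );

    // Tolerance given as a FRACTION: 0.001 means the same 0.1% relative difference.
    BOOST_CHECK_CLOSE_FRACTION( observed, expected, 0.001 );
}

The relative difference between 100.05 and 100.0 is 0.05%, so both checks pass; writing 0.1 where 0.001 was meant (or vice versa) is exactly the percent/fraction confusion being discussed.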

| -----Original Message----- | From: boost-bounces@lists.boost.org | [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental | Sent: 09 November 2006 20:38 | To: boost@lists.boost.org | Subject: Re: [boost] Boost.Test feature request | | "Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message | news:E1Gi70F-0007ak-LX@he304war.uk.vianw.net... | > I'd also like to see additional macros provided to exactly | duplicate | > BOOST_CHECK_CLOSE (and warn and require of course) but with | > names | > | > BOOST_CHECK_CLOSE_PERCENT ... | > | > to reduce continuing confusion with percent and fractions. | | There is already | BOOST_CHECK_CLOSE and BOOST_CHECK_CLOSE_FRACTION. First | based on percent | tolerance. Second is based on fraction tolerance Yes, but it's a matter of making it absolutely clear to the reader that the check is with tolerance as a percent - without him having to re-check the docs to refresh his (in my case failing) memory. Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1GiShS-0000eL-00@he304war.uk.vianw.net...
| There is already | BOOST_CHECK_CLOSE and BOOST_CHECK_CLOSE_FRACTION. First | based on percent | tolerance. Second is based on fraction tolerance
Yes, but it's a matter of making it absolutely clear to the reader that the check is with tolerance as a percent - without him having to re-check the docs to refresh his (in my case failing) memory.
You are free to define whatever names are convenient for you. As of now I do not see a basis to introduce another synonym for an existing tool. Gennadiy
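
Along the lines Gennadiy suggests, Paul's preferred names can be provided locally; a sketch of a user-side header (these _PERCENT aliases are not part of Boost.Test):

// my_test_tools.hpp (hypothetical user header) - self-documenting aliases that
// make the percent-based tolerance explicit at the call site.
#ifndef MY_TEST_TOOLS_HPP
#define MY_TEST_TOOLS_HPP

#include <boost/test/unit_test.hpp>

#define BOOST_WARN_CLOSE_PERCENT    BOOST_WARN_CLOSE
#define BOOST_CHECK_CLOSE_PERCENT   BOOST_CHECK_CLOSE
#define BOOST_REQUIRE_CLOSE_PERCENT BOOST_REQUIRE_CLOSE

#endif // MY_TEST_TOOLS_HPP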

| -----Original Message-----
| From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental
| Sent: 08 November 2006 18:48
| To: boost@lists.boost.org
| Subject: Re: [boost] Boost.Test feature request
|
| "John Maddock" <john@johnmaddock.co.uk> wrote in message news:000701c70357$ae886180$c9541b56@fuji...
| > I have a couple of annoying issues with Boost.Test's output on failed tests:
| > 2) If the type has no numeric_limits support, say if it's a composite type, like std::complex<double> then you don't get enough digits to tell what the problem is. Could the code default to whatever digits long-double has? Again a minor change to print_log_value (in my experience std lib's ignore requests for more digits than a type really has anyway, so it should be harmless?).
|
| Could you provide the code?

I've been investigating and experimenting with this a bit. At present, with a UDT one can get a message that, in effect, says

"difference between a{2} and a{2} exceeds 0"

It can be avoided by adding

ostr.precision(std::numeric_limits<long double>::digits10 + 2);

to get

"difference between {2} and {2.0000000000000009} exceeds 0"

although I would personally prefer all the trailing zeros to be shown too, thus explicitly showing the precision of all values:

"difference between a{2.0000000000000000} and b{2.0000000000000009} exceeds 0"

but this may not be to everyone's taste, so perhaps it could be optional?

So I've got as far as changing set_precision to:

void set_precision( std::ostream& ostr, mpl::false_ )
{
  ostr.setf(std::ios::showpoint); // Show all significant trailing zeros.
  if(std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 10)
  { // All decimal types, perhaps including NTL::RR where digits10 == runtime set precision.
    ostr.precision(std::numeric_limits<T>::digits10);
  }
  else if (std::numeric_limits<T>::digits == 0)
  { // User-defined type for which the digits (significand bits) value has not been assigned,
    // even if is_specialized == true, perhaps assigning other numeric_limits values.
    ostr.precision(std::numeric_limits<long double>::digits10 + 2);
    // But this may still not be enough to avoid misleading error messages
    // caused by not enough decimal digits being displayed.
    // User-defined types should define digits, even if approximately, as a workaround.
  }
  else if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
  { // && std::numeric_limits<T>::digits != 0
    ostr.precision( 2 + std::numeric_limits<T>::digits * 301/1000 );
    // std::numeric_limits<T>::max_digits10; for C++0x
  }
  // else default is equivalent to ostr.precision(6);
} // void set_precision

And this seems to work as I suggest it should.

But the logic of this is more complicated than might appear, so other views are welcome.

(Also, what are views on the usefulness of trailing zeros to show the implicit range? It is certainly helpful in testing the precision of floating-point functions.)

HTH Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1GiE1n-000204-61@he304war.uk.vianw.net...
| > 2) If the type has no numeric_limits support, say if it's | a composite | > type, | > like std::complex<double> then you don't get enough digits | to tell what | > the | > problem is. Could the code default to whatever digits | long-double has? | > Again a minor change to print_log_value (in my experience | std lib's ignore | > requests for more digits than a type really has anyway, so | it should be | > harmless?). | | Could you provide the code?
I've been investigating and experimenting with this a bit. ...
So I've got as far as changing set_precision to:
void set_precision( std::ostream& ostr, mpl::false_ )
{
  ostr.setf(std::ios::showpoint); // Show all significant trailing zeros.
  if(std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 10)
  { // All decimal types, perhaps including NTL::RR where digits10 == runtime set precision.
    ostr.precision(std::numeric_limits<T>::digits10);
  }
  else if (std::numeric_limits<T>::digits == 0)
  { // User-defined type for which the digits (significand bits) value has not been assigned,
    // even if is_specialized == true, perhaps assigning other numeric_limits values.
    ostr.precision(std::numeric_limits<long double>::digits10 + 2);
    // But this may still not be enough to avoid misleading error messages
    // caused by not enough decimal digits being displayed.
    // User-defined types should define digits, even if approximately, as a workaround.
  }
  else if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
  { // && std::numeric_limits<T>::digits != 0
    ostr.precision( 2 + std::numeric_limits<T>::digits * 301/1000 );
    // std::numeric_limits<T>::max_digits10; for C++0x
  }
  // else default is equivalent to ostr.precision(6);
} // void set_precision
And this seems to work as I suggest it should.
But the logic of this is more complicated than might appear, so other views are welcome.
(Also, what are views on the usefulness of trailing zeros to show the implicit range? It is certainly helpful in testing the precision of floating-point functions.)
We will have to return to this a bit later. Also I would like to hear from you the rationale for all the conditions and selected precisions. And I would also like to hear more opinions on this. Gennadiy

| -----Original Message-----
| From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental
| Sent: 09 November 2006 20:42
| To: boost@lists.boost.org
| Subject: Re: [boost] Boost.Test feature request
|
| Also I would like to hear from you the rationale for all the conditions and selected precisions.

My thinking is:

1 The number of digits shown should be only those that are potentially significant for the floating-point type being reported. For float 9, double 17, long double - varies.

2 Despite the clutter caused by trailing zeros, these have the advantage of showing the meaningful precision of the value.

3 The precision is very important because many obscure not-equal and not-quite-close-enough errors involve computation, round-off errors and unexpected conversions losing accuracy. The last thing we want is error messages saying "{1.2345} and {1.2345} are not equal": this appears to be nonsense, and some users will immediately assume Boost.Test is at fault.

Ideally the precision should be std::numeric_limits<T>::max_digits10, as now accepted for C++0x, and this should be defined for ALL types, built-in as well as user-defined; but we should not attempt this until it becomes standard. However, we might consider designating a BOOST_HAS_MAX_DIGITS10 macro (but not yet defining it) and using it now thus:

#ifdef BOOST_HAS_MAX_DIGITS10
  if(std::numeric_limits<T>::is_specialized)
    precision = std::numeric_limits<T>::max_digits10;
#else
  // existing code
#endif

So for the time being - it's already complicated enough ;) :

For all decimal types, and this includes user-defined decimal types like NTL::RR with a typical arbitrary but pre-set precision, the existing std::numeric_limits<T>::digits10 is correct. Since std::numeric_limits<T>::digits may also be correct, it is essential to test for radix == 10 first. As supplied, NTL::RR does not have numeric_limits specialized and digits10 is undefined, but users can do this if they wish. However, the precision in bits is set at compile time, while the displayed digits, effectively digits10, can be altered at run-time.

For built-in floating-point types, the formula you are now using, which derives from std::numeric_limits<T>::digits, the number of significand bits, is correct.

For UD floating-point types, provided std::numeric_limits<T>::digits is valid (has been specialized and has had a value assigned - all unassigned values are zero by default), this formula is correct.

If digits is (still) zero, then the most likely reason is a high (but not necessarily fixed at compile time) precision.

For any other radix, or where std::numeric_limits<T>::digits == 0 (almost certainly unassigned), we really don't have much to go on. But the default precision of 6 decimal digits is almost certainly not the most helpful - far too few. So John Maddock has suggested using the precision of the most accurate built-in long double type, std::numeric_limits<long double>::digits. This deals with the case of a composite type like <complex>. It may give some superfluous, but harmless, decimal digits, but should never produce an unhelpful "{1.2345} and {1.2345} are not equal" message.

Ideally the user should always specialize numeric_limits for his type, and assign a suitable value to digits, digits10 and/or eventually max_digits10.

For integer types, precision is meaningless, so ideally set_precision should not be called at all, or do nothing.
HTH Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com
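
A rough sketch of the numeric_limits specialization Paul recommends for a user-defined type; my_math::my_fixed and its 64-bit significand are invented for illustration, and a real specialization would define the remaining members as well:

#include <limits>

namespace my_math {
  class my_fixed { /* hypothetical floating-point type with a 64-bit binary significand */ };
}

namespace std {
  template <>
  class numeric_limits<my_math::my_fixed> {
  public:
    static const bool is_specialized = true;
    static const int  radix    = 2;   // binary significand
    static const int  digits   = 64;  // significand bits, so set_precision can compute decimal digits
    static const int  digits10 = 18;  // decimal digits guaranteed exact (as for a 64-bit x87 long double)
    // ... the other members (min(), max(), epsilon(), etc.) are omitted in this sketch.
  };
}

With digits set, the 2 + digits * 301/1000 branch of set_precision would pick 21 decimal digits for this type instead of falling back to the 6-digit default.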

I've had some further thoughts on this. I think a few macros will allow things to be easily 'personalized'. I can agree that not everyone wants tons of decimal digits, or trailing zeros, all the time. So why not use a macro BOOST_TEST_SHOW_ZEROS, so:

#ifdef BOOST_TEST_SHOW_ZEROS
  ostr << std::showpoint; // Show trailing zeros to indicate the FP precision.
#endif

#ifdef BOOST_TEST_SHOW_PRECISION
#if (BOOST_TEST_SHOW_PRECISION > 0)
  ostr.precision(BOOST_TEST_SHOW_PRECISION);
  // Provides a simple way to fix the precision you want.
  // For example, with some arbitrary-precision type that would show hundreds of digits,
  // you might want to limit the output from the test.
  // Or you are only interested in the first few digits because test failures are likely to be big differences.
#else
  // Leave at the default precision (6).
#endif
#else
  set_precision(...); // As at present, so existing behaviour is unchanged.
#endif

We could also meet some people's very reasonable desire to have integer output in hex with

#ifdef BOOST_TEST_SHOW_HEX
  ostr << std::hex;
#endif

and even BOOST_TEST_SHOW_HEX_FP to show the hex representation of floating-point values. But this would be much more work ;-)

HTH more. Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com
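
The BOOST_TEST_SHOW_HEX_FP idea amounts to printing the bytes of the value's object representation; a stand-alone sketch of that technique (not Boost.Test code, and print_double_bits is an invented helper):

#include <cstdio>
#include <cstring>

// Print the raw bit pattern of a double as hex - the kind of output a
// hypothetical BOOST_TEST_SHOW_HEX_FP option might produce.
void print_double_bits(double x)
{
    unsigned char bytes[sizeof(double)];
    std::memcpy(bytes, &x, sizeof(double));        // copy the object representation
    for (int i = sizeof(double) - 1; i >= 0; --i)  // most-significant byte first on little-endian machines
        std::printf("%02x", bytes[i]);
    std::printf("\n");
}

int main()
{
    print_double_bits(2.0);                 // 4000000000000000 on IEEE-754 little-endian platforms
    print_double_bits(2.0000000000000009);  // 4000000000000002 - differs only in the last bits
    return 0;
}
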
participants (10)
- Andy
- David Abrahams
- Gennadiy Rozental
- Janek Kozicki
- John Maddock
- Paul A Bristow
- Robert Ramey
- Sebastian Redl
- SeskaPeel
- Steven E. Harris