
We don't seem to be running any tests on Comeau C++, why? Not using the "reference compiler" for our testing doesn't make much sense. Has the "reference compiler" changed without me noticing? -- Peter Dimov http://www.pdimov.com

Peter Dimov wrote:
We don't seem to be running any tests on Comeau C++, why? Not using the "reference compiler" for our testing doesn't make much sense. Has the "reference compiler" changed without me noticing?
This is really a valid question! I remember a discussion before 1.32 where the decision was made to cease the como tests for some reason; I think it was due to (auto-)link problems: http://article.gmane.org/gmane.comp.lib.boost.devel/115594 Anyway, at the moment the only tests on EDG are those for the Intel compiler, which makes it difficult to isolate EDG-typical problems. E.g. there have been some problems in Boost.Test for a long time because of overloading issues where EDG behaves more correctly than all the other compilers: http://tinyurl.com/5g65r (this particular one is caused by Gennadiy's changes after 1.32). Cheers, Stefan

Stefan Slapeta wrote:
Peter Dimov wrote:
We don't seem to be running any tests on Comeau C++, why? Not using the "reference compiler" for our testing doesn't make much sense. Has the "reference compiler" changed without me noticing?
This is really a valid question! I remember a discussion before 1.32 where the decision was made to cease the como tests for some reason, I think it was due to (auto-)link problems:
It certainly seems very odd to cease testing all header-only libraries because of auto-link problems.
Anyway, at the moment the only tests on EDG are those for the Intel compiler, which makes it difficult to isolate EDG-typical problems. E.g. there have been some problems in Boost.Test for a long time because of overloading issues where EDG behaves more correctly than all the other compilers: http://tinyurl.com/5g65r (this particular one is caused by Gennadiy's changes after 1.32).
You should probably contribute a test case for the test library that demonstrates the problem. I'm starting to think that we need a proper bug/patch reporting policy at Boost, one that says "Your bug report or patch WILL PROBABLY NOT be considered unless you contribute a test case" and "Never fix a problem in a Boost library without creating a test case first, doubly so if you are not the maintainer for this specific library". ;-) To balance this we should probably state also that "Committing a new test for any Boost library, no matter who maintains it, is fair game. Please do that often."

Peter Dimov wrote: [...]
You should probably contribute a test case for the test library that demonstrates the problem.
I'm starting to think that we need a proper bug/patch reporting policy at Boost, one that says "Your bug report or patch WILL PROBABLY NOT be considered unless you contribute a test case" and "Never fix a problem in a Boost library without creating a test case first, doubly so if you are not the maintainer for this specific library". ;-)
[...] I'll do that all next week - sorry, too busy at the moment! Stefan

It certainly seems very odd to cease testing all header-only libraries because of auto-link problems.
I don't believe that was *ever* an issue; auto-linking has never been enabled for Comeau. There was a discussion about this, but the problem turned out to be Comeau not supporting DLLs at all. John.

Anyway, at the moment the only tests on EDG are those for the Intel compiler, which makes it difficult to isolate EDG-typical problems. E.g. there have been some problems in Boost.Test for a long time because of overloading issues where EDG behaves more correctly than all the other compilers: http://tinyurl.com/5g65r (this particular one is caused by Gennadiy's changes after 1.32).
I believe this is a compiler/STL problem. Here is a snippet of the offending code:

    template<typename T>
    struct print_log_value {
        void operator()( std::ostream& ostr, T const& t )
        {
            // Show all possibly significant digits (for example, 17 for 64-bit double).
            if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
                ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);

            ostr << t; // by default print the value
        }
    };

If I try to print a C string literal, this function is instantiated with char [N], and it fails to instantiate std::numeric_limits<char [N]>. Shouldn't numeric_limits work for any type? Gennadiy

Gennadiy Rozental wrote:
I believe this is a compiler/STL problem. Here is snippet of offending code:
    template<typename T>
    struct print_log_value {
        void operator()( std::ostream& ostr, T const& t )
        {
            // Show all possibly significant digits (for example, 17 for 64-bit double).
            if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
                ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);

            ostr << t; // by default print the value
        }
    };
If I try to print a C string literal, this function is instantiated with char [N], and it fails to instantiate std::numeric_limits<char [N]>.
Shouldn't numeric_limits work for any type?
I _think_ (correct me if I'm wrong) the problem is quite different and related to the portability issues of BOOST_CHECK_EQUAL with C string literals. The code above doesn't compile on VC 7.1, either. However, the regression tests don't show any problems with VC 7.1!! Thus, the reason for this issue is IMO that this code isn't instantiated at all for VC 7.1 (but it _is_ for Intel), and that's very similar to the BOOST_CHECK_EQUAL problem! Here is a test case - it compiles on VC 7.1 and many others (but shouldn't), yet doesn't compile on EDG:

    #include <limits>

    template <typename T>
    std::size_t test(T const& t)
    {
        // this is the correct one, called by EDG;
        // however, numeric_limits can't be instantiated for char[N]
        return std::numeric_limits<T>::digits;
    }

    std::size_t test(const char* t)
    {
        // this is the one called by many others
        return 0;
    }

    int main(int argc, char **argv)
    {
        char buf[] = "asdf";
        test(buf); // we have an array, not const char*!!!
    }

But there are also other (related), even more subtle overloading errors... I'll have to look into my mailbox for those. Stefan

I _think_ (correct me if I'm wrong) the problem is quite different and related to the portability issues of BOOST_CHECK_EQUAL with C string literals.
The code above doesn't compile on VC 7.1, either. However, the regression tests don't show any problems with VC 7.1!! Thus, the reason for this issue is IMO that this code isn't instantiated at all for VC 7.1 (but it _is_ for Intel) and that's very similar to the BOOST_CHECK_EQUAL problem!
No. VC 7.1 has exactly the same problem. The reason it works is that I tested with this compiler, and the code actually looks like this:

    template<typename T>
    struct print_log_value {
        void operator()( std::ostream& ostr, T const& t )
        {
    #if !BOOST_WORKAROUND(BOOST_MSVC,BOOST_TESTED_AT(1310))
            // Show all possibly significant digits (for example, 17 for 64-bit double).
            if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
                ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);
    #endif
            ostr << t; // by default print the value
        }
    };
Here is a test case - it compiles on VC 7.1 and many others (but shouldn't) but doesn't compile on EDG:
Why do you think it shouldn't? numeric_limits should work for any type, shouldn't it? Gennadiy

    template<typename T>
    struct print_log_value {
        void operator()( std::ostream& ostr, T const& t )
        {
    #if !BOOST_WORKAROUND(BOOST_MSVC,BOOST_TESTED_AT(1310))
            // Show all possibly significant digits (for example, 17 for 64-bit double).
            if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
                ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);
    #endif
            ostr << t; // by default print the value
        }
    };
Would a solution like the following cover all FPTs?

    template<bool is_spec>
    struct set_log_precision {
        template<typename T>
        static void _( std::ostream& ostr ) {}
    };

    // Show all possibly significant digits (for example, 17 for 64-bit double).
    template<>
    struct set_log_precision<true> {
        template<typename T>
        static void _( std::ostream& ostr )
        {
            ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);
        }
    };

    template<typename T>
    struct print_log_value {
        void operator()( std::ostream& ostr, T const& t )
        {
            set_log_precision<std::numeric_limits<T>::is_specialized &&
                              std::numeric_limits<T>::radix == 2>::template _<T>( ostr );

            ostr << t;
        }
    };

Gennadiy

Gennadiy Rozental wrote:
    template<typename T>
    struct print_log_value {
        void operator()( std::ostream& ostr, T const& t )
        {
            set_log_precision<std::numeric_limits<T>::is_specialized &&
                              std::numeric_limits<T>::radix == 2>::template _<T>( ostr );

            ostr << t;
        }
    };
I'm afraid not, as you still demand a specialization (std::numeric_limits<T>::radix). Stefan

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental
Sent: 15 March 2005 18:49
To: boost@lists.boost.org
Subject: [boost] Re: No tests on como?

| > template<typename T>
| > struct print_log_value {
| >     void operator()( std::ostream& ostr, T const& t )
| >     {
| > #if !BOOST_WORKAROUND(BOOST_MSVC,BOOST_TESTED_AT(1310))
| >         // Show all possibly significant digits (for example, 17 for 64-bit double).
| >         if( std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::radix == 2 )
| >             ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);
| > #endif
| >         ostr << t; // by default print the value
| >     }
| > };
|
| Would a solution like the following cover all FPTs?
|
| template<bool is_spec>
| struct set_log_precision {
|     template<typename T>
|     static void _( std::ostream& ostr ) {}
| };
|
| // Show all possibly significant digits (for example, 17 for 64-bit double).
| template<>
| struct set_log_precision<true> {
|     template<typename T>
|     static void _( std::ostream& ostr )
|     {
|         ostr.precision(2 + std::numeric_limits<T>::digits * 301/1000);
|     }
| };
|
| template<typename T>
| struct print_log_value {
|     void operator()( std::ostream& ostr, T const& t )
|     {
|         set_log_precision<std::numeric_limits<T>::is_specialized &&
|                           std::numeric_limits<T>::radix == 2>::template _<T>( ostr );
|
|         ostr << t;
|     }
| };

And for a UDT it relies on a specialization being provided for numeric_limits. For example, for NTL, a popular collection of very high precision types, no specialisation is provided for numeric_limits. And if radix is something funny like 10, this formula

    2 + std::numeric_limits<T>::digits * 301/1000

is wrong. So I think you need a fall-back default precision.
It could be the default 6, but that isn't enough to show all the possibly significant digits - even for 32-bit floats you need 9 decimal digits (6 is the number of guaranteed accurate decimal digits). Or it could be something else, like 17, which is enough for the popular 64-bit double (BUT the two least significant digits are noisy, so you would get a lot of uninformative decimal digits for numbers that aren't exactly representable in the binary format). Some comments in the code here would be helpful?

Paul

PS The code in lexical_cast STILL has a weakness in this area, and doesn't even use the 2 + std::numeric_limits<T>::digits * 301/1000 formula. So if you convert to a decimal string and back again you lose LOTS of precision.

PPS MSVC 8.0 introduced a reduction in precision by 1 least significant binary bit in a third of values when inputting floats (doubles are exactly correct). As you will see from the attached float_input.txt, MS say this is now a 'feature'. You can guess my view on this.

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

| template<typename T>
| struct print_log_value {
|     void operator()( std::ostream& ostr, T const& t )
|     {
|         set_log_precision<std::numeric_limits<T>::is_specialized &&
|                           std::numeric_limits<T>::radix == 2>::template _<T>( ostr );
|
|         ostr << t;
|     }
| };
And for a UDT it relies on a specialization being provided for numeric_limits.
For a UDT you could provide your own specialization of print_log_value.
For example, for NTL, a popular collection of very high precision types, no specialization is provided for numeric_limits.
Well, then the default one is going to be used.
And if radix is something funny like 10, this formula
2 + std::numeric_limits<T>::digits * 301/1000
is wrong.
As you can see above, I apply this formula only if radix == 2.
So I think you need a fall-back default precision.
It could be the default 6,
Are you saying that I need to set the precision to a fixed value of 6? Why? There is still an open issue of how to use numeric_limits in a generic function. Any ideas? I guess I could use is_float, but then I would cut off any UDT that does specialize numeric_limits. Gennadiy

"Peter Dimov" <pdimov@mmltd.net> wrote in message news:001301c52a34$4f5d7cc0$6601a8c0@pdimov...
Gennadiy Rozental wrote:
There is still an open issue how to use numeric_limits in generic function. Any ideas?
Don't instantiate it on an array type?
Here is the logic I want:

1. If numeric_limits<T>::is_specialized is true and numeric_limits<T>::radix == 2, then set the precision by the formula ...
2. Print the value of type T.

Gennadiy

There is still an open issue how to use numeric_limits in generic function. Any ideas?
I guess I could use is_float, but then I cut any UDT that does specialize numeric_limits.
I think Peter Dimov spotted the problem here: numeric_limits does provide a default specialisation, but it won't instantiate unless T is a type that can be returned from a function, which means it may not be: an incomplete type, an abstract type, an array type, or a function type. I think you can get close to this by only using numeric_limits if is_object<T>::value is true (you could test for abstract types as well with is_abstract, but that template works on only a minority of compilers). Does this help? John.

John Maddock wrote:
I think Peter Dimov spotted the problem here: numeric_limits does provide a default specialisation, but it won't instantiate unless T is a type that can be returned from a function, which means it may not be:
An incomplete type, An abstract type, An array type, A function type.
I think that T can be incomplete or void.

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental
Sent: 16 March 2005 14:18
To: boost@lists.boost.org
Subject: [boost] Re: Re: No tests on como?

| > | template<typename T>
| > | struct print_log_value {
| > |     void operator()( std::ostream& ostr, T const& t )
| > |     {
| > |         set_log_precision<std::numeric_limits<T>::is_specialized &&
| > |                           std::numeric_limits<T>::radix == 2>::template _<T>( ostr );
| > |
| > |         ostr << t;
| > |     }
| > | };
| >
| > And for a UDT it relies on a specialization being provided for numeric_limits.
|
| For a UDT you could provide your own specialization of print_log_value.
|
| > For example, for NTL, a popular collection of very high precision types,
| > no specialization is provided for numeric_limits.
|
| Well, then the default one is going to be used.

Indeed, I have encouraged the author, Victor Shoup, to specialise numeric_limits for NTL.

| > And if radix is something funny like 10, this formula
| >
| >     2 + std::numeric_limits<T>::digits * 301/1000
| >
| > is wrong.
|
| As you can see above, I apply this formula only if radix == 2.

Agreed.

| > So I think you need a fall-back default precision.
| >
| > It could be the default 6,
|
| Are you saying that I need to set the precision to a fixed value of 6? Why?

Only that if T isn't specialized, or radix != 2, then the stream default precision will apply - 6 for floats, unless it is changed elsewhere - which might be unhelpful because it easily leads to apparently nonsensical reports like 1.23456 != 1.23456 (where the difference is smaller, perhaps only 1 least significant bit, and you might need precision 17 to avoid such a problem). Perhaps a global MACRO value could allow users to choose? For some, the clutter of 17 decimal digits would be intolerable - they would rather put up with the "1.23456 != 1.23456" reports. For others, seeing all the digits is essential to understanding the log.
Paul Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

Gennadiy Rozental wrote:
If I try to print C literal this function is instantiated with char [N]. And if fails to instantiate std::numeric_limits<char [N]>.
Shouldn't numeric_limits work for any type?
No. "Specializations shall be provided for each fundamental type, both floating point and integer, including bool." "Non-fundamental standard types, such as complex<T> (26.2.2), shall not have specializations." Since numeric_limits is not specialized for arrays, the primary template is used, which is ill-formed.

"Peter Dimov" <pdimov@mmltd.net> wrote in message news:017001c52980$40bcb9b0$6801a8c0@pdimov...
Gennadiy Rozental wrote:
If I try to print C literal this function is instantiated with char [N]. And if fails to instantiate std::numeric_limits<char [N]>.
Shouldn't numeric_limits work for any type?
No. "Specializations shall be provided for each fundamental type, both floating point and integer, including bool." "Non-fundamental standard types, such as complex<T> (26.2.2), shall not have specializations."
Since numeric_limits is not specialized for arrays, the primary template is used, which is ill-formed.
Ok. So, how can I use numeric_limits in a generic function? In my particular case I need to set a precision for FP types based on information from numeric_limits. Gennadiy. P.S. BTW, does it say anywhere that the primary template is required to be ill-formed for array types? Could STL implementers do a better job?

Gennadiy Rozental wrote:
P.S. BTW, does it say anywhere that the primary template is required to be ill-formed for array types? Could STL implementers do a better job?
It is implied by the signature of numeric_limits<T>::max:

    template<class T> class numeric_limits {
    public:
        static T max() throw();
        // ...
    };

When T is an array type, this declaration is ill-formed. I'm not sure whether STL implementers are allowed to let a program that uses numeric_limits<X[N]> compile anyway, but they are certainly not required to. This probably applies to abstract classes as well.

Peter Dimov <pdimov@mmltd.net> wrote:
We don't seem to be running any tests on Comeau C++, why? Not using the "reference compiler" for our testing doesn't make much sense.
I can run, and in fact I was running, regression tests on como 4.3.3 some time before release 1.32. There are however some issues with it:

1. como in "strict" mode does not support platform-specific headers (at least on Windows, and this is my testing environment)
2. como does not support dynamic libraries at all; this is going to be improved with the next version (hopefully to be released soon)
3. libcomo (Comeau's implementation of the C++ standard library) is far from perfect, but is improving
4. como, at least with my backend (MSVC), has the worst compilation times I have ever seen; in fact, running regression tests with como was a real PITA
5. there does not seem to be big interest on the developers' side in taking como results into account (but that's my totally subjective feeling - I know that there are developers who care about como results, and Greg Comeau also seemed interested in good compatibility with Boost)

I can run como again (as soon as I have my test machine running - it's still not set up), but I do not want to waste a great amount of CPU cycles and disk space if it's not used, and there seems to be an open question about the C++ conformance mode (strict vs. relaxed) that should be used in tests. B.

Bronek Kozicki wrote:
Peter Dimov <pdimov@mmltd.net> wrote:
We don't seem to be running any tests on Comeau C++, why? Not using the "reference compiler" for our testing doesn't make much sense.
I can run, and in fact I was running, regression tests on como 4.3.3 some time before release 1.32. There are however some issues with it:

1. como in "strict" mode does not support platform-specific headers (at least on Windows, and this is my testing environment)
2. como does not support dynamic libraries at all; this is going to be improved with the next version (hopefully to be released soon)
3. libcomo (Comeau's implementation of the C++ standard library) is far from perfect, but is improving
4. como, at least with my backend (MSVC), has the worst compilation times I have ever seen; in fact, running regression tests with como was a real PITA
5. there does not seem to be big interest on the developers' side in taking como results into account (but that's my totally subjective feeling - I know that there are developers who care about como results, and Greg Comeau also seemed interested in good compatibility with Boost)
I understand your concerns, but it seems unfair not to test anything because some libraries have problems on como. Is it possible to set up a como test suite that only tests libraries that aren't affected by (1) or (2)? This would presumably address (4) as well. As for (5), I'm very interested in seeing the test results of running a strict EDG-based compiler. It is true that few of us use como as our primary compiler, but it helps people write correct code, instead of code that just happens to compile on our test matrix. The latest Metrowerks and g++ seem promising, but I'm not sure whether they can displace EDG as a test platform yet when strict conformance is desired.

Peter Dimov wrote:
We don't seem to be running any tests on Comeau C++, why?
Greg asked me to hold back the results for como on Linux since there were a number of problems (no support for shared libraries, no support for threads, problems with signal handling in Boost.Test, and probably some other problems I currently don't recall). Regards, m

I've tried to run tests of the serialization library with como. I've run into a problem whereby the serialization library doesn't build due to an issue with libcomo 4.3? (with vc 7.1): the stdarg.h header doesn't have va_list, or va_arg isn't hoisted into the std namespace. I sent a test case to Comeau and they acknowledged that it's a problem, but I haven't pursued it. I want to test with this platform. The Jamfile for the serialization tests and library build includes code to exclude Comeau from DLL builds and tests, so that is not a problem for me. Robert Ramey Peter Dimov wrote:
We don't seem to be running any tests on Comeau C++, why? Not using the "reference compiler" for our testing doesn't make much sense. Has the "reference compiler" changed without me noticing?
participants (8)
-
Bronek Kozicki
-
Gennadiy Rozental
-
John Maddock
-
Martin Wille
-
Paul A Bristow
-
Peter Dimov
-
Robert Ramey
-
Stefan Slapeta