Sun C++ regression tests missing?

Where did our Sun C++ test results go? I wanted to make sure that I haven't introduced any regressions with my latest changes to bind. -- Peter Dimov http://www.pdimov.com

On Thu, 17 Mar 2005 23:09:59 +0200, Peter Dimov <pdimov@mmltd.net> wrote:
Where did our Sun C++ test results go? I wanted to make sure that I haven't introduced any regressions with my latest changes to bind.
I stopped running them last Monday (3/14). The general consensus seemed to be that the number of failures was so high that running the tests was a waste of time. This is with SunPRO 5.3, which is the only version I have access to. -- Caleb Epstein caleb dot epstein at gmail dot com

I stopped running them last Monday (3/14). The general consensus seemed to be that the number of failures was so high that running the tests was a waste of time. This is with SunPRO 5.3, which is the only version I have access to.
A pity; I used to check those results for Regex and Type Traits compatibility. One of the main annoyances was dependent libs like Boost.Test and Program Options not compiling, which caused some tests to fail unnecessarily; other than that, many libs really should be supported on that platform. John.

John Maddock writes:
I stopped running them last Monday (3/14). The general consensus seemed to be that the number of failures was so high that running the tests was a waste of time. This is with SunPRO 5.3, which is the only version I have access to.
A pity; I used to check those results for Regex and Type Traits compatibility. One of the main annoyances was dependent libs like Boost.Test and Program Options not compiling, which caused some tests to fail unnecessarily; other than that, many libs really should be supported on that platform.
And the rest can be marked as "n/a", thus getting us a usable picture for tracking new failures / regressions. -- Aleksey Gurtovoy MetaCommunications Engineering

On Mon, 21 Mar 2005 10:00:45 -0600, Aleksey Gurtovoy <agurtovoy@meta-comm.com> wrote:
John Maddock writes:
I stopped running them last Monday (3/14). The general consensus seemed to be that the number of failures was so high that running the tests was a waste of time. This is with SunPRO 5.3, which is the only version I have access to.
A pity; I used to check those results for Regex and Type Traits compatibility. One of the main annoyances was dependent libs like Boost.Test and Program Options not compiling, which caused some tests to fail unnecessarily; other than that, many libs really should be supported on that platform.
And the rest can be marked as "n/a", thus getting us a usable picture for tracking new failures / regressions.
OK, I'll start running them again tomorrow. It's just uncommenting a line in a shell script, no biggie.

FWIW, there had been numerous comments to the effect of "why do we even bother with this compiler?", and no one ever responded to my numerous threats to stop running the regression tests with SunPRO, so I just stopped them last week to save cycles. I didn't think they would be missed.

Who should I work with to get the appropriate tests marked as known-to-fail or N/A? This exercise is necessary for some gcc-on-Solaris tests as well. For instance, all of the "*_lib" tests in Boost.Thread fail because "gcc -static" won't link with *any* shared libs. Solaris does not provide static versions of -lrt or -lthread, so these tests fail to link. There really should be a "-prefer-static" option to gcc. Alternately, the Jamfiles could be hacked to use -Wl,-Bstatic / -Wl,-Bdynamic guards around the inclusion of the Boost.Thread lib for these tests. -- Caleb Epstein caleb dot epstein at gmail dot com

OK, I'll start running them again tomorrow. It's just uncommenting a line in a shell script, no biggie.
FWIW, there had been numerous comments to the effect of "why do we even bother with this compiler?", and no one ever responded to my numerous threats to stop running the regression tests with SunPRO, so I just stopped them last week to save cycles. I didn't think they would be missed.
Fair enough. If only we could figure out why Boost.Test was failing (it looked like a legitimate compiler error), we could probably get the failure rate down quite a bit.
Who should I work with to get the appropriate tests marked as known-to-fail or N/A? This exercise is necessary for some gcc-on-Solaris tests as well. For instance, all of the "*_lib" tests in Boost.Thread fail because "gcc -static" won't link with *any* shared libs. Solaris does not provide static versions of -lrt or -lthread, so these tests fail to link. There really should be a "-prefer-static" option to gcc. Alternately, the Jamfiles could be hacked to use -Wl,-Bstatic / -Wl,-Bdynamic guards around the inclusion of the Boost.Thread lib for these tests.
That would probably be my preferred option; is it just a case of adding -Wl,-Bdynamic to the very end of the command line? Rene would probably know the best way to fix this. John.

Fair enough. If only we could figure out why Boost.Test was failing (it looked like a legitimate compiler error), we could probably get the failure rate down quite a bit.
One of the recent versions of Boost.Test should have compiled with SunPRO 5.3. I will make sure the release version passes this compiler as well. Where can I see the errors? Gennadiy

On Tue, 22 Mar 2005 09:14:59 -0500, Gennadiy Rozental <gennadiy.rozental@thomson.com> wrote:
One of the recent versions of Boost.Test should have compiled with SunPRO 5.3.
I will make sure the release version passes this compiler as well. Where can I see the errors?
My tests for today, which once again include SunPRO 5.3, are still running. They should be up later today on meta-comm. I really wish I could run the tests with "bjam -j#". They are running on a single CPU of a 24-way box, so they take ages. Turning on parallelism confuses the output processing to the point of making the results unusable, however :( -- Caleb Epstein caleb dot epstein at gmail dot com

I really wish I could run the tests with "bjam -j#". They are running on a single CPU of a 24-way box, so they take ages. Turning on parallelism confuses the output processing to the point of making the results unusable, however :(
I'm not sure if this would work, but how about: do a clean build using the -j option (but discard the results), then do an incremental test build (not parallel) and collect the results for processing. I'm not sure how well this would work, though, as we still have problems with incremental builds. John.
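A sketch of that two-phase run, assuming the tests are driven by bjam directly (the options, target name, and CPU count here are guesses, not the actual regression-script invocations):

```shell
# Phase 1: clean parallel build; the interleaved output is unusable,
# so discard it -- we only want the built targets and test artifacts.
bjam -j8 test > /dev/null 2>&1

# Phase 2: serial incremental pass over the same targets; nearly
# everything is already up to date, so this mostly just re-runs and
# produces clean, ordered logs for the results processing.
bjam test > bjam.log 2>&1
```

Whether this helps depends on how reliable incremental builds are, which is exactly the concern raised above.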

On Tue, 22 Mar 2005 09:14:59 -0500, Gennadiy Rozental <gennadiy.rozental@thomson.com> wrote:
Fair enough. If only we could figure out why Boost.Test was failing (it looked like a legitimate compiler error), we could probably get the failure rate down quite a bit.
One of the recent versions of Boost.Test should have compiled with SunPRO 5.3.
I will make sure the release version passes this compiler as well. Where can I see the errors?
OK, the SunPRO results are back now. Boost.Test isn't faring so well: http://www.meta-comm.com/engineering/boost-regression/cvs-head/developer/tes... and more specifically: http://tinyurl.com/5qr49 -- Caleb Epstein caleb dot epstein at gmail dot com

OK, the SunPRO results are back now.
Thanks Caleb!
Boost.Test isn't faring so well:
http://www.meta-comm.com/engineering/boost-regression/cvs-head/developer/tes...
and more specifically:
Yes, it's the failure of the library to build that's the issue. Thanks, John.

On Wed, 23 Mar 2005 16:16:12 -0500, Gennadiy Rozental <gennadiy.rozental@thomson.com> wrote:
I will make sure the release version passes this compiler as well. Where can I see the errors?
OK. The current CVS head should pass library compilation on this ^%$^% compiler.
Funny coincidence. I call it that too :-) FYI, there are still a large number of Boost.Test failures on most platforms. I believe they are specifically the ones that read input files and are having problems when run from regression.py (where the working directory is $BOOST_ROOT/status, I believe). You might want to take a look at the DateTime test program called testtz_database, which has code that looks in 3 different places for the input file to work around this same issue. -- Caleb Epstein caleb dot epstein at gmail dot com
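The multi-location lookup Caleb describes can be sketched like this (a hypothetical helper, not the actual testtz_database code):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Look for an input file in several candidate directories and return
// the first path that can actually be opened.  The working directory
// under regression.py is not the test's source directory, so a single
// fixed relative path is not enough.
std::string find_input_file(const std::string& name,
                            const std::vector<std::string>& dirs)
{
    for (std::vector<std::string>::const_iterator i = dirs.begin();
         i != dirs.end(); ++i)
    {
        std::string candidate = *i + "/" + name;
        std::ifstream in(candidate.c_str());
        if (in)
            return candidate;
    }
    return std::string(); // not found; let the caller report a clear error
}
```

A test would typically pass the source-relative path, the build-relative path, and a $BOOST_ROOT-relative path as candidates.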

On Tue, 22 Mar 2005 13:02:38 -0000, John Maddock <john@johnmaddock.co.uk> wrote:
Who should I work with to get the appropriate tests marked as known-to-fail or N/A? This exercise is necessary for some gcc-on-Solaris tests as well. For instance, all of the "*_lib" tests in Boost.Thread fail because "gcc -static" won't link with *any* shared libs. Solaris does not provide static versions of -lrt or -lthread, so these tests fail to link. There really should be a "-prefer-static" option to gcc. Alternately, the Jamfiles could be hacked to use -Wl,-Bstatic / -Wl,-Bdynamic guards around the inclusion of the Boost.Thread lib for these tests.
That would probably be my preferred option; is it just a case of adding -Wl,-Bdynamic to the very end of the command line? Rene would probably know the best way to fix this.
In my testing, I have found that you can't use "gcc -static" on Solaris if you have *any* shared libs to link with. It overrides even later uses of -Wl,-Bdynamic (in fact, using both -static and -Wl,-Bdynamic causes a linker error - see below). If you want to link with static versions of some libs and dynamic versions of others (as one must in the case of the Boost.Thread tests, which use -lpthread and -lrt, for which no static versions exist), it appears that the only way to do this is to specify the static libs as fully qualified filenames (e.g. /path/to/libfoo.a). See the examples below:

static.c:
---
#include <curses.h>

int main ()
{
    initscr ();
    endwin ();
    return 0;
}
---

Link statically with -lcurses (OK):

% gcc -static -o static static.c -lcurses

Try to link statically with -lcurses and -lrt (fails - no librt.a exists):

% gcc -static -o static static.c -lcurses -lrt
ld: fatal: library -lrt: not found
ld: fatal: File processing errors. No output written to static
collect2: ld returned 1 exit status

Try to link statically with -lcurses and dynamically with -lrt (fails - linker error):

% gcc -static -o static static.c -lcurses -Wl,-Bdynamic -lrt
ld: fatal: option -dn and -Bdynamic are incompatible
ld: fatal: library -lrt: not found
ld: fatal: File processing errors. No output written to static
collect2: ld returned 1 exit status

Try to link statically with -lcurses only (OK):

% gcc -o static static.c -Wl,-Bstatic -lcurses -Wl,-Bdynamic -lrt
% ldd static
        librt.so.1 =>    /usr/lib/librt.so.1
        libc.so.1 =>     /usr/lib/libc.so.1
        libaio.so.1 =>   /usr/lib/libaio.so.1
        libdl.so.1 =>    /usr/lib/libdl.so.1
        /usr/platform/SUNW,Netra-T12/lib/libc_psr.so.1

Also, note that -Wl,-Bdynamic must be in effect at the end of the command line because gcc does not provide a static version of -lgcc_s.
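For completeness, the fully-qualified-filename workaround mentioned above would look something like this (the archive path is illustrative and varies by system):

```shell
% gcc -o static static.c /usr/lib/libcurses.a -lrt
```

This forces libcurses to be linked from the static archive without putting the linker into all-static mode, so -lrt can still resolve to the shared library.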
Not sure why this happens, though, because -static works for this case:

hello.cpp:
---
#include <iostream>

int main ()
{
    std::cout << "Hello World!\n";
    return 0;
}
---

% g++ -o hello hello.cpp -Wl,-Bstatic
ld: fatal: library -lgcc_s: not found
ld: fatal: library -lgcc_s: not found
ld: fatal: File processing errors. No output written to hello
collect2: ld returned 1 exit status

% g++ -o hello hello.cpp -Wl,-Bstatic -Wl,-Bdynamic
% ./hello
Hello World!

% g++ -static -o hello hello.cpp
% ./hello
Hello World!

Not sure what the difference between the -Wl,-Bstatic and -static cases is here. -- Caleb Epstein caleb dot epstein at gmail dot com

In my testing, I have found that you can't use "gcc -static" on Solaris if you have *any* shared libs to link with. It overrides even later uses of -Wl,-Bdynamic (in fact, using both -static and -Wl,-Bdynamic causes a linker error - see below).
Darn.
If you want to link with static versions of some libs and dynamic versions of others (as one must in the case of the Boost.Thread tests, which use -lpthread and -lrt, for which no static versions exist), it appears that the only way to do this is to specify the static libs as fully qualified filenames (e.g. /path/to/libfoo.a). See the examples below:
Double darn :-( I guess there's no easy way to fix this then, pity. Thanks for trying, John.

Hi, On Mar 22, 2005, at 6:11 PM, Caleb Epstein wrote:
On Tue, 22 Mar 2005 13:02:38 -0000, John Maddock <john@johnmaddock.co.uk> wrote:
Who should I work with to get the appropriate tests marked as known-to-fail or N/A? This exercise is necessary for some gcc-on-Solaris tests as well. For instance, all of the "*_lib" tests in Boost.Thread fail because "gcc -static" won't link with *any* shared libs. Solaris does not provide static versions of -lrt or -lthread, so these tests fail to link. There really should be a "-prefer-static" option to gcc. Alternately, the Jamfiles could be hacked to use -Wl,-Bstatic / -Wl,-Bdynamic guards around the inclusion of the Boost.Thread lib for these tests.
That would probably be my preferred option; is it just a case of adding -Wl,-Bdynamic to the very end of the command line? Rene would probably know the best way to fix this.
In my testing, I have found that you can't use "gcc -static" on Solaris if you have *any* shared libs to link with. It overrides even later uses of -Wl,-Bdynamic (in fact using both -static and -Wl,-Bdynamic causes a linker error - see below).
This might also be even more confusing on Solaris 10 - see this blog entry: http://tinyurl.com/5sqxc for details. [snip rest of message] Michael

Caleb Epstein writes:
Who should I work with to get the appropriate tests marked as known-to-fail or N/A?
The markup, split by library, is here: http://cvs.sourceforge.net/viewcvs.py/boost/boost/status/explicit-failures-m... Please see http://article.gmane.org/gmane.comp.lib.boost.devel/110725, http://article.gmane.org/gmane.comp.lib.boost.devel/116415, and the file's content for an explanation/examples of the markup notation. -- Aleksey Gurtovoy MetaCommunications Engineering
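For reference, an entry in the explicit-failures-markup file looks roughly like the sketch below. The library name, test name, and note text here are illustrative, not actual entries from the file; see the linked articles for the authoritative notation:

```xml
<library name="thread">
  <!-- The compiler cannot build the library at all: mark every test n/a. -->
  <mark-unusable>
    <toolset name="sunpro"/>
  </mark-unusable>
  <!-- A specific test is a known failure on a specific toolset. -->
  <mark-expected-failures>
    <test name="thread_group_lib"/>
    <toolset name="gcc"/>
    <note author="Caleb Epstein">
      "gcc -static" cannot link with the shared-only -lrt and -lthread
      on Solaris.
    </note>
  </mark-expected-failures>
</library>
```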
participants (6):
- Aleksey Gurtovoy
- Caleb Epstein
- Gennadiy Rozental
- John Maddock
- Michael van der Westhuizen
- Peter Dimov