
On Mon, 21 Mar 2005 10:00:45 -0600, Aleksey Gurtovoy <agurtovoy@meta-comm.com> wrote:
> John Maddock writes:
>>> I stopped running them last Monday (3/14). The general consensus seemed to be that the number of failures was so high that running the tests was a waste of time. This is with SunPRO 5.3, which is the only version I have access to.
>> Pity; I used to check those results for Regex and Type Traits compatibility. One of the main annoyances was dependent libs like Boost.Test and Program Options not compiling, which caused some tests to fail unnecessarily; other than that, many libs really should be supported on that platform.
> And the rest can be marked as "n/a", thus giving us a usable picture for tracking new failures / regressions.
OK, I'll start running them again tomorrow. It's just a matter of uncommenting a line in a shell script, no biggie. FWIW, there had been numerous comments to the effect of "why do we even bother with this compiler?", and no one ever responded to my numerous threats to stop running the regression tests with SunPRO, so I just stopped them last week to save cycles. I didn't think they would be missed.

Who should I work with to get the appropriate tests marked as known-to-fail or N/A? This exercise is necessary for some gcc-on-Solaris tests as well. For instance, all of the "*_lib" tests in Boost.Thread fail because "gcc -static" won't link with *any* shared libs, and Solaris does not provide static versions of -lrt or -lthread, so these tests fail to link. There really should be a "-prefer-static" option to gcc. Alternately, the Jamfiles could be hacked to use -Wl,-Bstatic / -Wl,-Bdynamic guards around the inclusion of the Boost.Thread lib for these tests.

-- 
Caleb Epstein
caleb dot epstein at gmail dot com
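
[Editor's note: the -Wl,-Bstatic / -Wl,-Bdynamic guard described above can be sketched as a link line like the following. The library name (boost_thread) and paths are illustrative only; actual Boost library names at the time carried toolset/version decorations.]

    # Instead of a global "gcc -static" (which fails on Solaris because
    # no static librt/libthread exist), wrap only the Boost.Thread lib:
    # the GNU linker resolves libs between -Bstatic and -Bdynamic from
    # .a archives only, then reverts to shared libs for what follows.
    g++ -o thread_test thread_test.o \
        -L/usr/local/lib \
        -Wl,-Bstatic -lboost_thread -Wl,-Bdynamic \
        -lrt -lthread

Note that -Bdynamic must be restored after the guarded lib, or every subsequent library (including -lrt and -lthread) would also be forced static and the link would fail for the same reason as before.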