[build] Regression tests failing across the board

The build regression tests are still failing across the board:

http://beta.boost.org/development/tests/trunk/developer/summary.html
http://beta.boost.org/development/tests/release/developer/summary.html

This is not acceptable. Having tests that always fail requires viewers to make an unwanted mental exception and sends the wrong message to other developers.

The tests should either be fixed or marked up. If they aren't really tests at all, but are just informational like some in the config library, they should be changed to return 0 and "<test-info>always_show_run_output" should be added to the Jamfile.

If the build test failures can't be cleared from the reporting, the tests themselves should be removed.

--Beman
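For reference, the Jamfile markup Beman describes might look roughly like the sketch below. The target source `config_info.cpp` is a hypothetical name; only the `run` rule of the Boost.Build testing module and the `<test-info>always_show_run_output` requirement come from the thread. The program itself would simply print its information and `return 0`, so it never registers as a failure:

```jam
# Sketch of a Jamfile.v2 entry for an informational "test" (assumed
# file name; the <test-info> requirement is the one quoted above).
import testing ;

# config_info.cpp prints configuration details and returns 0; the
# requirement below makes its output appear in the regression
# reports even though the run always succeeds.
run config_info.cpp
    :   # command-line arguments
    :   # input files
    :   <test-info>always_show_run_output
    ;
```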

Beman Dawes wrote:
The build regression tests are still failing across the board.
http://beta.boost.org/development/tests/trunk/developer/summary.html http://beta.boost.org/development/tests/release/developer/summary.html
This is not acceptable. Having tests that always fail requires viewers to make an unwanted mental exception and sends the wrong message to other developers.
The tests should either be fixed or marked up. If they aren't really tests at all, but are just informational like some in the config library, they should be changed to return 0 and "<test-info>always_show_run_output" should be added to the Jamfile.
If the build test failures can't be cleared from the reporting, the tests themselves should be removed.
I'll work on some cleanup -- presumably, only running the tests for runners that are known to have no configuration issues.

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

On Thu, Feb 3, 2011 at 6:57 AM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Beman Dawes wrote:
... If the build test failures can't be cleared from the reporting, the tests themselves should be removed.
I'll work on some cleanup -- presumably, only running the tests for runners that are known to have no configuration issues.
Out of curiosity, what are the main configuration issues? Can they be detected and turned into a warning rather than a failure?

--Beman

Beman Dawes wrote:
On Thu, Feb 3, 2011 at 6:57 AM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Beman Dawes wrote:
... If the build test failures can't be cleared from the reporting, the tests themselves should be removed.
I'll work on some cleanup -- presumably, only running the tests for runners that are known to have no configuration issues.
Out of curiosity, what are the main configuration issues? Can they be detected and turned into a warning rather than a failure?
This, unfortunately, requires investigation. Basically, there are:

1. One test that fails everywhere. This one I'll just fix.

2. Two or three tests that fail for some testers. These are something I'll look into.

3. Several test runners where most of the tests fail, with error messages suggesting that the compiler cannot be invoked (whereas it is invoked just fine for library tests).

The information in the regression table is not particularly helpful, so I either have to make the tests report more information, adjust the regression reporting, contact the runners, or just suppress testing.

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40
participants (2)

- Beman Dawes
- Vladimir Prus