[testing] How not to mark up explicit failures

https://svn.boost.org/trac/boost/ticket/2258 describes a problem with explicit failure markup that I suspect is not atypical. I believe we need to establish some ground rules for markup that ensure

* Boost problems don't pass unnoticed
* there is a uniform standard for interpreting test results across libraries
* "non-Boost problems" and non-problems don't set off alarm bells for people reading test results

You can view the markup in question here:

https://svn.boost.org/trac/boost/browser/trunk/status/explicit-failures-mark...

Aside from the specifics in the ticket, the following problems jump out at me just from looking at that XML:

* even on compilers that do support nonstandard calling conventions, failures in these tests will never be flagged as problematic
* even on compilers that don't support nonstandard calling conventions, these tests will fail and add useless noise to the chart

I can think of a few general principles and practices that would prevent such problems:

* Boost regression tests should test the functionality of code that is expected to work, not the capabilities of the platform or C++ implementation. There is a place for capability testing, but it needs to be segregated from the regular regression tests somehow. If/when we move to using CMake globally, capability tests should take place in its configure stage (see the first sketch in the P.S.). In the meantime, such tests must be kept out of Boost regression testing.

* Tests that do not apply to a given compiler or platform should ideally not be run on that compiler.

* If we can't find a way to avoid running such a test at all, the test should be written to trivially succeed on that compiler or platform (see the second sketch in the P.S.).

* Test annotation wildcards should be written restrictively, so that they match only the compilers/platforms to which they apply (see the third sketch in the P.S.).

This is probably just a start; I'd like to hear other ideas. I intend for this thread to result in an official Boost policy for test annotation.

Thanks,

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
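P.S. A few sketches to make the above concrete. First, the configure-stage capability probe. CHECK_CXX_SOURCE_COMPILES is standard CMake, but the result variable name here is made up, and this is only one way the probe could be spelled:

    include(CheckCXXSourceCompiles)

    # Probe the toolchain once, at configure time, instead of letting a
    # capability test fail in every regression run.
    check_cxx_source_compiles("
        void __fastcall f(int) {}
        int main() { f(0); return 0; }
    " BOOST_HAS_FASTCALL)   # hypothetical result variable

    if(NOT BOOST_HAS_FASTCALL)
        # exclude the calling-convention tests from the build entirely
    endif()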
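Second, a test that trivially succeeds where it doesn't apply. This is a minimal sketch; the file name is invented, and _MSC_VER merely stands in for whatever capability check is actually appropriate:

    // fastcall_test.cpp (hypothetical name)
    #if defined(_MSC_VER)
    // This compiler supports __fastcall, so actually exercise it.
    void __fastcall f(int) {}
    int main() { f(0); return 0; }
    #else
    // The calling convention doesn't exist here; succeed trivially
    // rather than adding noise to the chart.
    int main() { return 0; }
    #endif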
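Third, restrictive markup. The library and test names below are invented; the point is that each toolset element names a specific compiler range instead of a catch-all wildcard, so the markup can't silently swallow failures on toolsets where the tests are expected to work:

    <library name="some_library">   <!-- hypothetical -->
      <mark-expected-failures>
        <test name="fastcall_test"/>
        <toolset name="gcc-3.4*"/>  <!-- only the toolsets it applies to -->
        <toolset name="gcc-4.0*"/>
        <note author="..." refid="..."/>
      </mark-expected-failures>
    </library>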
participants (1)
David Abrahams