
Hmmm - I'm not sure if I'm an offender here. For the HEAD (aka Boost 1.35) I replaced the usage of Boost Test with boost/detail/lightweight_test.hpp. I found it to be:

a) sufficient for my needs
b) easier to use
c) not requiring the build of another library - one which is simultaneously and continuously being enhanced. Sometimes test failures were artifacts of the test system itself, and lots of time was spent distinguishing those from real failures.
d) usable with older compilers. I believe that starting with Boost 1.35, support for older, less conforming compilers will no longer be available, and it was convenient for me to be able to test on compilers that the serialization library can work with but the test system can't.

When I switched over to boost/detail/lightweight_test.hpp and re-compiled, I found that a number of test macros weren't supported in the new system. In some cases I altered my tests to use supported macros, and in other cases I implemented the missing macros in the common header that all serialization tests include. I used the same names that Boost Test uses, which would permit me to switch back to Boost Test should I later decide to do that. It's possible that this might have created some confusion - or maybe not.

Which brings up a larger issue. It's very time consuming to be simultaneously testing and developing all of Boost at once. I'm not referring to actually running the tests - though that is a problem. I'm referring to the fact that, although I don't change anything in my code, I can all of a sudden get a raft of new "regressions" because of a change or error in some other library. Then I have to spend time finding the source of the new error, which could be anywhere.

In the future I plan to do the following in my own environment: run all serialization tests against the latest official Boost release. When all tests pass I will:

a) check the batch of changes into the HEAD
b) make available a snapshot of the files altered since the latest official Boost release on my website.
This will permit those who can't or don't want to wait for the next official release to use the latest fixes/enhancements with the confidence that they have been tested on at least a few compilers.

Robert Ramey

David Abrahams wrote:
I just did a sweep through all our code and found and fixed lots of issues related to the use of the BOOST_TEST macro, all of which boiled down to the fact that it is possible to use it and never call boost::report_errors(). In several places it had been used as a drop-in replacement for assert(), which it isn't, because failure to call report_errors() will allow test failures to pass silently. It was also used in a couple of places in lieu of BOOST_CHECK, i.e. the author was obviously trying to use Boost.Test but got the wrong macro name.
I suggest we do something to make it harder to make that mistake. The only approach I can think of is to create an object at namespace scope whose destructor checks to see if we're exiting with a nonzero error count and without having called report_errors. Or maybe it should just do something like:
namespace boost { namespace detail {

template <bool = false>
struct lightweight_error_reporter
{
    ~lightweight_error_reporter()
    {
        if (boost::report_errors())
        {
            abort();
        }
    }

    static lightweight_error_reporter instance;
};

}} // namespace boost::detail
Whether abort() or exit(1) or throw xxx is more appropriate there I don't know (haven't done the research).
Thoughts?