
On Wed, Jan 21, 2015 at 12:11 PM, Andrzej Krzemienski wrote:
Hi Everyone, I need advice from the experts on the recommended way to handle the following situation with regression tests.
A number of tests of Boost.Optional fail. These are failures in optional<T&> (optional references), because quite a lot of compilers implement reference binding in a non-standard-conforming manner. The question is: how should these test failures be reflected in the regression testing configuration, so that they do not appear as bugs in the implementation of Boost.Optional?
I don't think there is an official guideline about this. The traditional way of handling it is to mark the tests as expected failures with an appropriate comment. Since you can't do that, your only option is to leave them as they are, I suppose. At least everyone will be able to see that a certain compiler does not handle the cases covered by those tests.

Using Boost.Config-based checks to run the tests serves, I think, a different purpose. I haven't used it in my own tests, but my understanding is that it lets you set preconditions for a test to build and run. If those preconditions are not met, the test is not built (i.e. it is excluded from the matrix) or can be built differently. It is more like a configure script, which adjusts the build depending on what the compiler supports. I believe Boost.Config provides only a limited set of checks, roughly corresponding to the features it provides macros for. Of course, you can write your own checks, similar to those of Boost.Config, that probe the compiler for the features you need; that will require a certain amount of Boost.Build knowledge.

Whichever approach you choose, it is a good idea to note in the docs which compilers don't support optional references, if that is not done yet.
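For what it's worth, such a configure-style check could look roughly like the sketch below (untested; the probe file and target names are made up, and you would have to write the probe translation unit so that it only compiles on compilers with the reference-binding behaviour you need):

    # Jamfile sketch -- illustrative only.
    import testing ;

    # Hypothetical probe: a small source file that compiles only when the
    # compiler binds references in a conforming way.
    obj conforming_ref_binding : check/conforming_ref_binding.cpp ;
    explicit conforming_ref_binding ;

    run optional_test_ref.cpp
        : : :   # args, input files
          [ check-target-builds conforming_ref_binding
                "conforming reference binding"
              :               # probe builds: run the test as usual
              : <build>no ]   # probe fails: exclude the test from the build
        ;

With something like this, on the non-conforming compilers the test simply disappears from the matrix instead of showing up as a failure.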