Hi Everyone,
I need advice from the experts on the recommended way to handle the
following situation with regression tests.
A number of Boost.Optional tests fail (see here:
http://www.boost.org/development/tests/develop/developer/optional.html).
These are failures in optional references (optional<T&>), which occur
because quite a few compilers implement reference binding in a
non-standard-conforming manner. (For details of these bugs, see here:
http://www.boost.org/doc/libs/develop/libs/optional/doc/html/boost_optional/...
).
The question is: how should these test failures be reflected in the
regression-testing configuration, so that they do not appear as bugs in the
implementation of Boost.Optional?
1. I hear that marking up explicit failures is not recommended, and that
build configuration based on Boost.Config should be preferred. In any case,
markup will not work here, because I cannot decide from the toolset name
alone whether to mark an explicit failure or not.
2. On the other hand, I hear that Boost.Config should not be used to detect
a compiler bug that is relevant only to a single Boost library.
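For reference, the Boost.Config approach from point 1 typically looks like
the sketch below. BOOST_NO_CXX11_RVALUE_REFERENCES is a real Boost.Config
defect macro, used here only to show the pattern; no existing macro covers
the reference-binding bugs in question, which is the crux of my problem:

```cpp
// Sketch of guarding a test via a Boost.Config defect macro.
#include <boost/config.hpp>

#ifndef BOOST_NO_CXX11_RVALUE_REFERENCES
// ... tests that require conforming rvalue-reference support ...
#endif
```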
These two pieces of advice appear contradictory in my case. I would like to
abide by Boost's recommendations, but here I appear to be stuck.
Is there any recommendation for Boost developers in such a case?
Regards,
&rzej