
"Dave Steffen" <dgsteffen@numerica.us> wrote in message news:17874.2615.76539.30581@yttrium.numerica.us...
BOOST_CHECK(!sameobject.doesntwork());
Well, that looks like any other test assertion, and results in a pass or fail. What we're after here is something different: we're distinguishing between two different kinds of failures, and we want them reported as such. In contrast, what you've got above turns an "expected failure" into a "pass", which isn't what we want.
This is really the same as what the "expected failure" feature does: it "temporarily" shuts up a failing test case.
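For reference, here is roughly how the feature is used with the manual registration interface; the failing body below is just a stand-in, and the suite name is made up:

#include <boost/test/unit_test.hpp>
using namespace boost::unit_test;

// stand-in for the failing assertion under discussion
void known_bad()
{
    BOOST_CHECK( 1 + 1 == 3 );
}

test_suite* init_unit_test_suite( int, char*[] )
{
    test_suite* ts = BOOST_TEST_SUITE( "expected failure demo" );
    // second argument to add(): the number of assertion failures this
    // test case is expected to produce; a matching failure is reported
    // as an expected failure rather than as an error
    ts->add( BOOST_TEST_CASE( &known_bad ), 1 );
    return ts;
}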
There is a larger question that is probably better directed at Gennadiy, since he wrote the library: why support expected failures at all? What do "expected failures" mean? What's the use case? This is, however, a different discussion (one we can have if people are interested).
IMO it should be used primarily in one of two ways: as a temporary solution, when you need to clean up your regression test charts before a release and don't have time to fix the failing assertion, or as a portability tool, when a particular assertion is expected to fail under some of the configurations you test against. There may be other uses, but it shouldn't be overused.
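For the portability case, the expected-failure count can be made conditional on the configuration. A rough sketch (the configuration macro, test name, and check are made up for illustration):

#include <boost/test/unit_test.hpp>
#include <string>
using namespace boost::unit_test;

void locale_roundtrip()
{
    // stand-in for a check known to fail on one platform only
    BOOST_CHECK( std::string( "a" ) == "a" );
}

test_suite* init_unit_test_suite( int, char*[] )
{
    test_suite* ts = BOOST_TEST_SUITE( "portability demo" );
#ifdef BROKEN_PLATFORM  // made-up configuration macro
    // on the known-bad configuration, one failure is expected
    ts->add( BOOST_TEST_CASE( &locale_roundtrip ), 1 );
#else
    ts->add( BOOST_TEST_CASE( &locale_roundtrip ) );
#endif
    return ts;
}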
But the point is that failures and expected failures are reported differently. There are, in effect, three possible outcomes of a test: pass, fail, and fail (but we expected it to).
What I'm asking for is that the "expected failure" notion be specified not at the test case level but at the test assertion level, while keeping the same reporting scheme as is currently in use.
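In code, the idea might look something like the following. This is a pure sketch: no such macro exists in Boost.Test, and it only approximates the third reporting category with a log message via BOOST_TEST_MESSAGE (spelled BOOST_MESSAGE in older releases); a real implementation would need to hook into the framework's expected-failure accounting.

#include <boost/test/unit_test.hpp>

// sketch only -- not part of Boost.Test; treating an unexpected pass
// as an error is just one possible design choice
#define BOOST_CHECK_EXPECTED_FAILURE( pred )                               \
    do {                                                                   \
        if( (pred) )                                                       \
            BOOST_ERROR( #pred " passed, though marked as an expected failure" ); \
        else                                                               \
            BOOST_TEST_MESSAGE( #pred " failed (expected failure)" );      \
    } while( 0 )

The earlier example would then read:

BOOST_CHECK_EXPECTED_FAILURE( sameobject.doesntwork() );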
I don't really see a big advantage over just commenting out the line in question.

Gennadiy