[test] How to structure tests where compilation should fail?

Naturally, for compile-time units there are some cases where compile-time failure is the correct behavior - how should I structure tests so that this is recognized? Presumably I need to have a separate test source file for each expected compilation error, right?

Matthias

Matthias Schabel wrote:
Naturally, for compile-time units there are some cases where compile-time failure is the correct behavior - how should I structure tests so that this is recognized? Presumably I need to have a separate test source file for each expected compilation error, right?
Hi Matthias,

I'm using the following little program:

$ cat failcheck.cpp
...
int main() {
    // Stuff that needs to compile
    // or declarations needed
    ...
    // Errors to be detected, one in each clause
#ifdef ERROR_A01 // no matching function for call
    ...
#elif ERROR_A02 // constructor is private
    ...
#elif ERROR_A03 // <whatever, comment will be printed during test>
    ...
#endif
}
$

Then the following little script:

$ cat failcheck.sh
echo
echo "trying without error emulation"
g++ -Wall -DDEBUG -o failcheck failcheck.cpp || {
    echo; echo "*** ERROR: compilation didn't succeed at all ***"; echo;
    kill $$; exit 1;
}
cat failcheck.cpp \
    | grep -e '^#ifdef' -e '^#elif' \
    | awk '$2~/ERROR/{$1=""; print}' \
    | while read label comment
do
    echo
    echo "trying $label: $comment"
    g++ -Wall -D${label} -DDEBUG -o failcheck failcheck.cpp && {
        echo; echo "*** ERROR: compilation succeeded for $label ***"; echo;
        kill $$; exit 1;
    }
done
echo
echo "*** FAILCHECK OK ***"
echo
$

The script scans the sample programme for test cases and tries to compile it with one test activated at a time. If a compilation succeeds, the test is aborted with an error message. (Note the kill command! Piping into a while loop causes the loop body to be executed in a subshell, so just exiting wouldn't be enough; you need to kill the parent process.)

You don't need a different source file for each error, just a different clause. Make sure, however, that there's only one error per clause and that each label occurs only once. (The script should probably check this, maybe in the next version :-)

Andreas

Thanks for the script, Andreas. I'll give it a spin - it looks like just what I was hoping for... I'm a little surprised that there isn't support for this built into Boost.Test.

Matthias

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
Thanks for the script, Andreas. I'll give it a spin - it looks like just what I was hoping for... I'm a little surprised that there isn't support for this built into Boost.Test.
Matthias
There is.

import testing ;
{
    test-suite units :
        [ compile-fail /files/ : /options/ : ]
        ;
}

In Christ,
Steven Watanabe
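For context, a fuller Jamfile using the Boost.Build testing rules might look like the following sketch. The file names are placeholders, not real test files:

```
# Sketch of a Jamfile for Boost.Build's testing module; file names
# are illustrative placeholders.
import testing ;

test-suite units
    : [ compile units_ok.cpp ]            # must compile
      [ compile-fail mixed_add_fail.cpp ] # must FAIL to compile
    ;
```

A compile-fail target passes exactly when the compiler rejects the source, which is the behavior Matthias is after.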

Matthias Schabel wrote:
I'm a little surprised that there isn't support for this built into Boost.Test.
Well, not really. Typically people try hard to make their code compile, not to fail ;-)

By the way, the script comes from a library I wrote about two years ago that implements orientational analysis and a vector class based upon it. I think both libraries would fit nicely together, and I hope to have it ready to present here soon. It just needs a bit more struggling with the documentation...

Andreas

Matthias Schabel wrote:
I'm a little surprised that there isn't support for this built into Boost.Test.
Well, not really. Typically people try hard to make their code compile, not to fail ;-)
That depends, of course, on how much metaprogramming you're doing. In order to test the compile-time dimensional analysis code, we need to be able to ensure that a wide range of cases that should be prevented at compile time do, in fact, fail. Many, many cases...

Matthias

One way to do that is to use some template metaprogramming to make the test, in fact, succeed but with a different outcome (using SFINAE). For instance, how do you check that A is *not* convertible to B? You can write the conversion, but if A is not convertible the compilation will fail, whereas you would like to get the value 0 (or mpl::false_, rather). This is done using SFINAE, and there are various ways of doing it; check the relevant Boost libraries for implementations.

Granted, the way to do it depends on your code, you have to write a small amount of code every time, it's highly non-trivial, and perhaps not for everyone :) So simply checking for expected failures is probably much easier!

--
Herve

On Feb 28, 2007, at 6:34 PM, Matthias Schabel wrote:
Naturally, for compile-time units there are some cases where compile-time failure is the correct behavior - how should I structure tests so that this is recognized? Presumably I need to have a separate test source file for each expected compilation error, right?
Matthias

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

"Matthias Schabel" <boost@schabel-family.org> wrote in message news:F6F8940E-6622-4D6D-854B-A02553BD494F@schabel-family.org...
Naturally, for compile-time units there are some cases where compile-time failure is the correct behavior - how should I structure tests so that this is recognized? Presumably I need to have a separate test source file for each expected compilation error, right?
Right. Plus you need an outside regression-testing facility that supports checks for expected compilation failures. The Boost.Build system does support this, as was mentioned in another post.

Gennadiy
participants (5)
-
Andreas Harnack
-
Gennadiy Rozental
-
Hervé Brönnimann
-
Matthias Schabel
-
Steven Watanabe