[Test] Incorrect assertion count/Warning when tests do not call one of the Test Tools?
I was just wondering if there is an option to generate an error/warning if a test function called by the test library does no 'asserts' (or any of the BOOST_* test tools). I know there is the option to output the number of assertions and the number of tests, but as we don't always have the one-assert-per-test style of testing here, we can't use that directly to catch tests that do no testing. The detailed output level in this case does something like this:

    Running 5 test cases...
    unknown location(0): fatal error in "read_one_line_file": std::runtime_error: not implemented

    Test suite "Master Test Suite" failed with:
      8 assertions out of 8 passed
      4 test cases out of 5 passed
      1 test case out of 5 failed

      Test case "construct_default_object" passed with:
        1 assertion out of 1 passed
      Test case "no_such_file" passed with:
        2 assertions out of 2 passed
      Test case "blank_filename" passed with:
        2 assertions out of 2 passed
      Test case "read_empty_file" passed with:
        2 assertions out of 2 passed
      Test case "read_one_line_file" aborted with:
        1 assertion out of 1 passed

The problem comes from (sample):

    BOOST_AUTO_TEST_CASE( construct_default_object )
    {
        CSVFileReader defaultObject;
        boost::ignore_unused_variable_warning(defaultObject);
    }

I'd like to say I don't see any asserts here, and would like a warning, or at least a note that I did not call an assert. In fact it looks like BOOST_AUTO_TEST_CASE() is adding an additional test to all of my test cases. Is this by design?

Thanks

Kevin

--
| Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this      |
| Senior Technology                     | My employer for certain |
| And Network Systems Architect         | Not even myself         |
"Kevin Wheatley"
I was just wondering if there is an option to generate an error/warning if a test function called by the test library does no 'asserts' (or any of the BOOST_* test tools).
There is not. I could consider adding this feature post-release. Why do you need this?
>     BOOST_AUTO_TEST_CASE( construct_default_object )
>     {
>         CSVFileReader defaultObject;
>         boost::ignore_unused_variable_warning(defaultObject);
>     }
>
> I'd like to say I don't see any asserts here, and would like a warning, or at least a note that I did not call an assert. In fact it looks like BOOST_AUTO_TEST_CASE() is adding an additional test to all of my test cases. Is this by design?
This is not the case anymore.

Gennadiy
Gennadiy Rozental wrote:
"Kevin Wheatley"
wrote in message news:44285016.667649CC@cinesite.co.uk... I was just wondering if there is an option to generate an error/warning if a test function called by the test library does no 'asserts' (or any of the BOOST_* test tools).
There is not. I could consider addiing this feature post release. Why do you need this?
Think of it like a coding guideline aid: it would allow me to use the short-style output, thus not overloading the user with information, while still letting me easily see that I have a test that isn't actually ever going to 'fail' in an expected way. It suggests either a test that can be removed, or one that needs rewriting. Ideally I'd have some environment variable to turn the warning/error on. I guess some people (me) would want it to be a failure of the test, and thus give me the red bar... others might want to ignore it.

Thanks

Kevin

--
| Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this      |
| Senior Technology                     | My employer for certain |
| And Network Systems Architect         | Not even myself         |
"Kevin Wheatley"
Gennadiy Rozental wrote:
"Kevin Wheatley"
wrote in message news:44285016.667649CC@cinesite.co.uk... I was just wondering if there is an option to generate an error/warning if a test function called by the test library does no 'asserts' (or any of the BOOST_* test tools).
There is not. I could consider addiing this feature post release. Why do you need this?
think of it like a coding guidline aid, it would allow me to have a short style ouput, thus not overloading the user with information, but allows me to easily see that I have a test that isn't actually ever going to 'fail' in an expected way. It suggests either a test that can be removed, or needs rewriting. Ideally I'd have some environment variable to turn on the warning/error, I guess some people would want it to be a failure of the test (me) and thus give me the red bar... others might want to ignore it
OK. I will consider adding a warning. You will have to set up the log level properly to see it, though.

Gennadiy
Gennadiy Rozental wrote:
> OK. I will consider adding a warning. You will have to set up the log level properly to see it, though.
Gennadiy, thanks for looking into this. It is not a problem to have to ask for the feature, as long as I can set it like the others, via an environment variable/command-line option. Like I say, I'd expect it at the 'short' and 'detailed' levels, and for me it would indicate a failing test.

Kevin

--
| Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this      |
| Senior Technology                     | My employer for certain |
| And Network Systems Architect         | Not even myself         |
participants (2)
- Gennadiy Rozental
- Kevin Wheatley