
Maria Kozinska wrote:
Thanks for the answer.
On 4 June 2010 00:27, Steven Watanabe wrote:
AMDG
Maria Kozinska wrote:
I tried to write a test suite with one expected failure; however, it is not working.
Apparently expected failures do not work properly with test suites. The issue is that the reporting function validates both expected failures AND failed test cases within the test unit. In your case at least one test case will always fail, and we have no way to say that one test case is expected to fail. I guess I could skip the test-case check, but then you may get weird false positives, with the test suite passing when you actually expected it to fail. For example, you expected 2 failures in 1 test case, but you got them in 2 different test cases. Another option is to treat "expected failures" for a test suite as the expected number of failed test cases, but this really changes the semantics and thus is not a backward-compatible change. I am not sure anyone uses the current semantics (though they are obviously broken and, from what I can tell, never worked).
The idea is that I have a function that can be implemented in a few ways, so there are a few possible correct results. Depending on the implementation, I want to test it with a different set of test cases. I don't want to check at the beginning of each test case which option is valid; I would prefer to skip the unnecessary test cases. Is that possible?
Steven Watanabe wrote:
In init_unit_test_suite, can you try only adding the test cases that you need?

Maria Kozinska wrote:
I have to add all of them, because at compile time I don't yet know which ones should be executed. That is ensured at runtime by dependencies, as with test1_1, which is executed only if option1 succeeded (and skipped otherwise).
Why don't you put everything in a single test case and switch based on the value of xxx to do one set of tests or another?

Gennadiy