Regression testing unobservable behavior?
Certain library function errors are difficult to regression test because it is not obvious how to observe their behavior in a test framework. Examples include:

1) Selection of the correct overload or template specialization when the only difference in effect is a performance enhancement.
2) Verification that BOOST_ASSERT fires when a precondition is violated.
3) Verification that I/O and other operating system API errors are handled correctly.
4) More generally, something occurs in a function under test that is hard to observe but needs to be verified.

How do other Boost developers address these regression testing needs?

It seems to me all of these examples can be solved by creating some form of back channel (or channels) to pass information between caller and callee. All require getting additional information from a function after it is called; (3) also requires passing additional information to a function when it is called.

An ideal solution:

* Works in the existing Boost regression test framework. So much the better if it works in other test frameworks too.
* Does not violate the ODR or depend on undefined behavior.
* Does not alter the interface of the function under test.
* Does not generate any code except when the function is actually being tested.
* Does not require that each test case be run as a separate program.
* Works even if the function under test is noexcept.
* Is lightweight, utterly reliable, and has few if any dependencies beyond those already present.

--Beman
Beman Dawes wrote:
Certain library function errors are difficult to regression test because it is not obvious how to observe their behavior in a test framework. Examples include:
...
2) Verification that BOOST_ASSERT fires when a precondition is violated.
You can do this by enabling a custom assert handler and then testing that it has been invoked.
On Fri, Aug 8, 2014 at 9:53 AM, Peter Dimov wrote:
Beman Dawes wrote:
Certain library function errors are difficult to regression test because
it is not obvious how to observe their behavior in a test framework. Examples include:
...
2) Verification that BOOST_ASSERT fires when a precondition is violated.
You can do this by enabling a custom assert handler and then testing that it has been invoked.
Yes, but the question in my mind is how to "test that it has been invoked"? That involves some mutually agreed upon way for the handler to inform the test framework that it has been invoked, and possibly to pass along the other information available to it. Is there a packaged way to handle that communication, or at least a pattern for such back-channel communication? Or do I have to figure it out from scratch every time the need arises?

Even if the other examples have to be treated individually, perhaps we could come up with an optional BOOST_ASSERT handler and package it in boost/assert.hpp so anyone could easily use it.

Thanks,

--Beman
Beman Dawes wrote:
On Fri, Aug 8, 2014 at 9:53 AM, Peter Dimov wrote:
Beman Dawes wrote:
Certain library function errors are difficult to regression test because it is not obvious how to observe their behavior in a test framework. Examples include:
...
2) Verification that BOOST_ASSERT fires when a precondition is violated.
You can do this by enabling a custom assert handler and then testing that it has been invoked.
Yes, but the question in my mind is how to "test that it has been invoked"?
I just do the obvious: increment an "assertion_failed_" global variable in the handler, then BOOST_TEST that it has been incremented the correct number of times. It's a bit ad-hoc, but it works.
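In code, a minimal sketch of that counter approach, using boost/core/lightweight_test.hpp; the function under test, f, and its precondition are invented for illustration, and BOOST_ENABLE_ASSERT_HANDLER would normally be defined for the whole test target rather than in the source file:

    // Sketch of the counter-based back channel described above.
    // BOOST_ENABLE_ASSERT_HANDLER makes BOOST_ASSERT call the user-supplied
    // boost::assertion_failed instead of the standard assert machinery.
    #define BOOST_ENABLE_ASSERT_HANDLER

    #include <boost/assert.hpp>
    #include <boost/core/lightweight_test.hpp>

    static int assertion_failed_ = 0;

    namespace boost
    {
        void assertion_failed( char const * /*expr*/, char const * /*function*/,
            char const * /*file*/, long /*line*/ )
        {
            ++assertion_failed_;  // record the failure; the caller continues
        }
    }

    // hypothetical function under test with a precondition
    void f( int x )
    {
        BOOST_ASSERT( x >= 0 );
    }

    int main()
    {
        f( 1 );
        BOOST_TEST_EQ( assertion_failed_, 0 );  // precondition held, no firing

        f( -1 );
        BOOST_TEST_EQ( assertion_failed_, 1 );  // handler fired exactly once

        return boost::report_errors();
    }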
I wrote:
I just do the obvious: increment an "assertion_failed_" global variable in the handler, then BOOST_TEST that it has been incremented the correct number of times.
Of course this only works when the code can continue after a failed assertion without crashing. :-) So another option is to throw from the handler. This, in turn, only works if stack unwinding after a failed assertion doesn't crash. But that's pretty rare. And I'm not sure that we can do any better.
On Fri, Aug 8, 2014 at 10:39 AM, Beman Dawes wrote:
On Fri, Aug 8, 2014 at 9:53 AM, Peter Dimov wrote:
Beman Dawes wrote:
2) Verification that BOOST_ASSERT fires when a precondition is violated.
You can do this by enabling a custom assert handler and then testing that it has been invoked.
Yes, but the question in my mind is how to "test that it has been invoked"?
That involves some mutually agreed upon way for the handler to inform the test framework that it has been invoked, and possibly to pass along the other information available to it. Is there a packaged way to handle that communication, or at least a pattern for such back-channel communication? Or do I have to figure it out from scratch every time the need arises?
I can't help thinking that coroutines might be helpful for this back-channel communication. Minimally, one could launch the test case on an asymmetric coroutine, establishing a BOOST_ASSERT handler that passes control back to the main test logic.

For bullet 3, it would be great to be able to intercept control at the API level, pass control back to the main test logic, and allow it the opportunity to inject errors (or data) to be verified. That might require a pair of symmetric coroutines. But to me, the harder part is the dependency injection.
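For what it's worth, a rough sketch of the minimal variant, assuming Boost.Coroutine's asymmetric_coroutine (which needs linking against boost_coroutine and boost_context); the handler, the channel plumbing, and the function under test f are all invented for illustration:

    #define BOOST_ENABLE_ASSERT_HANDLER

    #include <boost/assert.hpp>
    #include <boost/coroutine/asymmetric_coroutine.hpp>
    #include <boost/core/lightweight_test.hpp>
    #include <cstring>

    typedef boost::coroutines::asymmetric_coroutine< char const * > channel;

    // back channel: the assert handler pushes the failed expression out of
    // the coroutine running the test case, suspending it
    static channel::push_type * sink_ = 0;

    namespace boost
    {
        void assertion_failed( char const * expr, char const * /*function*/,
            char const * /*file*/, long /*line*/ )
        {
            ( *sink_ )( expr );  // control returns to the main test logic here
        }
    }

    // hypothetical function under test
    void f( int x )
    {
        BOOST_ASSERT( x >= 0 );
    }

    int main()
    {
        channel::pull_type source( []( channel::push_type & sink )
        {
            sink_ = &sink;
            f( -1 );             // runs on the coroutine's stack
        } );

        BOOST_TEST( static_cast< bool >( source ) );  // handler fired
        BOOST_TEST( std::strcmp( source.get(), "x >= 0" ) == 0 );

        // destroying 'source' unwinds the suspended coroutine's stack,
        // so f never resumes past the failed assertion
        return boost::report_errors();
    }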
participants (3)
- Beman Dawes
- Nat Goodspeed
- Peter Dimov