Certain library function errors are difficult to regression test because it is not obvious how to observe their behavior in a test framework. Examples include:

1) Selection of the correct overload or template specialization when the only difference in effect is a performance enhancement.

2) Verification that BOOST_ASSERT fires when a precondition is violated.

3) Verification that I/O and other operating system API errors are handled correctly.

4) More generally, something occurs in a function under test that is hard to observe but needs to be verified.

How do other Boost developers address these regression testing needs?

It seems to me all of these examples can be solved by creating some form of back channel (or channels) to pass information between caller and callee. All require getting additional information out of a function after it is called. (3) also requires passing additional information into a function when it is called.

An ideal solution:

* Works in the existing Boost regression test framework. So much the better if it also works in other test frameworks.

* Does not violate the ODR or depend on undefined behavior.

* Does not alter the interface of the function under test.

* Does not generate any code except when the function is actually being tested.

* Does not require that each test case be run as a separate program.

* Works even if the function under test is noexcept.

* Is lightweight, utterly reliable, and has few if any dependencies beyond those already present.

--Beman