
On 20/10/12 05:03, Gennadiy Rozental wrote:
Hi,
It's been a long while since I merged any changes into the Boost release branch, and by now there is a whole bunch of new features there. Some are small, but others are big ones that can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what we should do with them. Specifically, let me know if you think these require some kind of (mini) review from the community, but any other comments are welcome as well.

Hi,
thanks for taking the time to make this ready for release. I would appreciate it if all the new Boost.Test macros started with BOOST_TEST_ and you provided equivalent macros for the existing ones that don't follow this pattern. I understand that you want short names, but avoiding possible collisions is better than having short names. I could also accept BOOST_TF_ for Boost Test Framework or BOOST_TL_ for Boost Test Library.
So here we go:
I. New testing tool BOOST_CHECKA
This tool is based on an excellent idea from Kevlin Henney. I chose the name CHECKA for this tool due to the lack of a better name, but I am open to suggestions. This testing tool is capable of replacing a whole bunch of existing tools like BOOST_CHECK_EQUAL, BOOST_CHECK_GT, etc. Usage is the most natural you can wish for:
BOOST_CHECKA( var1 - var2 >= 12 );
And the output will include as much information as we can get:
error: in "foo": check var1 - var2 >= 12 failed [23-15<12] I like this a lot. How it works? What about BOOST_TEST_CHECK (see above)?
II. New "data driven test case" subsystem
The new data driven test case subsystem represents a generalization of the parameterized test case and the test case template, and eventually will replace both. The idea is to allow the user to specify an arbitrary (monomorphic or polymorphic) set of samples and run a test case on each of the samples. Samples can have different arity, thus you can have test cases with multiple parameters of different types. For now we support the following dataset kinds (aside from generators, none of the dataset construction routines performs *any* copying):
a) singleton - dataset constructed out of a single sample
data::make(10) - singleton dataset with an integer sample
data::make("qwerty") - singleton dataset with a string sample
b) array - dataset constructed out of a C array
int a[] = {1,2,3,4,5,6};
data::make(a) - dataset with 6 integer values
c) collection - dataset constructed out of a C++ forward-iterable collection
std::vector<double> v{1.2, 2.3, 3.4, 5.6};
data::make(v) - dataset with 4 double values
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3};
int b[] = {7,8,9};
data::make(a) + data::make(b) - dataset with 6 integer values
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is the sum of the argument dataset arities.
int a[] = {1,2,3};
char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) - dataset with 3 samples which are pairs of int and char*.
f) grid - dataset constructed by "multiplying" 2 datasets of the same or different sizes and types
This dataset has an arity which is the sum of the argument dataset arities.
int a[] = {1,2,3};
char* b[] = {"qwe", "asd"};
double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) - dataset with 12 samples which are tuples of int, char* and double.
g) xrange - generator dataset which produces samples in some range
data::xrange( 0., 3., 0.4 ) - dataset with 8 double samples
data::xrange<int>((data::begin=9, data::end=15)) - dataset with 6 int samples
data::xrange( 1., 7.5 ) - dataset with 7 double samples
data::xrange( 5, 0, -1 ) - dataset with 5 int samples
h) random - generator dataset with an unlimited number of random samples
data::random(data::distribution = std::normal_distribution<>(5.,2)) - dataset with random double numbers following the specified distribution

data::random(( data::engine = std::minstd_rand(), data::distribution = std::discrete_distribution<>(), data::seed = 20UL )) - dataset with random int numbers following the specified distribution
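If I read this right, these constructions can be combined freely. Just to check my understanding, something like the following should then be a valid dataset (purely my guess from the descriptions above, the arrays are made up):

int a[] = {1,2,3};
int b[] = {7,8,9};
char const* names[] = {"n1", "n2", "n3", "n4", "n5", "n6"};

// join the two int datasets (6 samples) and zip with a same-size dataset of
// strings: the result should have 6 samples of arity 2 (int, char const*)
auto ds = ( data::make(a) + data::make(b) ) ^ data::make(names);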
While all these interfaces can be used by themselves to build complex datasets for various purposes, the primary use case they were developed for is the new data driven test case interface:
BOOST_DATA_TEST_CASE( test_name, dataset, parameter_names... )
Here are a couple of examples:
int samples1[] = {1,2,3};

BOOST_DATA_TEST_CASE( t1, samples1, sample )
{
    BOOST_CHECKA( foo(sample) > 0 );
}
The above test case is going to be executed 3 times with different sample values.
char* strs[] = {"qwe", "asd", "zxc", "mkl" };
BOOST_DATA_TEST_CASE( t1, data::xrange(4) ^ strs ^ data::random(), intval, str, dblval )
{
    MyObj obj( dblval, str );

    BOOST_CHECKA( obj.goo() == intval );
}
The above test case will be executed 4 times with different values of the parameters intval, str, and dblval.

Yes, this will be very useful. Which will be the values for intval? Is it the index of the tuple? What about adding something like a product test case that will execute the test with the Cartesian product? Is this already available?
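For instance, I was thinking of something along these lines (only a guess at the syntax on my side, reusing the grid operator described above; goo() is a made-up function):

int    sizes[]  = {1, 2, 3};
double ratios[] = {0.5, 1.0};

// 3 x 2 = 6 runs, one per (size, ratio) combination
BOOST_DATA_TEST_CASE( t2, data::make(sizes) * data::make(ratios), size, ratio )
{
    BOOST_CHECKA( goo(size, ratio) > 0 );
}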
Polymorphic datasets are still being developed, but should appear soon(ish).
III. Auto-registered test unit decorators.
Previously it was not possible and/or convenient to assign attributes to automatically registered test units. To alleviate this I've introduced the notion of a test unit decorator. These can be "attached" to any test unit, similarly to how it is done in other languages.

Could you give some references?
The following decorators are already implemented:

label - adds labels to a test unit

expected_failures - sets the expected failures for a test unit

Could you give an example showing the utility of this decorator?

timeout - sets a timeout for a test unit

Very useful.

description - sets a test unit description

depends_on - sets a test unit dependency

What do you mean by depends_on? Is the execution of the test subject to the success of its dependencies or whether the test is enabled/disabled?

enable_if/disable_if - facilitates a test unit status change

fixture - assigns a fixture to a test unit
The test unit description is a new test unit attribute, which is reported by the new list_content command line argument described below. Usage of labels is covered below as well. enable_if/disable_if allow new, much more flexible test management. By adding enable_if/disable_if decorators to a test unit you can conditionally select, at construction time, which test units to run based on some compile-time or run-time parameters.
And finally we have suite-level fixtures, which are set by attaching the fixture decorator to a test suite (suite-level fixtures are executed once per test suite).
Attaching a decorator is facilitated by using BOOST_TEST_DECORATOR. Note that you can use any of the '+', '-', '*' symbols to attach a decorator (and any number of '*'):

I guess that this should be useful, but could you give a practical example? Is there a difference between the symbols? And if not, why do you provide all of them?
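Just to check that I understood the intended usage, would it be something like this (the decorator names are taken from your list, the rest is my guess)?

BOOST_TEST_DECORATOR(
    + timeout( 100 )
    + description( "stress test for subsystem2" )
)
BOOST_AUTO_TEST_CASE( stress_subsystem2 )
{
    // ...
}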
IV. Support for "run by label" plus more improvements for filtered runs
Previously you only had the ability to filter the test units to execute by name. Now you can attach labels to test units and create collections to run which are located in arbitrary positions in the test tree. For example, you can have a special label for performance test cases, which is attached to all test units responsible for testing the performance of your various components. Or you might want to collect exception safety tests, etc. To filter test units by label you still use the --run_test CLA. Labels are denoted by the @ prefix:
test.exe --run=@performance
You can now repeat the --run_test argument to specify multiple conditions:
I guess you mean --run.
test.exe --run=@performance,@exception_safety --run=prod/subsystem2
In addition, run by name/label now recognizes dependencies. So if test unit A depends on test unit B, and test unit B is disabled or is not part of the current run, test unit A will not run either.
Finally, you now have the ability to specify "negative" conditions by prefixing a name or label with '!':
test.exe --run=!@performance
This will run all test units which are not labeled with "performance".
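So, if I follow, attaching the label would look roughly like this (again only my guess at the spelling), and the runs above would then pick it up:

BOOST_TEST_DECORATOR( + label( "performance" ) )
BOOST_AUTO_TEST_CASE( measure_subsystem2_throughput )
{
    // ...
}

// selected by:  test.exe --run=@performance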
V. Support for failure context
In many scenarios it is desirable to attach additional information to a failure which you do not need to see if the run is successful, so using regular print statements is undesirable. This becomes especially important when you develop a common routine which performs your testing and invoke it from multiple test units. In this case the failure location is no help at all.
Yes, this should be very useful.
To alleviate this problem, two new tools are introduced: BOOST_TEST_INFO and BOOST_TEST_CONTEXT.
BOOST_TEST_INFO attaches a context to the next assertion which is executed. For example:
BOOST_TEST_INFO( "a=" << a ); BOOST_CHECKA( foo(a) == 0 ); Could the following be provided as well?
BOOST_CHECKA( foo(a) == 0, "a=" << a );
BOOST_TEST_CONTEXT attaches a context to all assertions within a scope:
BOOST_CHECK_CONTEXT( "Starting test foo from subset " << ss ) {
BOOST_CHECKA( foo(ss, a) == 1 ); BOOST_CHECKA( foo(ss, b) == 2 );
}
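In particular, for the "common routine called from multiple test units" case I imagine something like this (the Subsystem class and the helper are made up on my side):

// common check routine reused from several test cases
void check_subsystem( Subsystem const& ss )
{
    BOOST_TEST_CONTEXT( "checking subsystem " << ss.name() )
    {
        BOOST_TEST_INFO( "size=" << ss.size() );
        BOOST_CHECKA( ss.size() > 0 );

        BOOST_CHECKA( ss.is_consistent() );
    }
}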
VI. Two new command line arguments: list_content, wait_for_debugger
Using list_content you can see the full or partial test tree of your test module. wait_for_debugger allows forcing a test module to wait until you attach a debugger.
VII. Colored output.
Using the new CLA --color_output you can turn on colored output for error and information messages.
VIII. New production testing tools interface introduced
This feature was covered in detail in my presentation at BoostCon 2010. The gist of it is the ability to use the Boost.Test testing tools interfaces in production code (nothing to do with testing). A user-supplied implementation is plugged in.

Could you tell us more about these utilities? Couldn't they be moved to existing libraries or a specific library?

IX. Number of smaller improvements:
* Notion of framework shutdown. Allows eliminating some fake memory leaks.
* Added checkpoints at fixture entry points, the test case entry point and the test case exit point for auto-registered test cases.
* New portable FPE interfaces introduced, and FPE handling is separated from system error handling. You can detect FPEs even if catch_system_error is false.
* Added the ability to erase a registered exception translator.
* execution_monitor: new interface vexecute - to be used to monitor nullary functions with no result values.
* test_tree_visitor interface extended to facilitate visitors applying the same action to all test units.
* execution_monitor uses typeid to report the "real" exception type if possible.
* New ability to redirect the leaks report into a file.
Wow, you have added a lot of new and interesting features. I will check the documentation and start using them as soon as possible. As for whether a mini review is needed, it is up to you, but it is clear that a review would give you some feedback before you deliver all these features.

Best,
Vicente