Boost.Test updates in trunk: need for (mini) review?

Hi,

It's been a long while since I merged any changes into the boost release, and by now there is a whole bunch of new features there. Some are small, but others are big ones that can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what we should do with them. Specifically, let me know if you think these require some kind of (mini) review from the community, but any other comments are welcome as well. So here we go:

I. New testing tool BOOST_CHECKA

This tool is based on an excellent idea from Kevlin Henney. I chose the name CHECKA due to the lack of a better name, but I am open to suggestions. This testing tool is capable of replacing a whole bunch of existing tools like BOOST_CHECK_EQUAL, BOOST_CHECK_GT etc. Usage is the most natural you can wish for:

  BOOST_CHECKA( var1 - var2 >= 12 );

And the output will include as much information as we can get:

  error: in "foo": check var1 - var2 >= 12 failed [23-15<12]

II. New "data driven test case" subsystem

The new data driven test case subsystem represents a generalization of the parameterized test case and the test case template, and will eventually replace both. The idea is to allow the user to specify an arbitrary (monomorphic or polymorphic) set of samples and run a test case on each of the samples. Samples can have different arity, thus you can have test cases with multiple parameters of different types.
For now we support the following dataset kinds (aside from the generators, none of the dataset construction routines performs *any* copying):

a) singleton - dataset constructed out of a single sample

  data::make(10)       - singleton dataset with an integer sample
  data::make("qwerty") - singleton dataset with a string sample

b) array - dataset constructed out of a C array

  int a[] = {1,2,3,4,5,6};
  data::make(a) - dataset with 6 integer values

c) collection - dataset constructed out of a C++ forward iterable collection

  std::vector<double> v{1.2, 2.3, 3.4, 5.6};
  data::make(v) - dataset with 4 double values

d) join - dataset constructed by joining 2 datasets of the same type

  int a[] = {1,2,3};
  int b[] = {7,8,9};
  data::make(a) + data::make(b) - dataset with 6 integer values

e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type. This dataset has an arity which is the sum of the argument dataset arities.

  int a[] = {1,2,3};
  char* b[] = {"qwe", "asd", "zxc"};
  data::make(a) ^ data::make(b) - dataset with 3 samples which are pairs of int and char*

f) grid - dataset constructed by "multiplying" 2 datasets of possibly different sizes and types. This dataset has an arity which is the sum of the argument dataset arities.

  int a[] = {1,2,3};
  char* b[] = {"qwe", "asd"};
  double c[] = {1.1, 2.2};
  data::make(a) * data::make(b) * data::make(c) - dataset with 12 samples which are tuples of int, char* and double
g) xrange - generator dataset which produces samples in some range

  data::xrange( 0., 3., 0.4 )                      - dataset with 8 double samples
  data::xrange<int>((data::begin=9, data::end=15)) - dataset with 6 int samples
  data::xrange( 1., 7.5 )                          - dataset with 7 double samples
  data::xrange( 5, 0, -1 )                         - dataset with 5 int samples

h) random - generator dataset with an unlimited number of random samples

  data::random(data::distribution = std::normal_distribution<>(5.,2))
  - dataset with random double numbers following the specified distribution

  data::random(( data::engine = std::minstd_rand(),
                 data::distribution = std::discrete_distribution<>(),
                 data::seed = 20UL ));
  - dataset with random int numbers following the specified distribution

While all these interfaces can be used by themselves to build complex datasets for various purposes, the primary use case they were developed for is the new data driven test case interface:

  BOOST_DATA_TEST_CASE( test_name, dataset, parameter_names... )

Here are a couple of examples:

  int samples1[] = {1,2,3};
  BOOST_DATA_TEST_CASE( t1, samples1, sample )
  {
    BOOST_CHECKA( foo(sample) > 0 );
  }

The above test case is going to be executed 3 times with different sample values.

  char* strs[] = {"qwe", "asd", "zxc", "mkl" };
  BOOST_DATA_TEST_CASE( t2, data::xrange(4) ^ strs ^ data::random(), intval, str, dblval )
  {
    MyObj obj( dblval, str );
    BOOST_CHECKA( obj.goo() == intval );
  }

The above test case will be executed 4 times with different values of the parameters intval, str, and dblval.

Polymorphic datasets are still being developed, but should appear soon(ish).

III. Auto-registered test unit decorators

Previously it was not possible and/or convenient to assign attributes to the automatically registered test units. To alleviate this I've introduced the notion of a test unit decorator.
These can be "attached" to any test unit, similarly to how it is done in other languages. The following decorators are already implemented:

  label                - adds labels to a test unit
  expected_failures    - sets expected failures for a test unit
  timeout              - sets a timeout for a test unit
  description          - sets a test unit description
  depends_on           - sets a test unit dependency
  enable_if/disable_if - facilitates a test unit status change
  fixture              - assigns a fixture to a test unit

The test unit description is a new test unit attribute, which is reported by the new list_content command line argument described below. Usage of labels is covered below as well. enable_if/disable_if allow new, much more flexible test management. By adding enable_if/disable_if decorators to a test unit you can conditionally select which test units to run at construction time, based on some compile-time or run-time parameters. And finally we have suite level fixtures, which are set by attaching the fixture decorator to a test suite (suite level fixtures are executed once per test suite).

Attaching a decorator is facilitated by BOOST_TEST_DECORATOR. Note that you can use any of the '+', '-', '*' symbols to attach a decorator (and any number of '*'):

  BOOST_TEST_DECORATOR( + unittest::fixture<suite_fixture>() )
  BOOST_AUTO_TEST_SUITE( my_suite1 )

  BOOST_TEST_DECORATOR( - unittest::timeout( 100 )
                        - unittest::expected_failures( 1 )
                        - unittest::enable_if( 100 < 50 ) )
  BOOST_AUTO_TEST_CASE( my_test5 )

  BOOST_TEST_DECORATOR( **** unittest::label( "L3" )
                        **** unittest::description( "suite description" )
                        **** unittest::depends_on( "my_suite2/my_test7" ) )
  BOOST_AUTO_TEST_CASE( my_test11 )

IV. Support for "run by label" plus more improvements for filtered runs

Previously you only had the ability to filter test units to execute by name. Now you can attach labels to test units and create collections to run which are located in arbitrary positions in the test tree.
For example you can have a special label for performance test cases, attached to all test units responsible for testing the performance of your various components. Or you might want to collect exception safety tests, etc. To filter test units by label you still use the --run_test CLA. Labels are denoted by the @ prefix:

  test.exe --run=@performance

You can now repeat the --run_test argument to specify multiple conditions:

  test.exe --run=@performance,@exception_safety --run=prod/subsystem2

In addition, run by name/label now recognizes dependencies. So if test unit A depends on test unit B, and test unit B is disabled or is not part of the current run, test unit A will not run either. Finally, you now have the ability to specify "negative" conditions, by prefixing a name or label with !:

  test.exe --run=!@performance

This will run all test units which are not labeled with "performance".

V. Support for failure context

In many scenarios it is desirable to attach additional information to a failure, but you do not need to see it if the run is successful; thus using regular print statements is undesirable. This becomes especially important when you develop a common routine which performs your testing and invoke it from multiple test units. In that case the failure location alone is no help at all. To alleviate this problem two new tools are introduced:

  BOOST_TEST_INFO
  BOOST_TEST_CONTEXT

BOOST_TEST_INFO attaches context to the next assertion which is executed. For example:

  BOOST_TEST_INFO( "a=" << a );
  BOOST_CHECKA( foo(a) == 0 );

BOOST_TEST_CONTEXT attaches context to all assertions within a scope:

  BOOST_TEST_CONTEXT( "Starting test foo from subset " << ss )
  {
    BOOST_CHECKA( foo(ss, a) == 1 );
    BOOST_CHECKA( foo(ss, b) == 2 );
  }

VI. Two new command line arguments: list_content, wait_for_debugger

Using list_content you can see a full or partial test tree of your test module. wait_for_debugger allows you to force the test module to wait until you attach a debugger.

VII. Colored output
Using the new CLA --color_output you can turn on colored output for error and information messages.

VIII. New production testing tools interface

This feature was covered in detail in my presentation at BoostCon 2010. The gist of it is the ability to use Boost.Test testing tool interfaces in production (nothing to do with testing) code, with a user-supplied implementation plugged in.

IX. A number of smaller improvements:

* Notion of framework shutdown. Allows eliminating some fake memory leak reports.
* Added checkpoints at fixture entry points, test case entry point and test case exit point for auto registered test cases.
* New portable FPE interfaces introduced, and FPE handling is separated from system error handling. You can detect FPEs even if catch_system_error is false.
* Added the ability to erase a registered exception translator.
* execution_monitor: new interface vexecute - to be used to monitor nullary functions with no result values.
* test_tree_visitor interface extended to facilitate visitors applying the same action to all test units.
* execution_monitor uses typeid to report the "real" exception type if possible.
* New ability to redirect the leaks report into a file.

Thank you for your time,
Gennadiy

On Sat, Oct 20, 2012 at 03:03:57AM +0000, Gennadiy Rozenal wrote:
Hi,
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well.
I couldn't really determine from the recent thread titled "[boost] [test] new features use c++11 features" whether this new and improved Test would use C++11 features unconditionally or conditionally. I know you are all up in arms about moving the "state of the language forward" and all, but it'd be a bit sad if a rather important piece of Boost infrastructure suddenly became unusable without any rationale or warning. Call me a luddite, but I like it when my code can continue to benefit from library bugfixes and improvements. Staying on legacy Boost doesn't help anyone. I really don't like playing whack-a-mole on every single C++11 creep into Boost, but it seems that unless checked, it poisons everything. -- Lars Viklund | zao@acc.umu.se

Lars Viklund <zao <at> acc.umu.se> writes:
I couldn't really determine from the recent thread titled "[boost] [test] new features use c++11 features" whether this new and improved Test would use C++11 features unconditionally or conditionally.
All new features are going to be optional. There are no plans to drop support for pre-C++11 compilers (and only a few features really need C++11 at this point anyway). Gennadiy

Le 20/10/12 05:03, Gennadiy Rozenal a écrit :
Hi,
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well.
Hi, I'm trying to build the documentation in trunk and I'm getting:

bjam
/Users/viboes/boost/trunk/tools/build/v2/util/path.jam:516: in make-UNIX from module path
error: Empty path passed to 'make-UNIX'
/Users/viboes/boost/trunk/tools/build/v2/util/path.jam:41: in path.make from module path
/Users/viboes/boost/trunk/libs/test/doc/utf-boostbook.jam:20: in load from module utf-boostbook
/Users/viboes/boost/trunk/tools/build/v2/kernel/modules.jam:289: in import from module modules
/Users/viboes/boost/trunk/tools/build/v2/build/toolset.jam:39: in toolset.using from module toolset
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:995: in using from module project-rules
Jamfile.v2:12: in modules.load from module Jamfile</Users/viboes/boost/trunk/libs/test/doc>
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:311: in load-jamfile from module project
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:64: in load from module project
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:145: in project.find from module project
/Users/viboes/boost/trunk/tools/build/v2/build-system.jam:552: in load from module build-system
/Users/viboes/boost/trunk/tools/build/v2/kernel/modules.jam:289: in import from module modules
/Users/viboes/boost/trunk/tools/build/v2/kernel/bootstrap.jam:139: in boost-build from module
/Users/viboes/boost/trunk/boost-build.jam:17: in module scope from module

What could be wrong in my environment? BTW, could you tell us which features are exclusive to a C++11 compliant compiler? Best, Vicente

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
I'm trying to build the documentation in trunk and I'm getting [...] What could be wrong on my environment?
I am not big on documentation tools. I have not made any changes since I last built it. Steven, would you mind helping here again?
BTW,
could you tell us which features are exclusive of a c++11 compliant compiler?
The data driven test case subsystem is 100% C++11. For BOOST_CHECKA, only some advanced variations are. I also plan to use std::thread to finally make Boost.Test thread safe, plus a couple of other new things here and there. Gennadiy

Hello, 20.10.2012 13:09, Gennadiy Rozenal wrote:
BTW,
could you tell us which features are exclusive of a c++11 compliant compiler? Data driven test case 100% c++11. BOOST_CHECKA only some advanced variations.
The "data driven test case" subsystem looks like a good feature, but why restrict it to C++11? What kind of features from C++11 are required? Variadic macros for BOOST_DATA_TEST_CASE? Best Regards, Evgeny

Evgeny Panasyuk <evgeny.panasyuk <at> gmail.com> writes:
"data driven test case" subsystem looks like a good feature, but why restricting to C++11? Which kind of features from C++11 are required? Variadic macro for BOOST_DATA_TEST_CASE?
No. It uses a lot of C++11 inside the implementation. It might have been possible to implement a somewhat limited C++98 version, but I do not have time to work on that. Feel free to submit a patch ;) Gennadiy

20.10.2012 13:44, Gennadiy Rozenal wrote:
No. It uses a lot of C++11 inside the implementation. It might have been possible to implement somewhat limited c++98 version, but I do not have time to work on that. Feel free to submit a patch ;)
1. Where is the latest version? In svn trunk? 2. Is it OK to introduce dependencies on other boost libraries in order to replace C++11 features? For instance replace (std::list + std::move) with (boost::container::list + boost::move), (std::shared_ptr + std::make_shared) with (boost::shared_ptr + boost::make_shared)? Best Regards, Evgeny

Evgeny Panasyuk <evgeny.panasyuk <at> gmail.com> writes:
20.10.2012 13:44, Gennadiy Rozenal wrote:
No. It uses a lot of C++11 inside the implementation. It might have been possible to implement somewhat limited c++98 version, but I do not have time to work on that. Feel free to submit a patch ;)
1. Where is the latest version? In svn trunk?
Yes.
2. Is it OK to introduce dependencies on other boost libraries in order to replace C++11 features? For instance replace (std::list + std::move)
std::list older compilers should be able to handle. std::move and std::forward could probably be removed altogether; they are used only to achieve proper move semantics for dataset construction.
with (boost::container::list + boost::move), (std::shared_ptr + std::make_shared) with (boost::shared_ptr + boost::make_shared)?

boost::shared_ptr is fine, I guess.
Gennadiy

20.10.2012 23:31, Gennadiy Rozenal wrote:
2. Is it OK to introduce dependencies on other boost libraries in order to replace C++11 features? For instance replace (std::list + std::move)

std::list older compilers should be able to handle.
Yes, of course :) The question was related to the move semantics of std::list itself (like the move constructor). boost::container::list also has a move constructor (like std::list in C++11) which works under all C++ ISO standards, with the help of Boost.Move.
std::move, std::forward could probably be removed altogether; they are used only to achieve proper move semantics for dataset construction.

Yes, instead of using Boost's substitutes for the C++11 features, it is possible to refactor the code to remove the need for them. But obviously that would take more time than just replacing std:: with boost::, which is why I am asking.
with (boost::container::list + boost::move), (std::shared_ptr + std::make_shared) with (boost::shared_ptr + boost::make_shared)? boost shared_ptr is fine I guess.

Ok.
Best Regards, Evgeny

AMDG On 10/20/2012 02:09 AM, Gennadiy Rozenal wrote:
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
I'm trying to build the documentation in trunk and I'm getting [...] What could be wrong on my environment?
I am not big on documentation tools. I have not made any changes since I last built it. Steven, would you mind helping here again?
Honestly, I'm not sure how this ever worked. I've just committed a fix for this. You'll have to update Boost.Build as well, because I also had to patch auto-index.jam to eliminate an ambiguity. In Christ, Steven Watanabe

AMDG On 10/19/2012 08:03 PM, Gennadiy Rozenal wrote:
Hi,
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well.
So here we go:
I. New testing tool BOOST_CHECKA
This tool is based on an excellent idea from Kevlin Henney. I chose the name CHECKA due to the lack of a better name, but I am open to suggestions.
BOOST_CHECK_EXPR? Ideally I'd like this to be called BOOST_CHECK, but I don't think that can be made perfectly backwards compatible.
This testing tool is capable of replacing a whole bunch of existing tools like BOOST_CHECK_EQUAL, BOOST_CHECK_GT etc. Usage is the most natural you can wish for:
BOOST_CHECKA( var1 - var2 >= 12 );
And the output will include as much information as we can get:
error: in "foo": check var1 - var2 >= 12 failed [23-15<12]
I like it, however, the limitations need to be clearly documented.
II. New "data driven test case" subsystem
<snip> Here are couple examples:
int samples1[] = {1,2,3}; BOOST_DATA_TEST_CASE( t1, samples1, sample ) { BOOST_CHECKA( foo(sample) > 0 ); }
This is great. I've often needed this functionality and I've always ended up writing manual loops. In Christ, Steven Watanabe

Steven Watanabe wrote:
I. New testing tool BOOST_CHECKA
This tool is based on an excellent idea from Kevlin Henney. I chose the name CHECKA due to the lack of a better name, but I am open to suggestions.
BOOST_CHECK_EXPR? Ideally I'd like this to be called BOOST_CHECK, but I don't think that can be made perfectly backwards compatible.
How about BOOST_CHECK_EXPRESSION ?
II. New "data driven test case" subsystem
<snip> Here are couple examples:
int samples1[] = {1,2,3}; BOOST_DATA_TEST_CASE( t1, samples1, sample ) { BOOST_CHECKA( foo(sample) > 0 ); }
How about BOOST_TEST_FOREACH(<test?>, <collection>, <item name>) ? Robert Ramey

Robert Ramey <ramey <at> rrsd.com> writes:
Steven Watanabe wrote:
I. New testing tool BOOST_CHECKA How about BOOST_CHECK_EXPRESSION ?
Kinda too long. And I'd like BOOST_CHECK_ASSERTION better if we are going with a long name.
II. New "data driven test case" subsystem BOOST_DATA_TEST_CASE( t1, samples1, sample ) BOOST_TEST_FOREACH(<test?>, <collection>, <item name>) ?
1. I have already been using this macro since before BOOST_FOREACH was introduced into trunk.
2. The name should really convey that this is a test case.
3. A data test case can have arbitrary arity; "foreach" may look ambiguous in such a scenario.

Gennadiy

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozenal Sent: Saturday, October 20, 2012 8:47 PM To: boost@lists.boost.org Subject: Re: [boost] Boost.Test updates in trunk: need for (mini) review?
Robert Ramey <ramey <at> rrsd.com> writes:
Steven Watanabe wrote:
I. New testing tool BOOST_CHECKA How about BOOST_CHECK_EXPRESSION ?
Kinda too long. And I'd like BOOST_CHECK_ASSERTION better if we are going with a long name.
What's wrong with BOOST_CHECK_ASSERT? Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
-----Original Message----- From: boost-bounces <at> lists.boost.org [mailto:boost-bounces <at>
lists.boost.org] On Behalf Of Gennadiy
Rozenal Sent: Saturday, October 20, 2012 8:47 PM To: boost <at> lists.boost.org Subject: Re: [boost] Boost.Test updates in trunk: need for (mini) review?
Robert Ramey <ramey <at> rrsd.com> writes:
Steven Watanabe wrote:
I. New testing tool BOOST_CHECKA
What's wrong with BOOST_CHECK_ASSERT?
This is just two verbs stuck together. Both mean essentially the same thing. CHECK_ASSERTION at least syntactically makes sense. Gennadiy

on Sun Oct 21 2012, Gennadiy Rozenal <rogeeff-AT-gmail.com> wrote:
Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
-----Original Message----- From: boost-bounces <at> lists.boost.org [mailto:boost-bounces <at>
lists.boost.org] On Behalf Of Gennadiy
Rozenal Sent: Saturday, October 20, 2012 8:47 PM To: boost <at> lists.boost.org Subject: Re: [boost] Boost.Test updates in trunk: need for (mini) review?
Robert Ramey <ramey <at> rrsd.com> writes:
Steven Watanabe wrote:
I. New testing tool BOOST_CHECKA
What's wrong with BOOST_CHECK_ASSERT?
This is just two verbs stuck together. Both mean essentially the same thing. CHECK_ASSERTION at least syntactically makes sense.
BOOST_TEST? :-)

David Abrahams <dave <at> boostpro.com> writes:
BOOST_TEST?
Ok. How about this:

  BOOST_TEST         - check level tool
  BOOST_TEST_REQUIRE - require level tool
  BOOST_TEST_WARN    - warning level tool

Does anyone have any other comments about the new features? Do they look release ready? Gennadiy

P.S. If anyone is still interested, I have several more non-trivial Boost.Test based extensions/new features. Please contact me directly at rogeeff <at> gmail dot com.

Le 26/10/12 05:51, Gennadiy Rozenal a écrit :
David Abrahams <dave <at> boostpro.com> writes:
BOOST_TEST? Ok. How about this:
BOOST_TEST - check level tool BOOST_TEST_REQUIRE - require level tool BOOST_TEST_WARN - warning level tool
Hi, I don't think that there should be conflicts, but I just wanted to report that boost/detail/lightweight_test.hpp defines:

#define BOOST_TEST(expr) ((expr)? (void)0: ::boost::detail::test_failed_impl(#expr, __FILE__, __LINE__, BOOST_CURRENT_FUNCTION))
#define BOOST_ERROR(msg) ::boost::detail::error_impl(msg, __FILE__, __LINE__, BOOST_CURRENT_FUNCTION)
#define BOOST_TEST_EQ(expr1,expr2) ( ::boost::detail::test_eq_impl(#expr1, #expr2, __FILE__, __LINE__, BOOST_CURRENT_FUNCTION, expr1, expr2) )
#define BOOST_TEST_NE(expr1,expr2) ( ::boost::detail::test_ne_impl(#expr1, #expr2, __FILE__, __LINE__, BOOST_CURRENT_FUNCTION, expr1, expr2) )
Does anyone have any other comments about new features? Do they look release ready?
I cannot say anything until the documentation is ready for review :( Please add a history with a list of the new features. Best, Vicente

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
Hi,
I don't think that there should be conflicts but I wanted just to report that boost/detail/lightweight_test.hpp defines
#define BOOST_TEST(expr) ((expr)? (void)0:
No one is expected to include both Boost.Test and other unit testing variants in the same test file. Gennadiy

On 4 November 2012 09:19, Gennadiy Rozenal <rogeeff@gmail.com> wrote:
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
I don't think that there should be conflicts but I wanted just to report that boost/detail/lightweight_test.hpp defines
#define BOOST_TEST(expr) ((expr)? (void)0:
No one is expected to include both Boost.Test and other unit testing variants in the test file.
Boost.Test and lightweight_test both define BOOST_ERROR so they're already incompatible.

Steven Watanabe <watanabesj <at> gmail.com> writes:
I. New testing tool BOOST_CHECKA
This tool is based on excellent idea from Kevlin Henney. I chose the name CHECKA for this tool due to the lack of better name, but I am open to suggestions.
BOOST_CHECK_EXPR? Ideally I'd like this to be called BOOST_CHECK, but I don't think that can be made perfectly backwards compatible.
Yep. There are a few cases where BOOST_CHECKA can't handle what BOOST_CHECK can. I really would prefer a name as short as possible. This is intended to be the primary testing tool from now on. And I am kind of stuck with the triplets WARN/CHECK/REQUIRE, so any name has to fit into this schema. I've considered:

  BOOST_UCHECK (U for universal)
  BOOST_CHECK_EX (EX for extended)
  BOOST_CHECK_ASSERTION - too long?
  BOOST_CASSERT/BOOST_WASSERT/BOOST_RASSERT (C/W/R for check, warn and require levels)

Gennadiy

Le 20/10/12 05:03, Gennadiy Rozenal a écrit :
Hi,
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well. Hi,
Thanks for taking the time to make this ready for release. I would appreciate it if all the new Boost.Test macros started with BOOST_TEST_, and if you provided equivalent macros for the existing ones that don't follow this pattern. I understand that you want short names, but avoiding possible collisions is better than having short names. I could also accept BOOST_TF_ (Boost Test Framework) or BOOST_TL_ (Boost Test Library).
So here we go:
I. New testing tool BOOST_CHECKA
This tool is based on an excellent idea from Kevlin Henney. I chose the name CHECKA due to the lack of a better name, but I am open to suggestions. This testing tool is capable of replacing a whole bunch of existing tools like BOOST_CHECK_EQUAL, BOOST_CHECK_GT etc. Usage is the most natural you can wish for:
BOOST_CHECKA( var1 - var2 >= 12 );
And the output will include as much information as we can get:
error: in "foo": check var1 - var2 >= 12 failed [23-15<12]

I like this a lot. How does it work? What about BOOST_TEST_CHECK (see above)?
II. New "data driven test case" subsystem
The new data driven test case subsystem represents a generalization of the parameterized test case and the test case template, and will eventually replace both. The idea is to allow the user to specify an arbitrary (monomorphic or polymorphic) set of samples and run a test case on each of the samples. Samples can have different arity, thus you can have test cases with multiple parameters of different types. For now we support the following dataset kinds (aside from the generators, none of the dataset construction routines performs *any* copying):
a) singleton - dataset constructed out of single sample
data::make(10) - singleton dataset with integer sample data::make("qwerty") - singleton dataset with string sample
b) array - dataset constructed out of C array
int a[] = {1,2,3,4,5,6}; data::make(a) - dataset with 6 integer values
c) collection - dataset constructed out of C++ forward iterable collection
std::vector<double> v{1.2, 2.3, 3.4, 5.6}; data::make(v) - dataset with 4 double values
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3}; int b[] = {7,8,9}; data::make(a) + data::make(b) - dataset with 6 integer values
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
f) grid - dataset constructed by "multiplying" 2 datasets of possibly different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
g) xrange - generator dataset which produces samples in some range
data::xrange( 0., 3., 0.4 ) - dataset with 8 double samples data::xrange<int>((data::begin=9, data::end=15)) - dataset with 6 int samples data::xrange( 1., 7.5 ) - dataset with 7 double samples data::xrange( 5, 0, -1 ) - dataset with 5 int samples
h) random - generator dataset with unlimited number of random samples
data::random(data::distribution = std::normal_distribution<>(5.,2)) dataset with random double numbers following specified distribution
data::random(( data::engine = std::minstd_rand(), data::distribution = std::discrete_distribution<>(), data::seed = 20UL )); dataset with random int numbers following specified distribution
While all these interfaces can be used by themselves to build complex datasets for various purposes, the primary use case they were developed for is the new data driven test case interface:
BOOST_DATA_TEST_CASE( test_name, dataset, parameter_names... )
Here are couple examples:
int samples1[] = {1,2,3}; BOOST_DATA_TEST_CASE( t1, samples1, sample ) { BOOST_CHECKA( foo(sample) > 0 ); }
The above test case is going to be executed 3 times with different sample values.
char* strs[] = {"qwe", "asd", "zxc", "mkl" };
BOOST_DATA_TEST_CASE( t1, data::xrange(4) ^ strs ^ data::random(), intval, str, dblval ) { MyObj obj( dblval, str );
BOOST_CHECKA( obj.goo() == intval ); }
The above test case will be executed 4 times with different values of the parameters intval, str, and dblval.

Yes, this will be very useful.
Polymorphic datasets are still being developed, but should appear soon(ish).
III. Auto-registered test unit decorators.
Previously it was not possible and/or convenient to assign attributes to automatically registered test units. To alleviate this I've introduced the notion of a test unit decorator. These can be "attached" to any test unit, similarly to how it is done in other languages.

Could you give some references?
The following decorators are already implemented:

label - adds labels to a test unit

expected_failures - sets expected failures for a test unit

Could you give an example showing the utility of this decorator?

timeout - sets a timeout for a test unit

Very useful.

description - sets a test unit description

depends_on - sets a test unit dependency

What do you mean by depends_on? Is the execution of the test subject to the success of its dependencies, or to whether the test is enabled/disabled?

(On the data driven test case above: which will be the values for intval? The index of the tuple? And what about adding something like a product test case that executes the test with the Cartesian product — is this already available?)
enable_if/disable_if - facilitates a test unit status change fixture - assigns fixture to a test units
Test unit description is a new test unit attribute, which is reported by the new list_content command line argument described below. Usage of labels is covered below as well. enable_if/disable_if allow new, much more flexible test management. By adding enable_if/disable_if decorators to a test unit you can conditionally select which test units to run at construction time, based on some compile-time or run-time parameters.
And finally we have suite-level fixtures, which are set by attaching the fixture decorator to a test suite (suite-level fixtures are executed once per test suite).
Attachment of a decorator is facilitated by using BOOST_TEST_DECORATOR. Note that you can use any of the '+', '-', '*' symbols to attach a decorator (and any number of '*'). Is there a difference between them? And if not, why do you provide all of them? I guess that this should be useful, but could you give a practical example?
IV. Support for "run by label" plus more improvements for filtered runs
Previously you only had the ability to filter the test units to execute by name. Now you can attach labels to test units and create collections to run which are located in arbitrary positions in the test tree. For example, you can have a special label for performance test cases, which is attached to all test units responsible for testing the performance of your various components. Or you might want to collect exception safety tests, etc. To filter test units by label you still use the --run_test CLA. Labels are denoted by the @ prefix:
test.exe --run=@performance
You can now repeat the --run_test argument to specify multiple conditions:
I guess you mean --run.
test.exe --run=@performance,@exception_safety --run=prod/subsystem2
In addition, run by name/label now recognizes dependencies. So if test unit A depends on test unit B, and test unit B is disabled or is not part of the current run, test unit A will not run either.
Finally, you now have the ability to specify "negative" conditions by prefixing a name or label with !
test.exe --run=!@performance
This will run all test units which are not labeled with "performance".
V. Support for failure context
In many scenarios it is desirable to attach additional information to a failure, but you do not need to see it if the run is successful; thus using regular print statements is undesirable. This becomes especially important when you develop a common routine which performs your testing and invoke it from multiple test units. In this case the failure location is no help at all.
Yes this should be very useful.
To alleviate this problem two new tools are introduced: BOOST_TEST_INFO BOOST_TEST_CONTEXT
BOOST_TEST_INFO attaches context to the next assertion that is executed. For example:
BOOST_TEST_INFO( "a=" << a ); BOOST_CHECKA( foo(a) == 0 ); Could the following be provided as well?
BOOST_CHECKA( foo(a) == 0, "a=" << a );
BOOST_TEST_CONTEXT attaches context to all assertions within a scope:
BOOST_TEST_CONTEXT( "Starting test foo from subset " << ss ) {
BOOST_CHECKA( foo(ss, a) == 1 ); BOOST_CHECKA( foo(ss, b) == 2 );
}
VI. Two new command line arguments: list_content, wait_for_debugger
Using list_content you can see a full or partial test tree of your test module. wait_for_debugger allows you to force the test module to wait until you attach a debugger.
VII. Colored output.
Using new CLA --color_output you can turn on colored output for error and information messages
VIII New production testing tools interface introduced
This feature was covered in detail in my presentation at BoostCon 2010. The gist of this is the ability to use Boost.Test testing tool interfaces in production (nothing to do with testing) code. A user-supplied implementation is plugged in. Could you tell us more about the utilities? Couldn't these utilities be moved to existing libraries or a specific library?
IX. A number of smaller improvements:
* Notion of framework shutdown. Allows eliminating some fake memory leaks
* Added checkpoints at fixture entry points, test case entry point, and test case exit point for auto-registered test cases
* New portable FPE interfaces introduced, and FPE handling is separated from system error handling. You can detect FPE even if catch_system_error is false
* Added the ability to erase a registered exception translator
* execution_monitor: new interface vexecute - to be used to monitor nullary functions with no result values
* test_tree_visitor interface extended to facilitate visitors applying the same action to all test units
* execution_monitor uses typeid to report the "real" exception type if possible
* New ability to redirect the leaks report into a file
Wow, you have added a lot of new interesting features. I will check the documentation and start using it as soon as possible. As to whether a mini review is needed, it is up to you, but it is clear that a review will provide you some feedback before you deliver all these features. Best, Vicente

"Vicente J. Botet Escriba" <vicente.botet@wanadoo.fr> writes:
Le 20/10/12 05:03, Gennadiy Rozenal a écrit :
Hi,
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well. Hi,
thanks for taking the time to make this ready for release.
I would appreciate it if all the new Boost.Test macros started with BOOST_TEST_, and if you provided equivalent macros for the existing ones that don't follow this pattern. I understand that you want short names, but avoiding possible collisions is better than having short names. I could also accept BOOST_TF_ as Boost Test Framework or BOOST_TL_ as Boost Test Library.
So here we go:
I. New testing tool BOOST_CHECKA
This tool is based on an excellent idea from Kevlin Henney. I chose the name CHECKA for this tool due to the lack of a better name, but I am open to suggestions. This testing tool is capable of replacing a whole bunch of existing tools like BOOST_CHECK_EQUAL, BOOST_CHECK_GT, etc. Usage is the most natural you can wish for:
BOOST_CHECKA( var1 - var2 >= 12 );
And the output will include as much information as we can get:
error: in "foo": check var1 - var2 >= 12 failed [23-15<12] I like this a lot. How does it work? What about BOOST_TEST_CHECK (see above)?
Or just BOOST_TEST as that is now the default testing tool? TEST_CHECK and TEST_ASSERTION etc. are redundant as they say the same thing twice. Alex

Alexander Lamaison <awl03 <at> doc.ic.ac.uk> writes:
Or just BOOST_TEST as that is now the default testing tool? TEST_CHECK and TEST_ASSERTION etc. are redundant as they say the same thing twice.
I need 3 names: warn, check and require levels. What do you propose? Gennadiy

Gennadiy Rozenal <rogeeff@gmail.com> writes:
Alexander Lamaison <awl03 <at> doc.ic.ac.uk> writes:
Or just BOOST_TEST as that is now the default testing tool? TEST_CHECK and TEST_ASSERTION etc. are redundant as they say the same thing twice.
I need 3 names: warn, check and require levels. What do you propose?
I'd call the check version BOOST_TEST and the require version BOOST_TEST_REQUIRE as it's not the tool you immediately reach for. I must admit, I've no idea what the warning variant does. How does that differ from CHECK? I can't find it documented here [1]. [1] http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/utf/testing-tools/re... Alex

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
Hi,
thanks for taking the time to make this ready for release.
I would appreciate if all the new Boost.Test macros start by BOOST_TEST_
I think the train has already left on this one. Plus, unlike many other macros which you use here and there, testing tools are used quite extensively as part of any test module. Making them longer would be an unwelcome change. I do try to use BOOST_TEST as a prefix whenever it makes sense. In this case I'd rather strive for a concise name.
and you provide the equivalent macros for existing ones that don't follows this pattern. I understand that you want short names, but avoid possible collisions is better than having short names. I could accept also BOOST_TF_ as Boost Test Framework or BOOST_TL_ as Boost Test Library.
All these abbreviations hurt my head a bit. There are a bunch of new libraries which are even referred to by abbreviation. Sometimes they are fine, and if there is a consensus that this is the best option, I can live with it, but my preference would be something less cryptic.
So here we go:
I. New testing tool BOOST_CHECKA I like this a lot. How does it work?
A bit of expression template magic, plus some c++11 for advanced type deduction.
What about BOOST_TEST_CHECK (see above)?
Again this is 2 verbs stuck together. I can live with this, but this is not as good as BOOST_CHECK ;o) One option I thought about is to introduce a mode where this new tool is named BOOST_CHECK and old BOOST_CHECK is renamed into BOOST_CHECK_...something.
II. New "data driven test case" subsystem BOOST_DATA_TEST_CASE( t1, data::xrange(4) ^ strs ^ data::random(), intval, str, dblval ) { MyObj obj( dblval, str );
BOOST_CHECKA( obj.goo() == intval ); }
Above test case will be executed 4 times with different values of parameters intval, str, and dblval. Yes, this will be very useful.
Which will be the values for intval? is the index of the tuple?
The sample tuple is hidden, and what you get is an actual argument with the specified name. So intval is going to be an actual value of type int const&.
What about adding something like a product test case that will execute the test with the Cartesian product? Is this already available.
Yes. This is what grid is for.
III. Auto-registered test unit decorators.
Previously it was not possible and/or convenient to assign attributes to the automatically registered test units. To alleviate this I've introduced a notion of test unit decorator. These can be "attached" to any test unit similarly to how it is done in other languages. Could you give some references?
Python for example.
expected_failures - set expected failures for a test unit
Could you give an example showing the utility of this decorator?
This is the same as an existing interface, but applied to auto test cases. You are just telling the framework the number of assertion failures to expect in a test unit, so these can be "ignored".
description - sets a test unit description depends_on - sets a test unit dependency
What do you mean by depends_on? Is the execution of the test subject to the success of its dependencies or whether the test is enabled/disabled?
One test unit can depend on another one. For example, it makes no sense to test access methods if the construction test case failed. Thus you might want to introduce a dependency of the former on the latter, and if the construction test failed, the second test is going to be skipped.
Attachment of decorator is facilitated by using of BOOST_TEST_DECORATOR. Note that you can use any of '+', '-', '*' symbols to attach decorator (and any number of '*'): Is there a difference between them? and if not why do you provide all of them?
Purely aesthetic. I did not know what users would prefer, and there were no problems maintaining all of them.
Could the following be provided as well?
BOOST_CHECKA( foo(a) == 0, "a=" << a );
BOOST_CHECK_MESSAGE does that already, but you might have a point. I can use a variadic interface to combine them together. The only thing is that Paul insists on treating an empty list as a non-empty one, and I'll need to jump through some hoops to implement it.
Wow, you have added a lot of new interesting features. I will check the documentation and start using it as soon as possible.
I am only starting to work on docs for these. Feel free to try it and ask any questions here meanwhile. Gennadiy

So here we go:
I. New testing tool BOOST_CHECKA I like this a lot. How does it work?
A bit of expression template magic, plus some c++11 for advanced type deduction.
What about BOOST_TEST_CHECK (see above)?
Again this is 2 verbs stuck together. I can live with this, but this is not as good as BOOST_CHECK ;o)
One option I thought about is to introduce a mode where this new tool is named BOOST_CHECK and old BOOST_CHECK is renamed into BOOST_CHECK_...something.
Sounds like a good option. It won't break anything, will it? -Thorsten

Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
One option I thought about is to introduce a mode where this new tool is named BOOST_CHECK and old BOOST_CHECK is renamed into BOOST_CHECK_...something.
Sounds like a good option. It won't break anything, will it?
Unfortunately it might. Two glaring examples which are not supported are: BOOST_CHECKA(a || b); BOOST_CHECKA(a && b); The following would work: BOOST_CHECKA((a||b)); BOOST_CHECKA(a|b); The ternary operator does not work either. Otherwise I could have just replaced the implementation and been done with it.

Gennadiy Rozenal <rogeeff@gmail.com> writes:
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
Attachment of decorator is facilitated by using of BOOST_TEST_DECORATOR. Note that you can use any of '+', '-', '*' symbols to attach decorator (and any number of '*'): Is there a difference between them? and if not why do you provide all of them?
Only esthetic. I did not know what users would prefer and there were no problems maintaining all of them.
I'd encourage you to choose just one. Most importantly to prevent confusion when someone who is used to one symbol reads tests written by someone else and doesn't realise the different symbol they use means the same thing. Also, it leaves further symbols available should you ever want to use one to mean something else. Out of the three, my favourite is '+'. '-' looks like you're removing something and '***' looks like a comment at first glance. Another character that would be ideal is '@', which Python and Java use, but if your implementation is based on operators, I guess that one isn't possible. Alex

Alexander Lamaison <awl03 <at> doc.ic.ac.uk> writes:
Out of the three, my favourite is '+'. '-' looks like you're removing something and '***' looks like a comment at first glance.
Let's see if anyone else express preference in this regard.
Another character that would be ideal is '@' which Python and Java use but, if your implementation is based on operators, I guess that one isn't possible.
Yep. I'd like that as well. Gennadiy

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
Hi,
thanks for taking the time to make this ready for release.
I would appreciate if all the new Boost.Test macros start by BOOST_TEST_ I think the train has already left on this one. Plus, unlike many other macros which you use here and there, testing tools are used quite extensively as part of any test module. Making them longer would be an unwelcome change. I do try to use BOOST_TEST as a prefix whenever it makes sense. In this case I'd rather strive for a concise name. I disagree. Using BOOST_TEST_ as a prefix, even if longer, makes the code clearer. This is one of the Boost rules and any Boost library should follow them. and you provide the equivalent macros for existing ones that don't follow this pattern. I understand that you want short names, but avoiding possible collisions is better than having short names. I could also accept BOOST_TF_ as Boost Test Framework or BOOST_TL_ as Boost Test Library. All these abbreviations hurt my head a bit. There are a bunch of new libraries which are even referred to by abbreviation. Sometimes they are fine, and if there is a consensus that this is the best option, I can live with it, but my preference would be something less cryptic. It is up to you to choose the prefix.
So here we go:
I. New testing tool BOOST_CHECKA I like this a lot. How it works? A bit of expression template magic, plus some c++11 for advanced type deduction. Thanks, this helps me a lot :)
What about BOOST_TEST_CHECK (see above)? Again this is 2 verbs stuck together. I can live with this, but this is not as good as BOOST_CHECK ;o) Note that in BOOST_TEST_, TEST is not a verb, it is just the name of the library.
Le 22/10/12 02:55, Gennadiy Rozenal a écrit :
One option I thought about is to introduce a mode where this new tool is named BOOST_CHECK and old BOOST_CHECK is renamed into BOOST_CHECK_...something.
If you decide to change, the library prefix will be best choice ;-)
II. New "data driven test case" subsystem BOOST_DATA_TEST_CASE( t1, data::xrange(4) ^ strs ^ data::random(), intval, str, dblval ) { MyObj obj( dblval, str );
BOOST_CHECKA( obj.goo() == intval ); }
Above test case will be executed 4 times with different values of parameters intval, str, and dblval. Yes, this will be very useful.
Which will be the values for intval? is the index of the tuple? The sample tuple is hidden and what you get an an actual argument with specified name. So intval is going to be an actual value of type int const&
I see it now; I hadn't understood that data::xrange(4) was a range generator. BTW, why do you prefix it with x?
What about adding something like a product test case that will execute the test with the Cartesian product? Is this already available? Yes. This is what grid is for. Could I find grid in the documentation?
III. Auto-registered test unit decorators.
Previously it was not possible and/or convenient to assign attributes to the automatically registered test units. To alleviate this I've introduced a notion of test unit decorator. These can be "attached" to any test unit similarly to how it is done in other languages. Could you give some references? Python for example. Thanks.
expected_failures - set expected failures for a test unit
Could you give an example showing the utility of this decorator? This is the same as an existing interface, but applied to auto test cases. You are just telling the framework the number of assertion failures to expect in a test unit, so these can be "ignored". My question is: why would the user want to check the failures? S/he can just change the assertion, no? Does the test succeed if there are fewer failures than expected?
description - sets a test unit description depends_on - sets a test unit dependency
What do you mean by depends_on? Is the execution of the test subject to the success of its dependencies or whether the test is enabled/disabled?
One test unit can depend on another one. For example, it makes no sense to test access methods if the construction test case failed. Thus you might want to introduce a dependency of the former on the latter, and if the construction test failed, the second test is going to be skipped. Thanks, now it is clear.
Attachment of decorator is facilitated by using of BOOST_TEST_DECORATOR. Note that you can use any of '+', '-', '*' symbols to attach decorator (and any number of '*'): Is there a difference between them? and if not why do you provide all of them? Only esthetic. I did not know what users would prefer and there were no problems maintaining all of them. Humm ...
Could the following be provided as well?
BOOST_CHECKA( foo(a) == 0, "a=" << a ); BOOST_CHECK_MESSAGE does that already, but you might have a point. I can use a variadic interface to combine them together. The only thing is that Paul insists on treating an empty list as a non-empty one, and I'll need to jump through some hoops to implement it. It would be great if you manage to provide it.
Wow, you have added a lot of new interesting features. I will check the documentation and start using it as soon as possible. I am only starting working on docs for these. Feel free to try and ask any questions here meanwhile.
Best, Vicente

On 22 October 2012 20:54, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
I disagree. Using BOOST_TEST_ as prefix even if longer make the code clearest. This is one of the Boost rules and any Boost library should follow them.
It isn't, the rule is:
Macro (gasp!) names all uppercase and begin with BOOST_.
That's from http://www.boost.org/development/requirements.html

Le 22/10/12 22:20, Daniel James a écrit :
On 22 October 2012 20:54, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
I disagree. Using BOOST_TEST_ as prefix even if longer make the code clearest. This is one of the Boost rules and any Boost library should follow them. It isn't, the rule is:
Macro (gasp!) names all uppercase and begin with BOOST_. That's from http://www.boost.org/development/requirements.html
You are right, but how can we ensure that two independent Boost libraries don't deliver the same macro? Vicente

On 10/22/2012 6:23 PM, Vicente J. Botet Escriba wrote:
Le 22/10/12 22:20, Daniel James a écrit :
On 22 October 2012 20:54, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
I disagree. Using BOOST_TEST_ as prefix even if longer make the code clearest. This is one of the Boost rules and any Boost library should follow them. It isn't, the rule is:
Macro (gasp!) names all uppercase and begin with BOOST_. That's from http://www.boost.org/development/requirements.html
You are right, but how we can ensure that two independent Boost libraries don't deliver the same macro?
I agree in general. We need something following BOOST_ in every library which is distinguishable from another library when macros are used. As an example the preprocessor library starts all macros with BOOST_PP_. My TTI library starts all macros with BOOST_TTI_. Not doing something like this will create a nightmare, which can only be relieved by very clever use of #define and #undef, if there occur the same names following BOOST_ in two libraries.

On 10/22/2012 7:52 PM, Edward Diener wrote:
On 10/22/2012 6:23 PM, Vicente J. Botet Escriba wrote:
You are right, but how we can ensure that two independent Boost libraries don't deliver the same macro?
I agree in general. We need something following BOOST_ in every library which is distinguishable from another library when macros are used. As an example the preprocessor library starts all macros with BOOST_PP_. My TTI library starts all macros with BOOST_TTI_. Not doing something like this will create a nightmare, which can only be relieved by very clever use of #define and #undef, if there occur the same names following BOOST_ in two libraries.
Not that I'm advocating this, but one could just include the entire set of Boost libraries all at once in a regression test to detect conflicting definitions. Personally, I think the boost namespace is too full already. In header declarations, you can't have using boost::abc, so you end up qualifying everything as with boost::abc::xyz which is really tedious and only "special" libraries get a plain boost::xyz. I'd rather just have abc::xyz which I think results in better modularization. In that sense, aside from a community and vetting process, Boost becomes a library distribution mechanism rather than a supposedly coherent framework. Regards, Paul Mensonides

On Mon, Oct 22, 2012 at 21:54:59 +0200, Vicente J. Botet Escriba wrote:
Le 22/10/12 02:55, Gennadiy Rozenal a écrit :
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
expected_failures - set expected failures for a test unit
Could you give an example showing the utility of this decorator? This is the same as an existing interface, but applied to auto test cases. You are just telling the framework the number of assertion failures to expect in a test unit, so these can be "ignored". My question is: why would the user want to check the failures? S/he can just change the assertion, no? Does the test succeed if there are fewer failures than expected?
Expected failures are for the case where the test is implemented, but the functionality is buggy. So you don't want to invert the assertion, you want to mark that you know that the assertion isn't satisfied so you notice when some other test fails. The tests succeed when the tests that were expected to fail pass. It would be nice if it printed some extra diagnostics (unexpected success), but I think it currently only prints number of failures and how many of them were expected, so the unexpected success is not so easy to notice. -- Jan 'Bulb' Hudec <bulb@ucw.cz>

Jan Hudec <bulb <at> ucw.cz> writes:
The tests succeed when the tests that were expected to fail pass. It would be nice if it printed some extra diagnostics (unexpected success), but I think it currently only prints number of failures and how many of them were expected, so the unexpected success is not so easy to notice.
If you set log level to "message" it will report this incident. Gennadiy

2012/10/20 Gennadiy Rozenal <rogeeff@gmail.com>:
BOOST_CHECKA( var1 - var2 >= 12 );
And the output will include as much information as we can get:
error: in "foo": check var1 - var2 >= 12 failed [23-15<12]
Great feature, but if it uses the same error reporting mechanism as before it is totally useless (see ticket #7046). 512 bytes is not enough for such reports; please base the error report buffer on std::string or some other class that can auto-extend its size. -- Best regards, Antony Polukhin

Antony Polukhin <antoshkka <at> gmail.com> writes:
2012/10/20 Gennadiy Rozenal <rogeeff <at> gmail.com>:
BOOST_CHECKA( var1 - var2 >= 12 ); Great feature, but if it uses same error reporting mechanism as before it is totally useless (see ticket #7046). 512 bytes is not enough for such reports, please base error report buffer on std::string or some other class, that can auto extend it`s size.
I know everyone has their own personal favorite ticket, but how does this particular one have anything to do with this tool? This tool does not report exceptions. And I wouldn't call the existing exception reporting mechanism totally useless either, because it works fine for many, many scenarios. We'll see how the one in the ticket can be addressed as well. Gennadiy

On Oct 19, 2012, at 11:03 PM, Gennadiy Rozenal wrote:
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well.
Excellent stuff! Thanks for this work. I have a couple of comments. First, __please__ pay attention to documentation. I know you have said that documentation isn't something you like/want to do, but a library without good documentation is useful only to the writer. In other words, it is (in a general purpose case like this) simply a waste of time. Second, I have below suggested some terminology changes. (Not for BOOST_CHECKA -- you have plenty of input on that.) Some believe that a name is only a name, but I disagree. Choosing names carefully is, in my opinion, a powerful means of communication and documentation.
II. New "data driven test case" subsystem
<<snip>>
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3}; int b[] = {7,8,9}; data::make(a) + data::make(b) - dataset with 6 integer values
This should be called "concatenation", not "joining". People with a database background will expect something called "join" to increase the arity of the result.
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
Calling this "zipping" is odd (at least to me). Makes it sound like a compression facility. Perhaps "tupling" would be better. I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation? I would choose bitwise-or "|" because that is sometimes used as a flat-file column delimiter. (Actually, my favorite choice would be the comma operator to go along with the "tupling" terminology, but who am I to defy Scott Meyers' More Effective C++, Item 7?)
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly. Also, I think you mean that the arity is the *product* of the argument dataset arities. Thanks, Ian Emmons

On Fri, Oct 26, 2012 at 5:41 AM, Ian Emmons <iemmons@bbn.com> wrote:
On Oct 19, 2012, at 11:03 PM, Gennadiy Rozenal wrote:
It's been a long while since I merged any changes into boost release and by now there are whole bunch of new features there. Some are small, but some others are big ones such that they can change the way people use the library. I'd like to list them here (excluding bug fixes and removal of deprecated interfaces) and ask what should we do with these. Specifically let me know if you think these require some kind of (mini) review from community, but any other comments are welcome as well.
Excellent stuff! Thanks for this work. I have a couple of comments.
I haven't been following too closely the proposed Boost.Test changes, but I have some comments on your comments... :)
First, __please__ pay attention to documentation. I know you have said that documentation isn't something you like/want to do, but a library without good documentation is useful only to the writer. In other words, it is (in a general purpose case like this) simply a waste of time.
+1
Second, I have below suggested some terminology changes. (Not for BOOST_CHECKA -- you have plenty of input on that.) Some believe that a name is only a name, but I disagree. Choosing names carefully is, in my opinion, a powerful means of communication and documentation.
+1
II. New "data driven test case" subsystem
<<snip>>
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3}; int b[] = {7,8,9}; data::make(a) + data::make(b) - dataset with 6 integer values
This should be called "concatenation", not "joining". People with a database background will expect something called "join" to increase the arity of the result.
"Join" is actually the terminology used by Boost.Range [1], so I think it's entirely appropriate here, even given the many uses of join (I mean, one could also argue that people with a math background might expect [2], but I don't think that precludes the use of "join" in this context). That said, "concatenation" sounds about as good.
e) zip - dataset constructed by zipping 2 datasets of the same size, but not
necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
Calling this "zipping" is odd (at least to me). Makes it sound like a compression facility. Perhaps "tupling" would be better.
Again, Boost.Iterator has adopted "zip" [3] to mean precisely this, so I think it's entirely appropriate. I don't know how zip conveys compression other than in reference to the zip compression format, and...really, to be honest, I'm not sure why "zip" was chosen for *that* name in the first place. "Tupling" has ambiguity with feature f) below, which also produces a dataset of tuples, but in a different way.
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation? I would choose bitwise-or "|" because that is sometimes used as a flat-file column delimiter. (Actually, my favorite choice would be the comma operator to go along with the "tupling" terminology, but who am I to defy Scott Meyers' More Effective C++, Item 7?)
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Uh, this is Cartesian product [4], not a cross product [5], as far as I'm concerned. Cartesian product > grid > cross product.
- Jeff
[1] http://www.boost.org/doc/libs/1_51_0/libs/range/doc/html/range/reference/uti...
[2] http://en.wikipedia.org/wiki/Join_%28mathematics%29
[3] http://www.boost.org/doc/libs/1_51_0/libs/iterator/doc/zip_iterator.html
[4] http://en.wikipedia.org/wiki/Cartesian_product
[5] http://en.wikipedia.org/wiki/Cross_product_%28disambiguation%29

Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung <at> gmail.com> writes:
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Uh, this is Cartesian product [4], not a cross product [5], as far as I'm concerned. Cartesian product > grid > cross product.
Cartesian product is probably the right name, but it sounds too formal to me personally. I'd like this to be clear in "layman" terms. I think of the grid dataset as nodes on an N-dimensional grid, where each dimension represents one of the datasets we are "multiplying". Each node of the grid is then a sample in our grid dataset. Gennadiy

Gennadiy Rozenal <rogeeff@gmail.com> writes:
Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung <at> gmail.com> writes:
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Uh, this is Cartesian product [4], not a cross product [5], as far as I'm concerned. Cartesian product > grid > cross product.
Cartesian product is probably the right name, but it sounds too formal to me personally. I'd like this to be clear in "layman" terms.
Even the laymen here will have sat through (and passed) an intermediate school maths course so will know what a cartesian product is. Grid, on the other hand, could mean all sorts of things so ends up meaning nothing. Alex

on Sun Nov 04 2012, Alexander Lamaison <awl03-AT-doc.ic.ac.uk> wrote:
Gennadiy Rozenal <rogeeff@gmail.com> writes:
Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung <at> gmail.com> writes:
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Uh, this is Cartesian product [4], not a cross product [5], as far as I'm concerned. Cartesian product > grid > cross product.
Cartesian product is probably the right name, but it sounds too formal to me personally. I'd like this to be clear in "layman" terms.
Even the laymen here will have sat through (and passed) an intermediate school maths course so will know what a cartesian product is. Grid, on the other hand, could mean all sorts of things so ends up meaning nothing.
When understanding matters, a term for which web searching will produce a precise and unambiguous definition beats a casual but fuzzy "layman" term every single time.
-- Dave Abrahams
BoostPro Computing
Software Development Training
http://www.boostpro.com
Clang/LLVM/EDG Compilers  C++  Boost

On 5. Nov 2012 Dave Abrahams wrote:
on Sun Nov 04 2012, Alexander Lamaison wrote:
Gennadiy Rozenal writes:
Jeffrey Lee Hellrung, Jr. writes:
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Uh, this is Cartesian product [4], not a cross product [5], as far as I'm concerned. Cartesian product > grid > cross product.
Cartesian product is probably the right name, but it sounds too formal to me personally. I'd like this to be clear in "layman" terms.
Even the laymen here will have sat through (and passed) an intermediate school maths course so will know what a cartesian product is. Grid, on the other hand, could mean all sorts of things so ends up meaning nothing.
When understanding matters, a term for which web searching will produce a precise and unambiguous definition beats a casual but fuzzy "layman" term every single time.
How about "table"? Seems very intuitive to me... -Julian

On Oct 26, 2012, at 8:41 AM, Ian Emmons <iemmons@bbn.com> wrote:
On Oct 19, 2012, at 11:03 PM, Gennadiy Rozenal wrote:
First, __please__ pay attention to documentation. I know you have said that documentation isn't something you like/want to do, but a library without good documentation is useful only to the writer. In other words, it is (in a general purpose case like this) simply a waste of time.
He didn't suggest that there would be no documentation; only that he hadn't done it and would like help.
Some believe that a name is only a name, but I disagree. Choosing names carefully is, in my opinion, a powerful means of communication and documentation.
+1
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3}; int b[] = {7,8,9}; data::make(a) + data::make(b) - dataset with 6 integer values
This should be called "concatenation", not "joining". People with a database background will expect something called "join" to increase the arity of the result.
Be careful about generalizing from the specific.
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
Calling this "zipping" is odd (at least to me). Makes it sound like a compression facility. Perhaps "tupling" would be better.
Tupling sounds very odd to me. "Zipping" is well established for merging of this sort.
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation?
The circumflex has two tails that merge at the top.
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Not only is "cross product" wrong, but you should not call the OP for having a different idea than you. ___ Rob

on Fri Oct 26 2012, Rob Stewart <robertstewart-AT-comcast.net> wrote:
On Oct 26, 2012, at 8:41 AM, Ian Emmons <iemmons@bbn.com> wrote:
On Oct 19, 2012, at 11:03 PM, Gennadiy Rozenal wrote:
First, __please__ pay attention to documentation. I know you have said that documentation isn't something you like/want to do, but a library without good documentation is useful only to the writer. In other words, it is (in a general purpose case like this) simply a waste of time.
He didn't suggest that there would be no documentation; only that he hadn't done it and would like help.
But if history is any guide, it might not get done if that help doesn't materialize. I don't think it's out-of-line for Ian to ask Gennadiy to be responsible for it.
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3}; int b[] = {7,8,9}; data::make(a) + data::make(b) - dataset with 6 integer values
This should be called "concatenation", not "joining". People with a database background will expect something called "join" to increase the arity of the result.
Be careful about generalizing from the specific.
Maybe so, but I agree that one should use the already accepted terms for operations. "Concatenate" is the appropriate word here.
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
Calling this "zipping" is odd (at least to me). Makes it sound like a compression facility. Perhaps "tupling" would be better.
Tupling sounds very odd to me. "Zipping" is well established for merging of this sort.
Yes, "zip" is a well-established concept.
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation?
The circumflex has two tails that merge at the top.
:-) by that measure, >= would be a good choice too. That said, I have no problem with the circumflex.
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Not only is "cross product" wrong, but you should not call the OP for having a different idea than you.
From what's written here, it's hard to know whether "cross product" is wrong or not, but given the number of samples cited, I'm inclined to believe it's probably right. What result other than the cross product do you think that means?
-- Dave Abrahams

On Sat, Oct 27, 2012 at 19:22:23 -0400, Dave Abrahams wrote:
on Fri Oct 26 2012, Rob Stewart <robertstewart-AT-comcast.net> wrote:
On Oct 26, 2012, at 8:41 AM, Ian Emmons <iemmons@bbn.com> wrote:
On Oct 19, 2012, at 11:03 PM, Gennadiy Rozenal wrote:
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Not only is "cross product" wrong, but you should not call the OP for having a different idea than you.
From what's written here, it's hard to know whether "cross product" is wrong or not, but given the number of samples cited, I'm inclined to believe it's probably right. What result other than the cross product do you think that means?
Any operation that is written with × is called a cross product. However, that symbol is used for different operations in different contexts. The two best-known meanings are the "vector product" used in physics and the "Cartesian product" used in set theory and relational algebra. The kind of product we have here, a set of tuples whose first member comes from the first operand and whose second member comes from the second operand, is unambiguously called a "Cartesian product". So I would probably call it "Cartesian product", but "cross product" would not be incorrect. -- Jan 'Bulb' Hudec <bulb@ucw.cz>

On 28-10-2012 17:36, Jan Hudec wrote:
From what's written here, it's hard to know whether "cross product" is wrong or not, but given the number of samples cited, I'm inclined to believe it's probably right. What result other than the cross product do you think that means?
Any operation that is written with × is called cross product. However that symbol is used for different operations in different contexts. Best known are two meanings, the "vector product" used in physics and "cartesian product" used in set theory and relational algebra.
The kind of product we have here, set of tuples where first member is member of first operand and second member is member of second operand, is unambiguously called "cartesian product".
So I would probably call it "cartesian product", but "cross product" would not be incorrect.
+1 for Cartesian product. For me at least, "cross product" sounded very confusing. -Thorsten

On Oct 29, 2012, at 6:12 AM, Thorsten Ottosen wrote:
On 28-10-2012 17:36, Jan Hudec wrote:
From what's written here, it's hard to know whether "cross product" is wrong or not, but given the number of samples cited, I'm inclined to believe it's probably right. What result other than the cross product do you think that means?
Any operation that is written with × is called cross product. However that symbol is used for different operations in different contexts. Best known are two meanings, the "vector product" used in physics and "cartesian product" used in set theory and relational algebra.
The kind of product we have here, set of tuples where first member is member of first operand and second member is member of second operand, is unambiguously called "cartesian product".
So I would probably call it "cartesian product", but "cross product" would not be incorrect.
+1 for Cartesian product. For me at least, "cross product" sounded very confusing.
My apologies -- I meant to say "Cartesian", not "cross". Database people do use the latter to mean the former, but it's ambiguous and clearly a violation of my desire to choose names carefully.

on Sun Oct 28 2012, Jan Hudec <bulb-AT-ucw.cz> wrote:
The kind of product we have here, set of tuples where first member is member of first operand and second member is member of second operand, is unambiguously called "cartesian product".
So I would probably call it "cartesian product", but "cross product" would not be incorrect.
+1: Cartesian product is better. -- Dave Abrahams

On Oct 27, 2012, at 7:22 PM, Dave Abrahams wrote:
On Fri Oct 26 2012, Rob Stewart <robertstewart-AT-comcast.net> wrote:
On Oct 26, 2012, at 8:41 AM, Ian Emmons <iemmons@bbn.com> wrote:
On Oct 19, 2012, at 11:03 PM, Gennadiy Rozenal wrote:
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
Calling this "zipping" is odd (at least to me). Makes it sound like a compression facility. Perhaps "tupling" would be better.
Tupling sounds very odd to me. "Zipping" is well established for merging of this sort.
Yes, "zip" is a well-established concept.
Great -- "zip" it is. (One more thing to add to the long list of stuff I've never heard of.)
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation?
The circumflex has two tails that merge at the top.
:-) by that measure, >= would be a good choice too. That said, I have no problem with the circumflex.
Agreed -- I like the comparison to an upside-down zipper. Perhaps mentioning that in the docs would be a useful thing.

On Oct 27, 2012, at 7:22 PM, Dave Abrahams <dave@boostpro.com> wrote:
on Fri Oct 26 2012, Rob Stewart <robertstewart-AT-comcast.net> wrote:
On Oct 26, 2012, at 8:41 AM, Ian Emmons <iemmons@bbn.com> wrote:
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation?
The circumflex has two tails that merge at the top.
:-) by that measure, >= would be a good choice too.
:)
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly.
Not only is "cross product" wrong, but you should not call the OP for having a different idea than you.
I meant "call the OP silly" of course.
From what's written here, it's hard to know whether "cross product" is wrong or not, but given the number of samples cited, I'm inclined to believe it's probably right. What result other than the cross product do you think that means?
I think others have addressed this thoroughly. ___ Rob

Dave Abrahams <dave <at> boostpro.com> writes:
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation?
The circumflex has two tails that merge at the top.
:-) by that measure, >= would be a good choice too. That said, I have no problem with the circumflex.
operator ^ is preferable because it is symmetric (unlike >=). Gennadiy

Ian Emmons <iemmons <at> bbn.com> writes:
First, __please__ pay attention to documentation. I know you have said that documentation isn't something you like/want to do
I do not believe I ever said something like this (not that I am a big fan of it). And I do want to do docs for all the new features.
II. New "data driven test case" subsystem
<<snip>>
d) join - dataset constructed by joining 2 datasets of the same type
int a[] = {1,2,3}; int b[] = {7,8,9}; data::make(a) + data::make(b) - dataset with 6 integer values
This should be called "concatenation", not "joining". People with a database background will expect something called "join" to increase the arity of the result.
I believe "join" is used for this kind of operation quite commonly as well, but if there is a general consensus that "concat" is better I can rename it. Keep in mind that in practice it only affects the name of the header file for users of this feature (and may not even be that, if you include some "union" headers, which automatically include all necessary headers).
e) zip - dataset constructed by zipping 2 datasets of the same size, but not necessarily the same type
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd", "zxc"};
data::make(a) ^ data::make(b) dataset with 3 samples which are pairs of int and char*.
Calling this "zipping" is odd (at least to me). Makes it sound like a compression facility. Perhaps "tupling" would be better.
Zipping is a well-established term for this kind of operation. Tupling does not sound good IMO.
I also think the choice of operator here is not ideal. How does the xor operator evoke any notion of this operation? I would choose bitwise-or "|" because that is sometimes used as a flat-file column delimiter. (Actually, my favorite choice would be the comma operator to go along with the "tupling" terminology, but who am I to defy Scott Meyers' More Effective C++, Item 7?)
My closest analogy is a zipper, which zips 2 sides together in everyday objects. The ^ operator resembles this most closely (merging left and right together into something united).
f) grid - dataset constructed by "multiplying" 2 datasets of different sizes and types
This dataset has an arity which is sum of argument dataset arities.
int a[] = {1,2,3}; char* b[] = {"qwe", "asd"}; double c[] = {1.1, 2.2};
data::make(a) * data::make(b) * data::make(c) dataset with 12 samples which are tuples of int and char* and double.
For people with a database background, "cross product" is the obvious name for this. Calling it anything else is silly. Also, I think you mean that the arity is the *product* of the argument dataset arities.
By arity of the dataset I mean the arity of the samples inside of it, and that arity is indeed the sum of the argument arities. The size of the grid dataset is indeed the product of the sizes. Gennadiy

On 20-10-2012 05:03, Gennadiy Rozenal wrote:
IX. Number of smaller improvements:
* Notion of framework shutdown. Allows to eliminate some fake memory leaks
* Added checkpoints at fixture entry points, test case entry point and test case exit point for auto registered test cases
* New FPE portable interfaces introduced and FPE handling is separated from system errors handling. You can detect FPE even if catch_system_error is false
* Added an ability to erase registered exception translator
* execution_monitor: new interface vexecute - to be used to monitor nullary functions with no result values
* test_tree_visitor interface extended to facilitate visitors applying the same action to all test units
* execution_monitor use typeid to report "real" exception type if possible
* New ability to redirect leaks report into a file
That seems like a lot of nice improvements. However, there is one thing that could save a lot of time for us, and that's the ability to run a specific test as the first one. Is that hard to add? kind regards Thorsten

on Mon Oct 29 2012, Thorsten Ottosen <thorsten.ottosen-AT-dezide.com> wrote:
On 20-10-2012 05:03, Gennadiy Rozenal wrote:
IX. Number of smaller improvements:
* Notion of framework shutdown. Allows to eliminate some fake memory leaks
* Added checkpoints at fixture entry points, test case entry point and test case exit point for auto registered test cases
* New FPE portable interfaces introduced and FPE handling is separated from system errors handling. You can detect FPE even if catch_system_error is false
* Added an ability to erase registered exception translator
* execution_monitor: new interface vexecute - to be used to monitor nullary functions with no result values
* test_tree_visitor interface extended to facilitate visitors applying the same action to all test units
* execution_monitor use typeid to report "real" exception type if possible
* New ability to redirect leaks report into a file
That seems like a lot of nice improvements. However, there is one thing that could save a lot of time for us, and that's the ability to run a specific test as the first one.
Is that hard to add?
What I always wanted was something that would automatically run all the tests that failed first and schedule all the tests that succeeded last, after any that didn't run at all. Obviously that requires keeping some state around between runs. -- Dave Abrahams

Dave Abrahams <dave <at> boostpro.com> writes:
What I always wanted was something that would automatically run all the tests that failed first and schedule all the tests that succeeded last, after any that didn't run at all. Obviously that requires keeping some state around between runs.
With a little effort of maintaining the state somewhere we can add 2 command line arguments:

test.exe --save_state=<location>
test.exe --run=@failing
test.exe --run=!@failing

The second (--run) is an existing command line argument, but we'll need to add handling for the special "@failing" label. Gennadiy

Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
That seems like a lot of nice improvements. However, there is one thing that could save a lot of time for us, and that's the ability to run a specific test as the first one.
Is that hard to add?
You can always run that test by name and do a second run which skips that test case:

test.exe --run=my/special/test
test.exe --run=!my/special/test

More generically, you can attach a label to some specific test units and run them exclusively. Direct support for priorities within a single test tree pass may not be trivial. Gennadiy

On 29-10-2012 18:33, Gennadiy Rozental wrote:
Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
That seems like a lot of nice improvements. However, there is one thing that could save a lot of time for us, and that's the ability to run a specific test as the first one.
Is that hard to add?
You can always run that test by name and do a second run which skips that test case:
test.exe --run=my/special/test
test.exe --run=!my/special/test
More generically you can attach label to some specific test units and run them exclusively.
Direct support within single test tree pass of priorities may not be trivial.
Ok. Maybe I should define better what I have found a need for (it would save tons of time, FWIW). When I said "test" I meant a single test case, as defined by BOOST_AUTO_TEST_CASE. If "test" means a test suite as defined by BOOST_AUTO_TEST_SUITE, I guess I can get most of what I want by making a dummy test suite and moving my problematic code to that suite while debugging. regards -Thorsten

Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
Ok. Maybe I should define better what I have found a need for (it would save tons of time, FWIW). When I said "test" I meant a single test case, like defined by BOOST_AUTO_TEST_CASE.
The answer is the same. You get what you need by running the test module twice: the first run includes only the test case you are interested in, and the second run excludes that same test case:

test.exe --run=testcase1
test.exe --run=!testcase1

Gennadiy

On 04-11-2012 10:48, Gennadiy Rozenal wrote:
Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
Ok. Maybe I should define better what I have found a need for (it would save tons of time, FWIW). When I said "test" I meant a single test case, like defined by BOOST_AUTO_TEST_CASE.
The answer is the same. You get what you need by running test module twice: first run you include only the test case you are interested in and second run excludes that same test case:
test.exe --run=testcase1
test.exe --run=!testcase1
Thanks! -Thorsten

On 29-10-2012 11:50, Thorsten Ottosen wrote:
That seems like a lot of nice improvements. However, there is one thing that could save a lot of time for us, and that's the ability to run a specific test as the first one.
Is that hard to add?
Another issue. It would be cool to be able to give access to private members in the unit test. So given

BOOST_AUTO_TEST_CASE( testMyClass ) { ... }

I would be able to test private functions with:

class MyClass
{
    friend void boost::test::testMyClass();
    // or
    friend class boost::test::access;
};

or something. Would this be possible? kind regards -Thorsten

Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
Another issue. It would be cool to be able to give access to private members in the unit test. So given
BOOST_AUTO_TEST_CASE( testMyClass ) { ... }
I would be able to test private functions with:
class MyClass
{
    friend void boost::test::testMyClass();
    // or
    friend class boost::test::access;
};
I believe you can just name struct testMyClass a friend of MyClass and that's it. If your test case is within a test suite, you'll need to mention that as well: test_suite::test_case_name. Gennadiy

On 04-11-2012 10:57, Gennadiy Rozenal wrote:
Thorsten Ottosen <thorsten.ottosen <at> dezide.com> writes:
Another issue. It would be cool to be able to give access to private members in the unit test. So given
BOOST_AUTO_TEST_CASE( testMyClass ) { ... }
I would be able to test private functions with:
class MyClass
{
    friend void boost::test::testMyClass();
    // or
    friend class boost::test::access;
};
I believe you can just name struct testMyClass a friend of MyClass and that's it. If your test case is within a test suite, you'll need to mention that as well: test_suite::test_case_name.
Ok. Great. -Thorsten
participants (21)
- Alexander Lamaison
- Antony Polukhin
- Daniel James
- Dave Abrahams
- David Abrahams
- Edward Diener
- Evgeny Panasyuk
- Gennadiy Rozenal
- Gennadiy Rozental
- Ian Emmons
- Jan Hudec
- Jeffrey Lee Hellrung, Jr.
- Julian Gonggrijp
- Lars Viklund
- Paul A. Bristow
- Paul Mensonides
- Rob Stewart
- Robert Ramey
- Steven Watanabe
- Thorsten Ottosen
- Vicente J. Botet Escriba