
John Maddock <boost.regex <at> virgin.net> writes:
* Clear separation between components (execution monitor, from unit test call framework, from testing macros). Ideally each would be a separate mini
Isn't it already the case?
There's duplication of source between the different component libraries (execution monitor, test monitor, unit test).
Really? Can you give an example? (The test monitor is not supported anymore, so it is out of the picture.)
Plus the headers seem to pull in a whole lot of stuff I never use
For example? Dependency is an interesting thing. For every one developer asking for fewer dependencies you'll see five asking for a better user experience, so that they can just include a single header and be done with it.
library if that's possible, with the executable linking against just what it needs and no more.
Isn't it already the case?
My gut feeling is that recent releases have got slower to compile and #include.
In "recent" releases Boost.Test did not change at all. In fact because of your complains I did not release it for like 3 years now (waiting for either boost moving to modularized setup or collecting all the changes in trunk so I can release them on one go and deal with some failures which might happened once).
* Easy debugging: if I step into a test case in the debugger the first thing I should see is *my code*. As it is I have to step in and out of dozens of Boost.Test functions before I get to my code. This one really annoys me.
I am not sure I follow. Set a breakpoint in your test case and it will stop right there. Do you set a breakpoint on the first line of main? Or do you mean something completely different?
No, I mean that if I break on a test case and then hit "step into" in the debugger I have to step through your code before I get to mine. So, for example, if I break on a BOOST_CHECK_CLOSE_FRACTION and then step, I hit:
[...]
Return and step - and finally hit my code!
1. Set a breakpoint in your own code.
2. Visual Studio has a mechanism for avoiding stepping into code you do not want to step through (I should probably include such a file in the docs; see the sketch below).
3. There is a valid reason for this extra code: it collects some context info (with no overhead) and makes sure the code under test is executed only once (hence the forwarding call).
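A minimal sketch of the kind of file this likely refers to, assuming Visual Studio 2012 or later: a .natstepfilter file tells the native debugger which functions to skip over when stepping. The boost::unit_test and boost::test_tools patterns are illustrative guesses, not a file shipped by Boost.Test.

    <?xml version="1.0" encoding="utf-8"?>
    <StepFilter xmlns="http://schemas.microsoft.com/vstudio/debugger/natstepfilter/2010">
      <!-- Skip Boost.Test's own frames when stepping in the debugger. -->
      <Function>
        <Name>boost::unit_test::.*</Name>
        <Action>NoStepInto</Action>
      </Function>
      <Function>
        <Name>boost::test_tools::.*</Name>
        <Action>NoStepInto</Action>
      </Function>
    </StepFilter>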
OK, I rechecked this, and BOOST_CHECK_CLOSE_FRACTION appears to have next to no overhead now - excellent!
* Exemplary error messages when things fail - Boost.Test has improved in this area, but IMO not enough.
Specifically?
Testing:
    double a = 1;
    double b = 2;
    BOOST_CHECK_CLOSE_FRACTION(a, b, 0.0);
Yields:
m:/data/boost/trunk/ide/libraries/scrap/scrap.cpp(41): error: in "test_main_caller( argc, argv )": difference{0} between a{1} and b{2} exceeds 0
Leaving aside the obvious bug (the difference is not zero!!), I would have printed this as something like:
m:/data/boost/trunk/ide/libraries/scrap/scrap.cpp(41): error: in "test_main_caller( argc, argv )": difference between a and b exceeds specified tolerance with:
    a          = 1.0
    b          = 2.0
    tolerance  = 0.0
    difference = 1.0
Which I'm sure will get mangled in email, but the idea is that the values are pretty printed so they all line up nicely - makes it much easier to see the problem compared to dumping them all on one line.
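For reference, a minimal sketch of the relative-difference check that BOOST_CHECK_CLOSE_FRACTION is documented to perform; this is a reconstruction assuming the "strong" comparison rule, not the actual Boost.Test source, and it shows why the difference reported for a = 1, b = 2 should be 1.0 rather than 0.

    #include <cmath>

    // Sketch only: compare the difference as a fraction of each operand and
    // require both fractions to be within tolerance (the "strong" check).
    bool close_fraction( double a, double b, double tolerance )
    {
        double diff          = std::fabs( a - b );
        double fraction_of_a = diff / std::fabs( a );  // 1.0 for a = 1, b = 2
        double fraction_of_b = diff / std::fabs( b );  // 0.5 for a = 1, b = 2
        return fraction_of_a <= tolerance && fraction_of_b <= tolerance;
    }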
There are different opinions on how the output should look. Some people prefer a detailed multiline description, some prefer consistent single-line output. We can probably provide an easier interface for output customization, so that you can use some simple plugins in your test modules to see it the way you like.
* An easy way to tell if the last test has failed, and/or an easy way to print auxiliary information when the last test has failed. This is primarily for testing in loops, when iterating over tabulated test data.
This is addressed by the trunk improvements. There are several new tools introduced to help with context specification.
Such as? Docs?
BOOST_TEST_INFO and BOOST_TEST_CONTEXT. There is a unit test for these in test_tools_test.cpp. They are used like this:

    BOOST_TEST_INFO( "info 1" );
    BOOST_TEST_INFO( "info 2" );
    BOOST_TEST_INFO( "info 3" );
    BOOST_CHECK( false );

    BOOST_TEST_CONTEXT( "some sticky context" ) {
        BOOST_CHECK( false );

        BOOST_TEST_INFO( "more context" );
        BOOST_CHECK( false );

        BOOST_TEST_INFO( "different subcontext" );
        BOOST_CHECK( false );
    }

The context is only reported if an error occurred. Docs are pending.
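To tie this back to the tabulated-data case above, a minimal self-contained sketch (not from the thread): it assumes BOOST_TEST_CONTEXT accepts a streamed message, and the twice() function and the table are made up for illustration.

    #define BOOST_TEST_MODULE context_example
    #include <boost/test/included/unit_test.hpp>

    // Hypothetical function under test.
    int twice( int x ) { return 2 * x; }

    BOOST_AUTO_TEST_CASE( tabulated_data )
    {
        struct row { int input; int expected; } const table[] = {
            { 1, 2 },
            { 3, 6 },
            { 5, 10 },
        };

        for( unsigned i = 0; i < sizeof(table)/sizeof(table[0]); ++i ) {
            // The context is attached to the checks inside the block and is
            // reported only when one of them fails, identifying the row.
            BOOST_TEST_CONTEXT( "table row " << i << ", input = " << table[i].input ) {
                BOOST_CHECK_EQUAL( twice( table[i].input ), table[i].expected );
            }
        }
    }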
* Ultra stable code. Exempting bug fixes, I'd like to see a testing library almost never change, or only change after very careful consideration, for example if a new C++ language feature requires special testing support.
The Test library, like any other library, has users, bugs, feature requests, etc. It has a life of its own. It does indeed need to be maintained more carefully than other libs, but:
1. Proper component dependency helps. Your library needs to be built against a released version of the Test library (even on trunk). This way the Test library can do its own development in parallel.
The way Boost works at present is that if Trunk breaks, then so does my stuff in Trunk: as a result I can no longer see whether I'm able to merge to release or not if Boost.Test is broken on Trunk. Unfortunately this has happened to me a few times now.
Test library aside, are your expectations that either a) none of your dependencies ever change, b) any change in any revision will work on every platform, or c) your dependencies can only change when you are not doing your own development?
Again: proper component dependency. Depending on the trunk version of your dependencies is the root cause of the issue here. One library should depend on a specific released version of another library, e.g. A.deps = B:1.2.3.
And again, that's not how Boost testing currently works, no matter what you may wish.
Well, in this case we need to be a bit more patient when a dependent library is changing.
The issue here is that breaking changes should not be made to trunk without checking to see what else in Boost is depending on those features. As you know, that didn't happen with the last major commit.
I believe you overstate the issue. If there were any breakages, they were addressed within a couple of testing cycles.

Gennadiy