What Should We Do About Boost.Test?

Hi All,

I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess. It contains loads of features that are exercised in the examples/ directory but neither included in any of the tests nor documented. There are facilities for command-line argument parsing! There are "decorators" that turn on/off features for test cases. There is support for mock objects! These are cool and sometimes necessary features, but who knew?

The third tutorial page (http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/tutorials/new-year-r...) has a glaring typo in the code examples: "BOOST_AUTO_EST_CASE". There's no reference manual at all. There are nearly-identical files in the examples/ directory called "est_example1.cpp" and "test_example1.cpp" (Did the "t" key on someone's keyboard break?) I could go on, but where would I stop?

I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like, as long as the code is incompletely / incorrectly documented and tested, it's just someone's personal coding project that we happen to keep shipping with Boost, and not really a library for general use. This situation reflects poorly on Boost as a whole and the fact that it centers around a _testing_ library, which is concerned with robustness... well, let's just say that the irony isn't lost on me.

I don't mean this posting as an attack on Gennadiy in any way, but I think the situation is unacceptable and therefore am opening a discussion about what should happen. As a straw man, I'll make this suggestion:

- Boost.Test is officially deprecated in the next release
- Its documentation, such as it is, is removed from the release after that
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
- The code is removed from Boost thereafter

I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do. (I seriously thought about volunteering for this myself, but realistically speaking, I don't have the time, and volunteering for something you can't actually do is worse than not volunteering at all.) Even if volunteers show up, I'd suggest proceeding with the plan above, subject to reversal at any time the work actually gets done.

Thoughts?

--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost
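(For readers following along: the macro the tutorial presumably intended is BOOST_AUTO_TEST_CASE. A minimal self-registering test module, using the single-header usage variant, might look like the sketch below; the module name, test name, and check are made up for illustration.)

    // Minimal Boost.Test module with one auto-registered test case.
    #define BOOST_TEST_MODULE my_module
    #include <boost/test/included/unit_test.hpp>  // single-header usage variant

    BOOST_AUTO_TEST_CASE(my_test)        // registers itself with the framework
    {
        BOOST_CHECK_EQUAL(2 + 2, 4);     // non-fatal check; the test continues on failure
    }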

On 17/09/2012 10:40 AM, Dave Abrahams wrote:
I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do. (I seriously thought about volunteering for this myself, but realistically speaking, I don't have the time, and volunteering for something you can't actually do is worse than not volunteering at all.) Even if volunteers show up, I'd suggest proceeding with the plan above, subject to reversal at any time the work actually gets done.
I agree with everything you have said but the library itself is gold. One of my favourites for testing. However, I can never figure out anything from the documentation and everything I know about it is from spelunking the source code. Is there some way we can get a student interested in technical writing to rewrite the documentation? It would be good for their resume.

Thanks for bringing up this topic.

Sohail

Hi Sohail, On Monday, 17. September 2012 10:50:52 Sohail Somani wrote:
On 17/09/2012 10:40 AM, Dave Abrahams wrote:
I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do. (I seriously thought about volunteering for this myself, but realistically speaking, I don't have the time, and volunteering for something you can't actually do is worse than not volunteering at all.) Even if volunteers show up, I'd suggest proceeding with the plan above, subject to reversal at any time the work actually gets done.
I agree with everything you have said but the library itself is gold. One of my favourites for testing.
+1 on each.
However, I can never figure out anything from the documentation and everything I know about it is from spelunking the source code.
... and doing some experimenting.
Is there some way we can get a student interested in technical writing to rewrite the documentation? It would be good for their resume.
Please take a look at http://lists.boost.org/Archives/boost/2012/09/196050.php for Gennadiy's last statement on Boost.Test. I think that a major overhaul including the documentation is overdue. Allocating the needed resources seems to be the main problem.

Yours,
Jürgen

--
* Dipl.-Math. Jürgen Hunold  ! Fährstraße 1
* voice: ++49 4257 300       ! 31609 Balge/Sebbenhausen
* fax  : ++49 4257 300       ! Germany
* jhunold@gmx.eu

On 17/09/2012 11:36 AM, Jürgen Hunold wrote:
Please take a look at
http://lists.boost.org/Archives/boost/2012/09/196050.php
for Gennadiy's last statement on Boost.Test. I think that a major overhaul including the documentation is overdue. Allocating the needed resources seems to be the main problem.
I think only Gennadiy can answer why the documentation is not maintained as well as the code, as he is clearly adding new features on a regular basis. Maybe it's too tedious. I've asked around for a technical writer who might be interested.

Sohail

Jürgen Hunold <jhunold <at> gmx.eu> writes:
However, I can never figure out anything from the documentation and everything I know about it is from spelunking the source code.
... and doing some experimenting.
Try the trunk version of the docs. They are somewhat improved.
Is there some way we can get a student interested in technical writing to rewrite the documentation? It would be good for their resume.
If anyone is up to writing docs, I am not going to protest ;).
Please take a look at
http://lists.boost.org/Archives/boost/2012/09/196050.php
for Gennadiy's last statement on Boost.Test. I think that a major overhaul including the documentation is overdue. Allocating the needed resources seems to be the main problem.
This is still the case. Gennadiy

On 17/09/2012 1:06 PM, Gennadiy Rozental wrote:
Jürgen Hunold <jhunold <at> gmx.eu> writes:
However, I can never figure out anything from the documentation and everything I know about it is from spelunking the source code.
... and doing some experimenting.
Try the trunk version of the docs. They are somewhat improved.
Actually, it is a lot better. Or maybe I just know exactly what to look for now :) Documentation for those missing features would be excellent. They sound very useful.

Sohail

On Mon, Sep 17, 2012 at 6:40 PM, Dave Abrahams <dave@boostpro.com> wrote:
Thoughts?
I'd like to mention that the library has been in Boost for many years and it may have a considerable user base. I myself have written quite a few tests based on Boost.Test (to one degree or another) and I wouldn't be happy to discover all these tests broken with a new Boost release. In my opinion, Boost needs a good testing library. Whether the current Boost.Test qualifies as such or not is a valid question, but it is better than not having any. While there is no viable replacement for Boost.Test, the library should be retained, at least for the sake of backward compatibility. When a replacement appears, we can declare a deprecation period to allow users (and Boost developers) to port their tests. IMHO.

On 17-09-2012 16:40, Dave Abrahams wrote:
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release
- Its documentation, such as it is, is removed from the release after that
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
- The code is removed from Boost thereafter
Well, that's a huge amount of work, removing something that works quite well. Then comes the cost for all users of Boost.Test, who must migrate to something else. So -1 for deprecation.

-Thorsten

I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like, as long as the code is incompletely / incorrectly documented and tested, it's just someone's personal coding project that we happen to keep shipping with Boost, and not really a library for general use. This situation reflects poorly on Boost as a whole and the fact that it centers around a _testing_ library, which is concerned with robustness... well, let's just say that the irony isn't lost on me.
Just one other data point: major updates to Boost.Test have broken my stuff on more than one occasion (actually it feels like *every* time there's been an update, but that's probably an exaggeration). As a result, for the multiprecision library I decided not to use it, and wrote my own extensions to the lightweight test framework in /boost/detail/ that emulate (nearly) all the BOOST_CHECK* macros. It's not ideal, but at least I know it's stable and lightweight.
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release
OK. Though I'd note we don't have a mechanism for that... we should invent one though, as Pool really should be deprecated as well (as previously discussed somewhere around here).
- Its documentation, such as it is, is removed from the release after that
OK.
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
Which is?
- The code is removed from Boost thereafter
That could be problematic for a lot of people IMO. Leaving Boost.Test aside for a moment, as a basic procedure, how about:

* Deprecated libraries are moved to a separate section in the library index, along with a reason why they're deprecated (no maintainer, replaced by something better, in need of serious work, etc).
* Libraries can move off the deprecated list if their issues are addressed - this may entail a new maintainer (if there isn't one), and/or a mini review to ensure things are back in good order.
* Libraries that are deprecated for more than a year without attracting support are removed, as long as it's possible to do so without breaking too much in Boost (moving into /boost/deprecated/ would be another option).

John.

On Mon, Sep 17, 2012 at 9:18 AM, John Maddock <boost.regex@virgin.net> wrote:
I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like, as long as the code is incompletely / incorrectly documented and tested, it's just someone's personal coding project that we happen to keep shipping with Boost, and not really a library for general use. This situation reflects poorly on Boost as a whole and the fact that it centers around a _testing_ library, which is concerned with robustness... well, let's just say that the irony isn't lost on me.
Just one other data point: major updates to Boost.Test have broken my stuff on more than one occasion (actually it feels like *every* time there's been an update, but that's probably an exaggeration).
As a result, for the multiprecision library I decided not to use it, and wrote my own extensions to the lightweight test framework in /boost/detail/ that emulate (nearly) all the BOOST_CHECK* macros. It's not ideal, but at least I know it's stable and lightweight.
I use LightweightTest often instead of Boost.Test (also because LightweightTest is header-only and compiles faster). It'd be nice to reconcile these two libs--but that's even more work...

--Lorenzo
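(For reference, the lightweight framework mentioned here is the single header boost/detail/lightweight_test.hpp. A minimal sketch of its use, with a made-up function under test:)

    #include <boost/detail/lightweight_test.hpp>

    int add(int a, int b) { return a + b; }  // made-up code under test

    int main()
    {
        BOOST_TEST(add(2, 2) == 4);     // records a failure if the condition is false
        BOOST_TEST_EQ(add(2, 3), 5);    // reports both values on mismatch
        return boost::report_errors();  // non-zero exit status if any check failed
    }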

Lorenzo Caminiti <lorcaminiti <at> gmail.com> writes:
I use LightweightTest often instead of Boost.Test (also because LightweightTest is header-only and compiles faster). It'd be nice to reconcile these two libs--but that's even more work...
In your scenario, how much faster does LightweightTest compile vs. the library (not single-header) variant of Boost.Test? In my experience, Boost.Test has negligible (not detectable with the naked eye) overhead in the library variant.

Gennadiy
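(For readers unfamiliar with the two variants being compared: the single-header variant compiles the whole framework into the test executable, while the library variant links against a prebuilt unit test framework library. A sketch of the difference:)

    // Single-header (included) variant: nothing to link, but the whole
    // framework is recompiled into every test module.
    #define BOOST_TEST_MODULE my_module
    #include <boost/test/included/unit_test.hpp>

    // Library variant: compiles faster, but the executable must be linked
    // against the prebuilt boost_unit_test_framework library (define
    // BOOST_TEST_DYN_LINK as well when using the shared-library build):
    //
    //     #define BOOST_TEST_DYN_LINK
    //     #define BOOST_TEST_MODULE my_module
    //     #include <boost/test/unit_test.hpp>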

On 17/09/2012 12:18 PM, John Maddock wrote:
Just one other data point: major updates to Boost.Test have broken my stuff on more than one occasion (actually it feels like *every* time there's been an update, but that's probably an exaggeration).
As a coping mechanism, I wrote my own wrapper macros to avoid this, because it happened to me too. I think it only happened once, but it made me paranoid enough to add a thin layer. Probably because I was frustrated trying to figure out what changed through the documentation. Backwards compatibility would be very welcome!

Sohail
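(A minimal sketch of the kind of thin insulation layer described here; the MY_* names are hypothetical. Application tests use only the wrappers, so an upstream rename or behavior change has to be absorbed in exactly one header:)

    // my_test_wrappers.hpp - hypothetical insulation layer over Boost.Test.
    #ifndef MY_TEST_WRAPPERS_HPP
    #define MY_TEST_WRAPPERS_HPP

    #include <boost/test/unit_test.hpp>

    // If Boost.Test ever renames one of these macros, only this header changes.
    #define MY_TEST_CASE(name)  BOOST_AUTO_TEST_CASE(name)
    #define MY_CHECK(expr)      BOOST_CHECK(expr)
    #define MY_CHECK_EQ(a, b)   BOOST_CHECK_EQUAL(a, b)

    #endif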

2012/9/17 Dave Abrahams:
(I seriously thought about volunteering for this myself, but realistically speaking, I don't have the time, and volunteering for something you can't actually do is worse than not volunteering at all.) Even if volunteers show up, I'd suggest proceeding with the plan above, subject to reversal at any time the work actually gets done.
Thoughts?
I think it is high time to move the documentation to a wiki, where everyone can contribute. Look, for example, at en.cppreference.com: a huge volume of documentation has been written in an incredibly short period of time, thanks to the ease of making changes (and with my help as well). I believe that good, high-quality, regularly updated documentation won't get a chance to be created any other way. For example, about six months ago I sent a patch consisting of 5-6 lines for the Boost.Container documentation. It took 3-4 months to apply this patch. Such delays are unacceptable, and discourage any desire to make commits and/or send patches. This is my five cents.

--
Regards, niXman
___________________________________________________
Dual-target (32 & 64 bit) MinGW compilers for 32 and 64 bit Windows:
http://sourceforge.net/projects/mingwbuilds/

On 17.09.2012 07:40, Dave Abrahams wrote:
Hi All,
I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess. It contains loads of features that are exercised in the examples/ directory but neither included in any of the tests nor documented. There are facilities for command-line argument parsing! There are "decorators" that turn on/off features for test cases. There is support for mock objects! These are cool and sometimes necessary features, but who knew?
I agree that this is a massive problem - I've been using Boost.Test for quite a while and the team I work in uses it for testing our project's code, but I didn't know that half of these features were available.
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release - Its documentation, such as it is, is removed from the release after that - Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
As has been mentioned, is there any alternative in sight? And if there is, what is the migration path, especially for people who might have a few thousand unit tests?

There's also the question of infrastructure. Boost.Test has at least some support from CI systems (for example, we're using Jenkins, and thanks to an existing plugin it can parse the XML output generated by Boost.Test). I would argue that a replacement probably needs either to support the same output format or at least to offer an easy way to support this sort of post-processing.
- The code is removed from Boost thereafter
Unless there is a migration path that would make it easy to get off Boost.Test and onto another framework that would offer similar benefits (and integration with CI systems, etc.), I'm not sure that this is a good idea. Putting it into a deprecated section with a clear line drawn in the sand saying "no further development, if it breaks you're on your own" is probably a better approach.

Timo H. Geusch <timo <at> unix-consult.com> writes:
I agree that this is a massive problem - I've been using Boost.Test for quite a while and the team I work in uses it for testing our project's code, but I didn't know that half of these features were available.
What exactly is the massive problem? There are a number of new features in trunk (not in a release). Do you miss any features, or do your teammates use them and you do not see documentation for them?

Gennadiy

On 17.09.2012 10:34, Gennadiy Rozental wrote:
Timo H. Geusch <timo <at> unix-consult.com> writes:
I agree that this is a massive problem - I've been using Boost.Test for quite a while and the team I work in uses it for testing our project's code, but I didn't know that half of these features were available.
What exactly is the massive problem? There are a number of new features in trunk (not in a release). Do you miss any features, or do your teammates use them and you do not see documentation for them?
It's more that the discoverability of the new features isn't that great. What we figured out about the test framework mainly came from experimentation and ample use of Google. I should probably have a look at the docs in trunk; it sounds like they are better, and it sounds like there are more features that might come in handy in the future. Don't get me wrong, I like Boost.Test and it works well for us, but I don't think it's that easy to use its full power unless you happen to work with someone who knows the framework really well.

On 17/09/12 17:49, Timo H. Geusch wrote:
On 17.09.2012 07:40, Dave Abrahams wrote: [snip]
- Boost.Test is officially deprecated in the next release
- Its documentation, such as it is, is removed from the release after that
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
As has been mentioned, is there any alternative in sight? And if there is, what is the migration path, especially for people who might have a few thousand unit tests?
There's also the question of infrastructure. Boost.Test has at least some support from CI systems (for example, we're using Jenkins, and thanks to an existing plugin it can parse the XML output generated by Boost.Test). I would argue that a replacement probably needs either to support the same output format or at least to offer an easy way to support this sort of post-processing.
We are the same - we've actually written some trac.bitten plugins to elegantly handle Boost.Test output, which give us quite impressive reports with full source linking for errors and test cases. Additionally, we have taken the time to add support to our scons framework so we can have integrated handling of Boost tests when building, giving pretty-printed output on the console, green bars, timing info, etc. These are all in the process of being open-sourced but rely on some small patches to Boost.Test that we need to push back first.
- The code is removed from Boost thereafter
Unless there is a migration path that would make it easy to get off Boost.Test and onto another framework that would offer similar benefits (and integration with CI systems, etc.), I'm not sure that this is a good idea. Putting it into a deprecated section with a clear line drawn in the sand saying "no further development, if it breaks you're on your own" is probably a better approach.
+1

Dave Abrahams <dave <at> boostpro.com> writes:
Hi All,
I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess.
It contains loads of features that are exercised in the examples/ directory but neither included in any of the tests nor documented.
There are not that many of these, in fact. They can be split into two kinds:

* Implemented a long time ago and never documented. There are a couple of these, related to interaction-based testing and mock object support.
* Brand new features. There is a series of new features implemented about a year ago. Almost ready for prime time, but not 100%. Specifically lacking documentation.
There are facilities for command-line argument parsing!
Yes, indeed, and while I like these much better than the official Boost one, I do not insist on it being a public interface, thus it does not need to be documented.
There are "decorators" that turn on/off features for test cases.
This is a new feature.
There is support for mock objects! These are cool and sometimes necessary features, but who knew?
The third tutorial page has a glaring typo in the code examples: "BOOST_AUTO_EST_CASE". There's no reference manual at all.
1. There is significantly improved documentation in the trunk. I never got around to releasing it, just to avoid rocking the boat (I hoped to do one big release with all the new improvements).

2. There is no *formal* reference documentation, but I am not convinced there is a huge need for one. In the majority of cases Boost.Test, unlike other Boost libraries, is not extended or accessed through a class interface. There are a few interfaces which are indeed used, and they are documented.
There are nearly-identical files in the examples/ directory called "est_example1.cpp" and "test_example1.cpp" (Did the "t" key on someone's keyboard break?) I could go on, but where would I stop?
These are two completely different things: "est" is "exception safety test". Should have named it differently, I guess ;)
I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like,
I am not sure what you mean. There are extensive self-test modules. [...]
I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do.
I am not quite convinced that anything is really in such bad shape that it requires rescuing. That said, if anyone is interested in helping to bring up the latest release, I am happy to share the load.

Gennadiy

Sorry I went quiet for a while; I got sick. I figure I should respond directly to this message, even if I don't respond to any others... on Mon Sep 17 2012, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Hi All,
I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess.
It contains loads of features that are exercised in the examples/ directory but neither included in any of the tests nor documented.
There are not that many of these, in fact. They can be split into two kinds:
* Implemented a long time ago and never documented. There are a couple of these, related to interaction-based testing and mock object support.
* Brand new features. There is a series of new features implemented about a year ago. Almost ready for prime time, but not 100%. Specifically lacking documentation.
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question "How do I use this library?" I don't understand how other people have arrived at answers for themselves.
There are facilities for command-line argument parsing!
Yes, indeed, and while I like these much better than the official Boost one, I do not insist on it being a public interface, thus it does not need to be documented.
That's fine, but in the presence of so many other problems and of a large suite of examples/ directed at this feature, it contributes to the sense of uncertainty about what this library *is*. BTW, also, it's completely unclear how CLA processing relates to the mission of the library.
There are "decorators" that turn on/off features for test cases.
This is a new feature.
There is support for mock objects! These are cool and sometimes necessary features, but who knew?
The third tutorial page has a glaring typo in the code examples: "BOOST_AUTO_EST_CASE". There's no reference manual at all.
1. There is significantly improved documentation in the trunk. I never got around to releasing it, just to avoid rocking the boat (I hoped to do one big release with all the new improvements).
Does that "improved documentation" apply to what's on the release branch, or only to what's available in trunk?
2. There is no *formal* reference documentation, but I am not convinced there is a huge need for one.
There most certainly is, *especially* in the presence of so much other uncertainty.
In the majority of cases Boost.Test, unlike other Boost libraries, is not extended or accessed through a class interface.
I don't see how that's relevant.
There are a few interfaces which are indeed used, and they are documented.
??? I can't even begin to understand how you can say that. Everything one does with a library, one does through an interface. Every interface needs to be documented so that users know how to use it correctly. Otherwise, it's just your private code.
There are nearly-identical files in the examples/ directory called "est_example1.cpp" and "test_example1.cpp" (Did the "t" key on someone's keyboard break?) I could go on, but where would I stop?
These are two completely different things: "est" is "exception safety test". Should have named it differently, I guess ;)
I guess. Is BOOST_AUTO_EST_CASE another example of "exception safety test," or is that a genuine typo?
I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like,
I am not sure what you mean. There are extensive self-test modules.
I mean *redundancy* between the tests and the documentation. The tests should check that the library does what the documentation says it does.
From the tests alone I can't even draw a conclusion about what you intend as a stable, supported, public interface.
I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do.
I am not quite convinced that anything is really in such bad shape that it requires rescuing. That said, if anyone is interested in helping to bring up the latest release, I am happy to share the load.
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.

--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost

Dave Abrahams <dave <at> boostpro.com> writes:
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question
What do you need to be able to teach/learn?
How do I use this library?
While I never claimed that the docs are excellent, they do answer this question at length. Did you read the user guide sections (test organization, for example)?
There are facilities for command-line argument parsing!
Yes, indeed, and while I like these much better than the official Boost one, I do not insist on it being a public interface, thus it does not need to be documented.
That's fine, but in the presence of so many other problems and of a large suite of examples/ directed at this feature, it contributes to the sense of uncertainty about what this library *is*.
The examples are left over from the time when I thought I might consider submitting this component as an alternative to program_options. These days I use them sometimes when I need to develop some improvements in this area. Would you rather have them removed?
BTW, also, it's completely unclear how CLA processing relates to the mission of the library.
Boost.Test needs to parse CLAs. No more, no less. It has nothing to do with the mission of the library.
Does that "improved documentation" apply to what's on the release branch, or only to what's available in trunk?
Improved documentation (not sure why you put it in quotes) applies to what is already released. There is no documentation yet for newly developed features.

2. There is no *formal* reference documentation, but I am not convinced there is a huge need for one.
There most certainly is, *especially* in the presence of so much other uncertainty.
What uncertainty?
In the majority of cases Boost.Test, unlike other Boost libraries, is not extended or accessed through a class interface.
I don't see how that's relevant.
There are a few interfaces which are indeed used, and they are documented.
??? I can't even begin to understand how you can say that. Everything one does with a library, one does through an interface. Every interface needs to be documented so that users know how to use it correctly. Otherwise, it's just your private code.
This comment just shows that you did not try to read the docs (I know they are not perfect, but the answer to the above is clear even from skimming them).

The public interface of the unit test framework is predominantly macro-based. This includes test tree management and test tools (well, the new interfaces deviate from this, but they are beside the point). And almost every public interface (with the few exceptions which I already pointed out) IS documented in detail. There are a few non-macro public interfaces, which are documented as well. There are a couple of interfaces one could use to extend library functionality, which we can add to the documentation, but these are:

* well above basic usage
* very rarely needed

So should we document these? Probably yes, but their absence does not affect the majority of Boost.Test users and clearly does not preclude anyone from using the library.

So in summary, "every public interface needs to be documented" is true for the most part in Boost.Test's case. Or can you show any glaring gap?
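(To illustrate what the macro-based "test tree management" interface looks like in practice; the suite and case names are made up:)

    // Suites group test cases into a tree, entirely through macros with
    // automatic registration - no class interface involved.
    #define BOOST_TEST_MODULE tree_example
    #include <boost/test/included/unit_test.hpp>
    #include <string>

    BOOST_AUTO_TEST_SUITE(string_suite)   // opens a suite node

    BOOST_AUTO_TEST_CASE(length)          // a leaf test case under string_suite
    {
        BOOST_CHECK_EQUAL(std::string("abc").size(), 3u);
    }

    BOOST_AUTO_TEST_SUITE_END()           // closes string_suite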
I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like,
I am not sure what you mean. There are extensive self-test modules.
I mean *redundancy* between the tests and the documentation. The tests should check that the library does what the documentation says it does.
They do. The self-tests are comprehensive and cover all public interfaces described in the documentation (unless I missed something). Do you have examples of clear gaps?
From the tests alone I can't even draw a conclusion about what you intend as a stable, supported, public interface.
Well, it might be difficult to read the unit test modules (they are not really developed as a replacement for documentation), but they all test some part of the public interface (for example, test_tools_test tests all public Test Tools).
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.
I'll be happy to help you prepare for this class (BTW, there are a couple of presentations I gave on Boost.Test which can be used as a basis for a class curriculum). For now, all I see is just some misunderstanding of terms and of what constitutes the public interface of the library.

Regards,
Gennadiy

on Fri Oct 05 2012, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question
What do you need to be able to teach/learn?
How do I use this library?
While I never claimed that the docs are excellent, they do answer this question at length. Did you read the user guide sections (test organization, for example)?
I did read through the user guide.
There are facilities for command-line argument parsing!
Yes, indeed, and while I like these much better than the official Boost one, I do not insist on it being a public interface, thus it does not need to be documented.
That's fine, but in the presence of so many other problems and of a large suite of examples/ directed at this feature, it contributes to the sense of uncertainty about what this library *is*.
The examples are left over from the time when I thought I might consider submitting this component as an alternative to program_options. These days I use them sometimes when I need to develop some improvements in this area. Would you rather have them removed?
Not necessarily, although it would be good to explain in the docs what they're doing there.
BTW, also, it's completely unclear how CLA processing relates to the mission of the library.
Boost.Test needs to parse CLAs. No more, no less. It has nothing to do with the mission of the library.
Does that "improved documentation" apply to what's on the release branch, or only to what's available in trunk?
Improved documentation (not sure why you put it in quotes)
Just so you know I mean exactly what you were referring to. No more, no less.
applies to what is already released. There is no documentation yet for newly developed features.
OK, this makes no sense at all, IIUC. You have improved documentation in the trunk that applies to what's in the release branch, and no documentation for what's in the release branch? I can't browse the trunk docs on the web, AFAIK... oh, you've checked in HTML...
2. There is no *formal* reference documentation, but I am not convinced there is a huge need for one.
There most certainly is, *especially* in the presence of so much other uncertainty.
What uncertainty?
For example: how to use the library, what's actually in it, what's part of the official interface, whether it's being maintained... none of these things feel to me like they have solid answers. The uncertainty is my personal experience, but I sense from replies here that I am not alone.
In the majority of cases Boost.Test, unlike other Boost libraries, is not extended or accessed through a class interface.
I don't see how that's relevant.
There are a few interfaces which are indeed used, and they are documented.
??? I can't even begin to understand how you can say that. Everything one does with a library, one does through an interface. Every interface needs to be documented so that users know how to use it correctly. Otherwise, it's just your private code.
This comment just shows that you did not try to read the docs
Uh... No, it indicates that I misunderstood your statement to mean, "most of how you interact with Boost.Test is not through interfaces and therefore doesn't need to be documented." I now understand that you weren't saying that, sorry.
(I know they are not perfect, but the answer to the above is clear even from skimming them)
The public interface of the unit test framework is predominantly macro-based. This includes test tree management and test tools (well, the new interfaces deviate from this, but they are beside the point). And almost every public interface (with the few exceptions which I already pointed out) IS documented in detail. There are a few non-macro public interfaces, which are documented as well. There are a couple of interfaces one could use to extend library functionality, which we can add to the documentation, but these are:

* well above basic usage
* very rarely needed
So should we document these? Probably yes, but their absence does not affect the majority of Boost.Test users and clearly does not preclude anyone from using the library.
So in summary, "every public interface needs to be documented" is true for the most part in Boost.Test's case. Or can you show any glaring gap?
Let me go back, now that we're having this discussion, and try again to read the docs. I'll try to point at specific things that leave me bewildered. At this point in time, I only really remember the experience of bewilderment but not the details. Also, I'll look at the docs on the trunk, but please, move those docs to release immediately.
I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like,
I am not sure what you mean. There are extensive self-test modules.
I mean *redundancy* between the tests and the documentation. The tests should check that the library does what the documentation says it does.
They do. The self-tests are comprehensive and cover all public interfaces described in the documentation (unless I missed something). Do you have examples of clear gaps?
Again, at this point the details have slipped from my brain. I'll try to get back to you.

--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost

on Fri Oct 05 2012, Dave Abrahams <dave-AT-boostpro.com> wrote:
OK, this makes no sense at all, IIUC. You have improved documentation in the trunk that applies to what's in the release branch, and no documentation for what's in the release branch?
Oops; I meant "no documentation for what's in the trunk?"
I can't browse the trunk docs on the web, AFAIK... oh, you've checked in HTML...
--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost

Hi,

I'm trying to generate the Boost.Test trunk docs and I'm getting:

cd libs/test/doc
pc6:doc viboes$ bjam
warning: mismatched versions of Boost.Build engine and core
warning: Boost.Build engine (bjam) is 2011.04.00
warning: Boost.Build core (at /Users/viboes/boost/trunk/tools/build/v2) is 2011.12-svn
/Users/viboes/boost/trunk/tools/build/v2/util/path.jam:516: in make-UNIX from module path
error: Empty path passed to 'make-UNIX'
/Users/viboes/boost/trunk/tools/build/v2/util/path.jam:41: in path.make from module path
/Users/viboes/boost/trunk/libs/test/doc/utf-boostbook.jam:20: in load from module utf-boostbook
/Users/viboes/boost/trunk/tools/build/v2/kernel/modules.jam:289: in import from module modules
/Users/viboes/boost/trunk/tools/build/v2/build/toolset.jam:39: in toolset.using from module toolset
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:995: in using from module project-rules
Jamfile.v2:12: in modules.load from module Jamfile</Users/viboes/boost/trunk/libs/test/doc>
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:311: in load-jamfile from module project
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:64: in load from module project
/Users/viboes/boost/trunk/tools/build/v2/build/project.jam:145: in project.find from module project
/Users/viboes/boost/trunk/tools/build/v2/build-system.jam:552: in load from module build-system
/Users/viboes/boost/trunk/tools/build/v2/kernel/modules.jam:289: in import from module modules
/Users/viboes/boost/trunk/tools/build/v2/kernel/bootstrap.jam:139: in boost-build from module
/Users/viboes/boost/trunk/boost-build.jam:17: in module scope from module

Do others share the same error? Do we need a specific version of bjam?

Best,
Vicente

On Thursday 04 October 2012 13:20:00 Dave Abrahams wrote:
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question
How do I use this library?
I don't understand how other people have arrived at answers for themselves.
[snip]
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.
Although I'm not teaching students, I can understand the difficulties you're talking about. However, you have to admit by the answers in this thread that many people managed to learn the library and use it extensively. Boost is not exclusively about teachability and learnability; I see practical usefulness as a key feature of Boost libraries (and I'm not discarding teachability and learnability by that) and Boost.Test has been useful for years. You can't just throw it away with no fallback.

As for me, I'm not using any advanced features of the library. I gathered the knowledge of basic usage from the docs (mostly the tutorial, I guess) and the source code. I can't say this was easy, but it was doable and enough for my needs. I know this is not the kind of learning one can recommend to students, so clearly the library could be improved in this regard.

on Fri Oct 05 2012, Andrey Semashev <andrey.semashev-AT-gmail.com> wrote:
On Thursday 04 October 2012 13:20:00 Dave Abrahams wrote:
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question
How do I use this library?
I don't understand how other people have arrived at answers for themselves.
[snip]
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.
Although I'm not teaching students, I can understand the difficulties you're talking about. However, you have to admit by the answers in this thread that many people managed to learn the library and use it extensively.
Yes. How did they do it?
Boost is not exclusively about teachability and learnability; I see practical usefulness as a key feature of Boost libraries (and I'm not discarding teachability and learnability by that) and Boost.Test has been useful for years. You can't just throw it away with no fallback.
I don't desire to throw it away. My straw man proposal was just that: a straw man.

--
Dave Abrahams
BoostPro Computing          Software Development Training
http://www.boostpro.com     Clang/LLVM/EDG Compilers  C++  Boost

Dave Abrahams <dave@boostpro.com> writes:
on Fri Oct 05 2012, Andrey Semashev <andrey.semashev-AT-gmail.com> wrote:
On Thursday 04 October 2012 13:20:00 Dave Abrahams wrote:
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question
How do I use this library?
I don't understand how other people have arrived at answers for themselves.
[snip]
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.
Although I'm not teaching students, I can understand the difficulties you're talking about. However, you have to admit by the answers in this thread that many people managed to learn the library and use it extensively.
Yes. How did they do it?
I'm a bit puzzled why you're having problems. The info is all there in the docs. My theory is that you've gotten lost amongst the bloat; the docs certainly don't cut to the chase, and go on about execution monitors, usage variants, test runners and manual test registration when all you want to know is how to write a test case.

So, cutting to the chase, here is how you write a test case: http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/utf/user-guide/test-....

So simple. Here is a more advanced test case with a test fixture: http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/utf/user-guide/fixtu...

And here are the tests you use in your test cases: http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/utf/testing-tools/re...

IMHO, the docs should start with these, and the rest should be removed or moved later. Almost all the docs should stick to the automatic registration versions, as describing the manual versions first gives the misleading impression that you might want to use them. You don't. Or at least I never have in all the years I've been Boost.Testing.

Also, a doc bug report: the Unary Function link navigates to the wrong page on this page: http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/utf/user-guide/test-...

Alex
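(Condensing Alex's three links into one compilable sketch; the names and checks are made up:)

    #define BOOST_TEST_MODULE example
    #include <boost/test/included/unit_test.hpp>

    // 1. A plain auto-registered test case.
    BOOST_AUTO_TEST_CASE(addition_works)
    {
        BOOST_CHECK_EQUAL(2 + 2, 4);  // non-fatal: the test continues on failure
        BOOST_REQUIRE(1 + 1 == 2);    // fatal: aborts this test case on failure
    }

    // 2. A fixture: setup in the constructor, teardown in the destructor.
    struct my_fixture
    {
        my_fixture() : value(42) {}   // runs before the test body
        ~my_fixture() {}              // runs after the test body
        int value;
    };

    BOOST_FIXTURE_TEST_CASE(fixture_test, my_fixture)
    {
        BOOST_CHECK_EQUAL(value, 42); // fixture members are directly in scope
    }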

On 10/5/2012 5:15 PM, Dave Abrahams wrote:
on Fri Oct 05 2012, Andrey Semashev <andrey.semashev-AT-gmail.com> wrote:
On Thursday 04 October 2012 13:20:00 Dave Abrahams wrote:
The question remains: "how do I learn/teach this library?" If I can't answer those questions, I also can't answer the question
How do I use this library?
I don't understand how other people have arrived at answers for themselves.
[snip]
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.
Although I'm not teaching students, I can understand the difficulties you're talking about. However, you have to admit by the answers in this thread that many people managed to learn the library and use it extensively.
Yes. How did they do it?
The following link did it for us: http://legalizeadulthood.wordpress.com/2009/07/04/c-unit-tests-with-boost-te... Then the official Boost docs provided the necessary reference info, although it seems I have to go hunting to find the "BOOST_REQUIRE_xxx" reference table every time.

Jeff

[Please do not mail me a copy of your followup] Sorry to come late to this party, I usually do not read the developers group on gmane because.... well.... I'm not a boost developer. However, I am a heavy proponent of Boost.Test and a heavy user of it. boost@lists.boost.org spake the secret code <m24nmai27z.fsf@pluto.luannocracy.com> thusly:
Look, I teach classes on Boost. If Boost.Test is not learnable and teachable, I have to tell my students to stay away from it. That's embarrassing for me, and bad for Boost.
I wrote the 5-part tutorial on using Boost.Test for TDD:

<http://legalizeadulthood.wordpress.com/2009/07/04/c-unit-tests-with-boost-test-part-1/>
<http://legalizeadulthood.wordpress.com/2009/07/05/c-unit-tests-with-boost-test-part-2/>
<http://legalizeadulthood.wordpress.com/2009/07/05/c-unit-tests-with-boost-test-part-3/>
<http://legalizeadulthood.wordpress.com/2009/07/05/c-unit-tests-with-boost-test-part-4/>
<http://legalizeadulthood.wordpress.com/2009/07/05/c-unit-tests-with-boost-test-part-5/>
From the URLs you can see that I wrote those tutorials in the summer of 2009, and they have been among the most popular articles on my blog. I routinely share them in the C++ newsgroup and in the Boost users' gmane newsgroup/list. I wrote them because I thought the documentation made it very difficult to get what you needed from it.
At some point after posting this on newsgroups, Gennadiy said he was going to include links to it in the Boost.Test documentation. As far as I know, that never happened.

I would be happy to start working on improving the documentation; I always found it to be the weakest part of the library. It works for Gennadiy, but IMO he's too close to the library to see how the documentation doesn't work for newcomers. I've written large technical documents myself (500+ pages on Microsoft's Direct3D, which you can read from the link in my signature) and it is difficult to step away from your own expertise and present material in a manner that makes sense to newcomers.

I generally recommend people use google mock or some other mocking library in conjunction with Boost.Test. While browsing around in the code I did come across source files mentioning "mocks", but it was so unlike every other approach to mock objects I've seen that I concluded it was only for some sort of internal use in Boost.Test itself. Now I see that it was intended for general use, but even from reading the example source file, I can't say I'd recommend it over google mock or turtle mock from what I understand at this point.

Reading this thread, I get the impression that Gennadiy is a little defensive of the library. I guess who wouldn't be, if the thread opened up with a statement to the effect of "let's dump this stuff because it's junk". It is very much the product of one person, and maybe that's why it is suffering from so much impedance mismatch with other people's brains.

Even though I like Boost.Test, I have gotten mixed signals about contributions towards improving it. I asked in the users' group if Gennadiy would accept a patch that would allow me to write my own assertions that supplied file/line information at the point the assertion was invoked instead of at the point the assertion was implemented. I asked on 23 Oct 2012 and didn't get an answer. I asked again a week later and didn't get an answer then, either. This left me without the feeling of "patches are welcome" that boost usually gives me.

--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
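(For context, a generic sketch of the technique the patch request is about: an assertion macro captures __FILE__ and __LINE__ at its point of invocation and forwards them to a helper, so failures are reported at the caller's location. All names here are hypothetical; this is not the actual patch that was proposed.)

    #include <iostream>

    // The helper receives the caller's location instead of reporting its own.
    inline bool check_in_range_impl(double v, double lo, double hi,
                                    const char* expr,
                                    const char* file, int line)
    {
        if (v < lo || v > hi)
        {
            std::cerr << file << '(' << line << "): error: " << expr
                      << " = " << v << " is outside [" << lo << ", " << hi << "]\n";
            return false;
        }
        return true;
    }

    // The macro expands at the call site, so __FILE__/__LINE__ point there.
    #define CHECK_IN_RANGE(expr, lo, hi) \
        check_in_range_impl((expr), (lo), (hi), #expr, __FILE__, __LINE__)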

[Please do not mail me a copy of your followup] boost@lists.boost.org spake the secret code <kkhg4t$4et$1@ger.gmane.org> thusly:
I would be happy to start working on improving the documentation [...]
I updated a checked out subversion repository to tags/release/boost_1_53_0 and looked at the Boost.Test documentation. I'm confused. The HTML files all claim they were generated by docbook, but there appear to be no docbook source files in the tree. So where, exactly, is the source to the documentation? I found the HTML files in libs/test/doc/html.

--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

I updated a checked out subversion repository to tags/release/boost_1_53_0 and looked at the Boost.Test documentation.
I'm confused.
The HTML files all claim they were generated by docbook, but there appear to be no docbook source files in the tree.
So where, exactly, is the source to the documentation?
I found the HTML files in libs/test/doc/html.
I guess the sources are not merged to the release branch (which is what tags in tags/release are tags of). In the trunk (which serves as the development branch), the sources can be found in libs/test/doc/src. Regards, Nate

I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like, as long as the code is incompletely / incorrectly documented and tested, it's just someone's personal coding project that we happen to keep shipping with Boost, and not really a library for general use. This situation reflects poorly on Boost as a whole and the fact that it centers around a _testing_ library, which is concerned with robustness... well, let's just say that the irony isn't lost on me.
While we're on the subject, I confess I thought about writing something in this area, but hey, too much already! :-)

Here's my wish list though:

* Lightweight, header only if possible.
* Clear separation between components (execution monitor from unit test call framework from testing macros). Ideally each would be a separate mini library if that's possible, with the executable linking against just what it needs and no more.
* Thread safe from the start, testing from multiple threads should be a no-brainer.
* Easy debugging: if I step into a test case in the debugger the first thing I should see is *my code*. As it is I have to step in and out of dozens of Boost.Test functions before I get to my code. This one really annoys me.
* Rapid execution of each test case, a BOOST_CHECK(no-op) should be as near to a no-op as possible. I was unable to use Boost.Test for a lot of the Math lib tests for this reason - looping over thousands of tests was simply impractical from a time point of view (maybe this has improved since then, I haven't checked).
* Exemplary error messages when things fail - Boost.Test has improved in this area, but IMO not enough.
* An easy way to tell if the last test has failed, and/or an easy way to print auxiliary information when the last test has failed. This is primarily for testing in loops, when iterating over tabulated test data (see the sketch following this message).
* Relatively simple C++ code, with no advanced/poorly supported compiler features used. This is one library that should be usable anywhere and everywhere.
* Ultra stable code. Exempting bug fixes, I'd like to see a testing library almost never change, or only change after very careful consideration, for example if a new C++ language feature requires special testing support.

And what I don't want:

* Breaking changes: Boost authors have absolutely no time to track breaking changes in their dependencies; since a successful testing library would be used universally by all of Boost, this is particularly important for this library.
* Feature creep. Keep it small, focused, quick to compile. If new features are added they should be separate (i.e. only pay for what you use).

John.
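(A sketch of the "iterating over tabulated test data" pattern behind that wish-list item, written against the lightweight header mentioned earlier in the thread; the table contents and tolerance are made up:)

    #include <boost/detail/lightweight_test.hpp>
    #include <cmath>
    #include <cstddef>
    #include <iostream>

    int main()
    {
        struct row { double input; double expected; };
        const row table[] = {
            { 0.0, 1.0 },
            { 1.0, 2.718281828459045 },
            { 2.0, 7.389056098930650 },
        };

        for (std::size_t i = 0; i < sizeof(table) / sizeof(table[0]); ++i)
        {
            const double got = std::exp(table[i].input);
            const bool ok = std::fabs(got - table[i].expected) < 1e-12;
            BOOST_TEST(ok);                       // record pass/fail
            if (!ok)                              // hand-rolled auxiliary context
                std::cerr << "  at table row " << i
                          << " (input " << table[i].input << ")\n";
        }
        return boost::report_errors();
    }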

John Maddock <boost.regex <at> virgin.net> writes:
Here's my wish list though:
* Lightweight, header only if possible.
IMO, unless you are after something very trivial, there is no reason for that. Having a prebuilt library does not make it heavyweight. More important is the user experience.
* Clear separation between components (execution monitor from unit test call framework from testing macros). Ideally each would be a separate mini
Isn't it already the case?
library if that's possible, with the executable linking against just what it needs and no more.
Isn't it already the case?
* Thread safe from the start, testing from multiple threads should be a no-brainer.
With C++11 threading support it could easily be done in Boost.Test. Previously I did not want to introduce a Boost.Thread dependency. This is in my plan.
* Easy debugging: if I step into a test case in the debugger the first thing I should see is *my code*. As it is I have to step in and out of dozens of Boost.Test functions before I get to my code. This one really annoys me.
I am not sure I follow. Set a breakpoint in your test case and it will stop right there. Do you set a breakpoint on the first line of main? Or do you mean something completely different?
* Rapid execution of each test case, a BOOST_CHECK(no-op) should be as near to a no-op as possible. I was unable to use Boost.Test for a lot of the Math lib tests for this reason - looping over thousands of tests was simply impractical from a time point of view (maybe this has improved since then, I haven't checked).
BOOST_CHECK is a no-op (well, to some degree); there is probably some overhead. There was indeed an improvement a couple of years ago, and now we only do the minimal amount of work necessary to pass context information around.
* Exemplary error messages when things fail - Boost.Test has improved in this area, but IMO not enough.
Specifically?
* An easy way to tell if the last test has failed, and/or an easy way to print auxiliary information when the last test has failed. This is primarily for testing in loops, when iterating over tabulated test data.
This is addressed with the trunk improvements. There are several tools introduced to help with context specification.
* Relatively simple C++ code, with no advanced/poorly supported compiler features used. This is one library that should be usable anywhere and everywhere.
I do not believe Boost.Test uses any advanced C++ features in its core. I am looking to add a new component which might use some, but it is always going to be an extension.
* Ultra stable code. Exempting bug fixes, I'd like to see a testing library almost never change, or only change after very careful consideration, for example if a new C++ language feature requires special testing support.
The Test library, like any other library, has users, bugs, feature requests, etc. It has a life of its own. It does indeed need to be more carefully maintained in comparison with other libs, but:

1. Proper component dependency helps. Your library needs to be built against a released version of the Test library (even a trunk one). This way the Test library can do its own development in parallel.
2. The Test library might not need to be released that frequently (that's why I am actually holding off on releasing my changes, because there is still a chance something somewhere will break).
3. There should be a (short) period of time when the testing library in the release branch is updated. If there are a few regressions/conflicts, these can be fixed. Otherwise the change is reverted.
And what I don't want:
* Breaking changes: Boost authors have absolutely no time to track breaking changes in their dependencies; since a successful testing library would be used universally by all of Boost, this is particularly important for this library.
Again: proper component dependency. Depending on the trunk version of your dependencies is the root cause of the issue here. One library should depend on a specific released version of another library: A.deps = B:1.2.3
* No feature creep. Keep it small, focused, quick to compile. If new features are added they should be separate (i.e. only pay for what you use).
This is mostly the case. Do you have any examples to the contrary? Gennadiy

* Clear separation between components (execution monitor from unit test call framework from testing macros). Ideally each would be a separate mini
Isn't it already the case?
There's duplication of source between the different component libraries (execution monitor, test monitor, unit test). Plus the headers seem to pull in a whole lot of stuff I never use ;-)
library if that's possible, with the executable linking against just what it needs and no more.
Isn't it already the case?
My gut feeling is that recent releases have got slower to compile and #include.
* Easy debugging: if I step into a test case in the debugger the first thing I should see is *my code*. As it is I have to step in and out of dozens of Boost.Test functions before I get to my code. This one really annoys me.
I am not sure I follow. Set a breakpoint in your test case and it will stop right there. Do you set a breakpoint on the first line of main? Or do you mean something completely different?
No, I mean that if I break on a test case and then hit "step into" in the debugger I have to step through your code before I get to mine. For example, if I break on a BOOST_CHECK_CLOSE_FRACTION and then step, I hit:

    scrap.exe!boost::unit_test::basic_cstring<char const >::basic_cstring<char const >() Line 163 C++

So I return from that, and step again and hit:

    scrap.exe!boost::unit_test::basic_cstring<char const >::basic_cstring<char const >(const char * s, unsigned int arg_size) Line 192 C++

Return and step and hit:

    scrap.exe!boost::unit_test::unit_test_log_t::set_checkpoint(boost::unit_test::basic_cstring<char const > file, unsigned int line_num, boost::unit_test::basic_cstring<char const > msg) Line 251 C++

Return and step and hit:

    scrap.exe!boost::unit_test::basic_cstring<char const >::basic_cstring<char const >(const char * s, unsigned int arg_size) Line 192 C++

Return and step and hit:

    scrap.exe!boost::unit_test::lazy_ostream::instance() Line 39 C++

Return and step and hit:

    scrap.exe!boost::unit_test::operator<<<char const [1]>(const boost::unit_test::lazy_ostream & prev, const char [1]& v) Line 83 C++

Return and step and hit:

    scrap.exe!boost::test_tools::tt_detail::check_frwd<boost::test_tools::check_is_close_t,double,double,double>(boost::test_tools::check_is_close_t P, const boost::unit_test::lazy_ostream & assertion_descr, boost::unit_test::basic_cstring<char const > file_name, unsigned int line_num, boost::test_tools::tt_detail::tool_level tl, boost::test_tools::tt_detail::check_type ct, const double & arg0, const char * arg0_descr, const double & arg1, const char * arg1_descr, const double & arg2, const char * arg2_descr) Line 293 C++

Return and step - and finally hit my code! So I have to step through 7 of your functions before I can finally debug a failing test case.
* Rapid execution of each test case, a BOOST_CHECK(no-op) should be as near to a no-op as possible. I was unable to use Boost.Test for a lot of the Math lib tests for this reason - looping over thousands of tests was simply impractical from a time point of view (maybe this has improved since then, I haven't checked).
BOOST_CHECK is a no-op (well, to some degree); there is probably some overhead. There was indeed an improvement a couple of years ago, and now we only do the minimal amount of work necessary to pass context information around.
OK, I rechecked this, and BOOST_CHECK_CLOSE_FRACTION appears to have next to no overhead now - excellent!
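The kind of measurement behind such a claim might look like this (a sketch only: the iteration count, the checked expression, and the <chrono> timing are illustrative choices, not anything from the thread):

    #define BOOST_TEST_MODULE check_overhead
    #include <boost/test/included/unit_test.hpp>
    #include <chrono>
    #include <iostream>

    BOOST_AUTO_TEST_CASE( passing_check_overhead )
    {
        namespace chr = std::chrono;

        double x = 1.0;
        chr::steady_clock::time_point t0 = chr::steady_clock::now();
        for ( int i = 0; i < 1000000; ++i )
            BOOST_CHECK_CLOSE_FRACTION( x, 1.0, 1e-12 );   // always passes
        chr::steady_clock::time_point t1 = chr::steady_clock::now();

        std::cout << "1e6 passing checks: "
                  << chr::duration_cast<chr::milliseconds>( t1 - t0 ).count()
                  << " ms\n";
    }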
* Exemplary error messages when things fail - Boost.Test has improved in this area, but IMO not enough.
Specifically?
Testing:

    double a = 1;
    double b = 2;
    BOOST_CHECK_CLOSE_FRACTION(a, b, 0.0);

Yields:

    m:/data/boost/trunk/ide/libraries/scrap/scrap.cpp(41): error: in "test_main_caller( argc, argv )": difference{0} between a{1} and b{2} exceeds 0

Leaving aside the obvious bug (the difference is not zero!!), I would have printed this as something like:

    m:/data/boost/trunk/ide/libraries/scrap/scrap.cpp(41): error: in "test_main_caller( argc, argv )": difference between a and b exceeds specified tolerance with:
        a          = 1.0
        b          = 2.0
        tolerance  = 0.0
        difference = 1.0

Which I'm sure will get mangled in email, but the idea is that the values are pretty printed so they all line up nicely - makes it much easier to see the problem compared to dumping them all on one line.
* An easy way to tell if the last test has failed, and/or an easy way to print auxiliary information when the last test has failed. This is primarily for testing in loops, when iterating over tabulated test data.
This is addressed with trunk improvements. There are several tools introduced to help with context specification.
Such as? Docs?
* Relatively simple C++ code, with no advanced/poorly supported compiler features used. This is one library that should be usable anywhere and everywhere.
I do not believe Boost.Test uses any advanced C++ features in its core. I am looking to add a new component which might use one, but it is always going to be an extension.
* Ultra stable code. Exempting bug fixes, I'd like to see a testing library almost never change, or only change after very careful consideration, for example if a new C++ language feature requires special testing support.
A test library, like any other library, has users, bugs, feature requests, etc. It has a life of its own. It does indeed need to be maintained more carefully than other libs, but:
1. Proper component dependency helps. Your library needs to be built against a released version of the Test library (even from trunk). This way the Test library can do its own development in parallel.
The way Boost works at present is that if Trunk breaks, then so does my stuff in Trunk: as a result I can no longer see whether I'm able to merge to release or not if Boost.Test is broken on Trunk. Unfortunately this has happened to me a few times now.
2. The Test library might not need to be released that frequently (that's why I am actually holding off on releasing my changes, because there is still a chance something somewhere will break).
3. There should be a (short) period of time when the testing library in the release branch is updated. If there are a few regressions/conflicts these can be fixed. Otherwise the change is reverted.
And what I don't want:
* Breaking changes: Boost authors have absolutely no time to track breaking changes in their dependencies. Since a successful testing library would be used universally by all of Boost, this is particularly important for this library.
Again: proper component dependency. Depending on the trunk version of your dependencies is the root cause of the issue here. One library should depend on a specific released version of another library: A.deps = B:1.2.3
And again that's not how Boost testing currently works, no matter what you may wish. The issue here is that breaking changes should not be made to trunk without checking to see what else in Boost is depending on those features. As you know that didn't happen with the last major commit. Regards, John.

John Maddock wrote:
Again: proper component dependency. Depending on the trunk version of your dependencies is the root cause of the issue here. One library should depend on a specific released version of another library: A.deps = B:1.2.3
And again that's not how Boost testing currently works, no matter what you may wish.
The issue here is that breaking changes should not be made to trunk without checking to see what else in Boost is depending on those features. As you know that didn't happen with the last major commit.
One small point: I had this problem many years ago, which prompted me to (reluctantly) sever the serialization library's dependency on Boost.Test. It didn't totally solve my problem because the same issue occurred to some extent with other libraries. It was unavoidable since I was testing the serialization library with other software in the boost trunk - which by definition/custom is experimental. I realized that the real solution was to test the serialization library changes against the rest of boost on the release branch. It spared developers of prerequisite libraries from having to deal with me. Robert Ramey

Robert Ramey <ramey <at> rrsd.com> writes:
One small point: I had this problem many years ago, which prompted me to (reluctantly) sever the serialization library's dependency on Boost.Test. It didn't totally solve my problem because the same issue occurred to some extent with other libraries.
It appears that, given your approach of testing against releases, you could actually use Boost.Test if you opted to.
It was unavoidable since I was testing the serialization library with other software in the boost trunk - which by definition/custom is experimental.
I realized that the real solution was to test the serialization library changes against the rest of boost on the release branch. It spared developers of prerequisite libraries from having to deal with me.
My point exactly. Gennadiy

Gennadiy Rozental wrote:
Robert Ramey <ramey <at> rrsd.com> writes:
One small point: I had this problem many years ago, which prompted me to (reluctantly) sever the serialization library's dependency on Boost.Test. It didn't totally solve my problem because the same issue occurred to some extent with other libraries.
It appears that, given your approach of testing against releases, you could actually use Boost.Test if you opted to.
I realized this immediately when I switched my testing setup to trunk for serialization and release for everything else. And I considered going back to Boost.Test. But by that time, there was no incentive.
It was unavoidable since I was testing the serialization library with other software in the boost trunk - which by definition/custom is experimental.
I realized that the real solution was to test the serialization library changes against the rest of boost on the release branch. It spared developers of prerequisite libraries from having to deal with me.
My point exactly.
Hallelujah - so I've got one more person on board with this. That makes 3 so far - only 97 more to go!!! It's been some time since I used Boost.Test. My complaint was that it wasn't idiot-proof enough. I think this is the crux of the current complaint. Calls for "re-doing" Boost.Test are sort of naive in my opinion and don't account for the huge amount of effort it takes to make something like this. Having said that, I'm guessing that "re-factoring" and "re-doing the documentation" might be feasible and practical. This could be made easier by upgrading boost tools and practices. Stay tuned, as I will have a lot to say and demonstrate on this topic in the near future. I'm sure you can all hardly wait. Robert Ramey

John Maddock <boost.regex <at> virgin.net> writes:
* Clear separation between components (execution monitor from unit test call framework from testing macros). Ideally each would be a separate mini
Isn't it already the case?
There's duplication of source between the different component libraries (execution monitor, test monitor, unit test).
Really? Can you give an example? (The test monitor is not supported anymore, so it is out of the picture.)
Plus the headers seem to pull in a whole lot of stuff I never use
For example? Dependency is an interesting thing. For every developer asking for fewer dependencies you'll see five asking for a better user experience, so that they can include a single header and be done with it.
library if that's possible, with the executable linking against just what it needs and no more.
Isn't it already the case?
My gut feeling is that recent releases have got slower to compile and #include.
In "recent" releases Boost.Test did not change at all. In fact, because of your complaints I have not released it for about 3 years now (waiting either for boost to move to the modularized setup, or to collect all the changes in trunk so I can release them in one go and deal with whatever failures might happen all at once).
* Easy debugging: if I step into a test case in the debugger the first thing I should see is *my code*. As it is I have to step in and out of dozens of Boost.Test functions before I get to my code. This one really annoys me.
I am not sure I follow. Set a breakpoint in your test case and it will stop right there. Do you set a breakpoint on the first line of main? Or do you mean something completely different?
No, I mean that if I break on a test case and then hit "step into" in the debugger I have to step through your code before I get to mine. For example, if I break on a BOOST_CHECK_CLOSE_FRACTION and then step, I hit:
[...]
Return and step - and finally hit my code!
1. Set a breakpoint in your code.
2. Visual Studio has a mechanism for avoiding stepping into code you do not want to step into (I should probably include it (a file) in the docs).
3. There is a valid reason for this extra code: it collects some context info (with no overhead) and makes sure the code under test is executed only once (hence the forwarding call).
OK, I rechecked this, and BOOST_CHECK_CLOSE_FRACTION appears to have next to no overhead now - excellent!
* Exemplary error messages when things fail - Boost.Test has improved in this area, but IMO not enough.
Specifically?
Testing:
double a = 1;
double b = 2;
BOOST_CHECK_CLOSE_FRACTION(a, b, 0.0);
Yields:
m:/data/boost/trunk/ide/libraries/scrap/scrap.cpp(41): error: in "test_main_caller( argc, argv )": difference{0} between a{1} and b{2} exceeds 0
Leaving aside the obvious bug (the difference is not zero!!), I would have printed this as something like:
m:/data/boost/trunk/ide/libraries/scrap/scrap.cpp(41): error: in "test_main_caller( argc, argv )": difference between a and b exceeds specified tolerance with:
    a          = 1.0
    b          = 2.0
    tolerance  = 0.0
    difference = 1.0
Which I'm sure will get mangled in email, but the idea is that the values are pretty printed so they all line up nicely - makes it much easier to see the problem compared to dumping them all on one line.
There are different opinions on how the output should look. Some people prefer a multi-line detailed description, some prefer single-line consistent output. We can probably provide an easier interface for output customization, so that you can use some simple plugins in your test modules to see it the way you like.
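In the meantime, something close to John's layout can already be faked with BOOST_CHECK_MESSAGE, which accepts a streamed message (a sketch; the hand-aligned labels and the variable names are mine, not anything the library provides):

    #include <cmath>   // for std::fabs
    // ...
    double a = 1.0, b = 2.0, tolerance = 0.0;
    double difference = std::fabs( a - b );

    BOOST_CHECK_MESSAGE( difference <= tolerance,
        "difference between a and b exceeds specified tolerance with:"
        << "\n    a          = " << a
        << "\n    b          = " << b
        << "\n    tolerance  = " << tolerance
        << "\n    difference = " << difference );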
* An easy way to tell if the last test has failed, and/or an easy way to print auxiliary information when the last test has failed. This is primarily for testing in loops, when iterating over tabulated test data.
This is addressed with trunk improvements. There are several tools introduced to help with context specification.
Such as? Docs?
BOOST_TEST_INFO and BOOST_TEST_CONTEXT. There is a unit test for these in test_tools_test.cpp. Used like this:

    BOOST_TEST_INFO( "info 1" );
    BOOST_TEST_INFO( "info 2" );
    BOOST_TEST_INFO( "info 3" );
    BOOST_CHECK( false );

    BOOST_TEST_CONTEXT( "some sticky context" )
    {
        BOOST_CHECK( false );

        BOOST_TEST_INFO( "more context" );
        BOOST_CHECK( false );

        BOOST_TEST_INFO( "different subcontext" );
        BOOST_CHECK( false );
    }

The context is only reported if an error occurred. Docs are pending.
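Applied to the tabulated-data loops mentioned earlier, that might look like the following sketch (the table and the tested function are invented; it also assumes BOOST_TEST_CONTEXT accepts a streamed message - if not, a string built beforehand would do):

    #define BOOST_TEST_MODULE tabulated
    #include <boost/test/included/unit_test.hpp>
    #include <cmath>
    #include <cstddef>

    namespace {
        struct row { double input; double expected; };
        const row table[] = { { 0.25, 0.5 }, { 1.0, 1.0 }, { 4.0, 2.0 } };
    }

    BOOST_AUTO_TEST_CASE( sqrt_tabulated )
    {
        for ( std::size_t i = 0; i < sizeof(table) / sizeof(table[0]); ++i )
        {
            // On failure the report names the offending row; on success
            // the context prints nothing.
            BOOST_TEST_CONTEXT( "row " << i << ", input " << table[i].input )
            {
                BOOST_CHECK_CLOSE_FRACTION( std::sqrt( table[i].input ),
                                            table[i].expected, 1e-12 );
            }
        }
    }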
* Ultra stable code. Exempting bug fixes, I'd like to see a testing library almost never change, or only change after very careful consideration, for example if a new C++ language feature requires special testing support.
A test library, like any other library, has users, bugs, feature requests, etc. It has a life of its own. It does indeed need to be maintained more carefully than other libs, but:
1. Proper component dependency helps. Your library needs to be built against a released version of the Test library (even from trunk). This way the Test library can do its own development in parallel.
The way Boost works at present is that if Trunk breaks, then so does my stuff in Trunk: as a result I can no longer see whether I'm able to merge to release or not if Boost.Test is broken on Trunk. Unfortunately this has happened to me a few times now.
Test library aside, is your expectation that:
a) none of your dependencies ever change,
b) any change in every revision will work on every platform, or
c) your dependencies can only change when you are not doing your own development?
Again: proper component dependency. Depending on the trunk version of your dependencies is the root cause of the issue here. One library should depend on a specific released version of another library: A.deps = B:1.2.3
And again that's not how Boost testing currently works, no matter what you may wish.
Well, in this case we need to be a bit more patient when a dependent library is changing.
The issue here is that breaking changes should not be made to trunk without checking to see what else in Boost is depending on those features. As you know that didn't happen with the last major commit.
I believe you overstate the issue. If there were any breakages, these were addressed within a couple of testing cycles. Gennadiy

on Tue Sep 18 2012, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
No, I mean that if I break on a test case and then hit "step into" in the debugger I have to step through your code before I get to mine. For example, if I break on a BOOST_CHECK_CLOSE_FRACTION and then step, I hit:
[...]
Return and step - and finally hit my code!
1. Set a breakpoint in your code.
2. Visual Studio has a mechanism for avoiding stepping into code you do not want to step into (I should probably include it (a file) in the docs).
3. There is a valid reason for this extra code: it collects some context info (with no overhead) and makes sure the code under test is executed only once (hence the forwarding call).
I haven't looked at the code, but I wonder if Gennadiy could improve the situation by putting the debug break in the dtor of a return value, so you gather up all this information in the nested function calls, and then only drop into the debugger where the test macro is invoked? -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

Dave Abrahams <dave <at> boostpro.com> writes:
I haven't looked at the code, but I wonder if Gennadiy could improve the situation by putting the debug break in the dtor of a return value, so you gather up all this information in the nested function calls, and then only drop into the debugger where the test macro is invoked?
I do not follow what you are saying. What return value do you refer to? And what kind of debug break? The macro resolves to something like:

    check_function( expression under test, some context, ... )

Since function arguments are evaluated in an unspecified order, it is possible that the context-collection expressions are executed first. Gennadiy

on Fri Oct 05 2012, Gennadiy Rozenal <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
I haven't looked at the code, but I wonder if Gennadiy could improve the situation by putting the debug break in the dtor of a return value, so you gather up all this information in the nested function calls, and then only drop into the debugger where the test macro is invoked?
I do not follow what you are saying. What return value do you refer to? And what kind of debug break?
The macro resolves to something like:
check_function( expression under test, some context,... )
Since function arguments are evaluated in an unspecified order, it is possible that the context-collection expressions are executed first.
IIUC the claim is that the call stack at the point of the breakpoint looks like:

    user_function1
    user_function2
    boost.test_function1
    boost.test_function2
    boost.test_function3
    boost.test_function4
    break

I obviously don't understand the problem completely, but my suggestion was, instead of breaking in boost.test_function4, to return something from boost.test_function1 whose dtor contains a debug break. If valuable information is being collected in boost.test_functionX, store it on the heap if necessary so it can be available at the point of the debug break. Hope this is useful, but it might not be. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost
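A minimal sketch of Dave's idea (all names hypothetical - this is not Boost.Test's actual code, and a real version would first check that a debugger is attached):

    #include <csignal>
    #include <iostream>
    #include <string>

    #if defined(_MSC_VER)
    # include <intrin.h>
    # define DEBUG_BREAK() __debugbreak()
    #else
    # define DEBUG_BREAK() std::raise(SIGTRAP)  // stops under gdb/lldb
    #endif

    struct check_result
    {
        bool        passed;
        std::string message;   // context gathered by the nested calls

        ~check_result()
        {
            if ( !passed )
            {
                std::cerr << message << '\n';
                DEBUG_BREAK();   // the debugger lands here, i.e. on the
                                 // line where the macro was invoked
            }
        }
    };

    // Stands in for the nested framework calls: collect all the context
    // here, but do NOT break here.
    inline check_result check_impl( bool cond, char const* expr,
                                    char const* file, int line )
    {
        std::string msg = std::string( file ) + "(" + std::to_string( line )
                        + "): check '" + expr + "' failed";
        return check_result{ cond, msg };   // relies on copy elision
    }

    // The temporary's destructor runs at the end of this full expression,
    // i.e. in the user's test case, not inside the framework.
    #define MY_CHECK( expr ) (void)check_impl( (expr), #expr, __FILE__, __LINE__ )

    int main()
    {
        MY_CHECK( 1 + 1 == 2 );   // passes; destructor does nothing
        MY_CHECK( 1 + 1 == 3 );   // fails; break fires on this line
    }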

On Mon, 17 Sep 2012 16:40:31 +0200, Dave Abrahams <dave@boostpro.com> wrote:
[...]I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do. (I seriously thought about volunteering for this myself, but realistically speaking, I don't have the time, and volunteering for something you can't actually do is worse than not volunteering at all.) Even if volunteers show up, I'd suggest proceeding with the plan above, subject to reversal at any time the work actually gets done.
Thoughts?
I used Boost.Test for years until I stumbled upon Google Test (<http://code.google.com/p/googletest/>) and Google Mock (<http://code.google.com/p/googlemock/>). I switched to those frameworks about two years ago, mainly because of Google Mock (Boost.Test is very similar to Google Test, but there is nothing in Boost comparable to Google Mock). While I think it's possible to use Boost.Test with Google Mock, I felt that the Google test frameworks gave an overall better impression than Boost.Test. Now I only use Boost.Test for the Boost libraries; if I had a choice I'd prefer the Google test frameworks. Even if someone spends a lot of time updating Boost.Test and maybe adding a mocking library, I'm not sure whether that would be enough to make me switch back from the Google test frameworks. From my point of view it would make more sense to spend Boost's resources somewhere other than re-creating the kind of up-to-date and full-featured C++ testing framework which already exists (but then of course people decide for themselves what they'd like to work on :). Boris

On 17/09/12 15:40, Dave Abrahams wrote:
Hi All,
I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess. It contains loads of features that are exercised in the examples/ directory but neither included in any of the tests nor documented. There are facilities for command-line argument parsing! There are "decorators" that turn on/off features for test cases. There is support for mock objects! These are cool and sometimes necessary features, but who knew? The third tutorial page (http://www.boost.org/doc/libs/1_51_0/libs/test/doc/html/tutorials/new-year-r...) has a glaring typo in the code examples: "BOOST_AUTO_EST_CASE". There's no reference manual at all. There are nearly-identical files in the examples/ directory called "est_example1.cpp" and "test_example1.cpp" (Did the "t" key on someone's keyboard break?) I could go on, but where would I stop?
Yes, I agree there are some documentation issues, but addressing these is, I am sure, a much lesser effort than the suggestions that follow.
I don't know what to do about this. Because of the lack of redundancy (i.e. tests and documentation), it's hard to tell whether this library is correct or even to define what "correct" should mean. It seems like, as long as the code is incompletely / incorrectly documented and tested, it's just someone's personal coding project that we happen to keep shipping with Boost, and not really a library for general use. This situation reflects poorly on Boost as a whole and the fact that it centers around a _testing_ library, which is concerned with robustness... well, let's just say that the irony isn't lost on me.
FWIW - we've been using Boost.Test for years for testing production code in an unforgiving environment. I don't see a better option out there (and yes, I am aware of the alternatives). We have recently created a few local patches, which we'll push back out shortly for hopeful inclusion in Boost.Test, that facilitate better test output generation.
I don't mean this posting as an attack on Gennadiy in any way, but I think the situation is unacceptable and therefore am opening a discussion about what should happen.
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release
I cannot agree this is the correct approach - I'd rather we all put effort into tidying up the docs and helping Gennadiy out. I'm sure he'd be receptive to anything that helps make this important library better.
- Its documentation, such as it is, is removed from the release after that
-1
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
Now that's a huge effort, much greater than simply addressing documentation or feature issues.
- The code is removed from Boost thereafter
-1
I am not at all attached to removing Boost.Test from Boost, but IMO rescuing it would require a significant new investment of time and energy from people who are committed to bringing the library up to par with the rest of what we do. (I seriously thought about volunteering for this myself, but realistically speaking, I don't have the time, and volunteering for something you can't actually do is worse than not volunteering at all.) Even if volunteers show up, I'd suggest proceeding with the plan above, subject to reversal at any time the work actually gets done.
Let me put it this way. Boost.Test works and it is very good at what it does. It isn't perfect, but I don't know of a test library that is. We are lucky to have an active and dedicated maintainer of Boost.Test. I'd much rather effort was invested in trying to address issues that may exist with how Boost.Test is maintained and updated, to minimise breakages, given the special importance of this library. I mean - the source code is there - it's not the prettiest perhaps, but that can be said for quite a few libraries in Boost.

If someone really, really thinks they can do a better job then please, by all means, write a test library for boost, put it up for review and try to get it accepted. Boost needs a test library now and it has one. If we end up with another in the future and people prefer it, sure, we could look at migrating to that, but until then I think we should do our best to improve what is there. That's a two-way street of course, and I'm sure Gennadiy will be responsive to any efforts to help him with that - though I can't of course speak for him, I'm hoping he will be. Given his responses to this thread so far I'm positive. Jamie
Thoughts?

on Tue Sep 18 2012, Jamie Allsop <ja11sop-AT-yahoo.co.uk> wrote:
On 17/09/12 15:40, Dave Abrahams wrote:
[...]
Yes, I agree there are some documentation issues, but addressing these is, I am sure, a much lesser effort than the suggestions that follow.
Really? Which ones in particular do you think would be harder than fixing Boost.Test's documentation, which has been in essentially this state for years without substantial improvement?
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release
I cannot agree this is the correct approach - I'd rather we all put effort into tidying up the docs and helping Gennadiy out. I'm sure he'd be receptive to anything that helps make this important library better.
If "we all" doesn't have to include me, I'm all for it. I personally have too many other projects to work on. Rescue is the preferred option if it's possible. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

- Boost.Test is officially deprecated in the next release

That's sort of too fast and reckless. I agree that there are issues in Boost.Test, but I'd suggest delaying this at least until there is a good replacement in Boost. The point is, if it happens this way and Boost has no test framework for the next 5-10 releases until a replacement is implemented, folks will completely migrate to other test frameworks and never come back, so the replacement itself will become pointless. It would be better to stay patient for some time and keep working on it.
In my opinion, Boost needs a good testing library. Whether the current Boost.Test qualifies as such or not is a valid question, but it is better than not having any. While there is no viable replacement for Boost.Test, the library should be retained at least for the sake of backward compatibility. When the replacement appears, we can declare a deprecation period to allow users (and Boost developers) to port their tests. IMHO.

Absolutely agree.
============================== Never lose heart! Regards, Alexander Stoyan http://alexander-stoyan.blogspot.com

Hi, just to give my 0.02 CHF. I was trying to teach unit testing in C++ years ago and found Boost.Test unteachable then, and Google Test too much 90s-style C++. Therefore I created a modern library that was simple and easy to teach, but lacking many features you might expect from a testing library. It is called CUTE (C++ Unit Testing Easier) and comes with an accompanying Eclipse CDT plug-in at http://cute-test.com. However, CUTE itself is header-only and works independently of Eclipse. (CUTE has tests for its functionality, but not always as nice as I'd like them to be. The tests aren't yet publicly available.)

In the last year one of my students also created a very simple mock object library that likewise comes within an Eclipse plug-in. That plug-in, however, helps heavily with getting existing code bases under test, by providing refactorings toward seams (see M. Feathers, "Working Effectively with Legacy Code") and generating test-stub and mock-class frames. (-> http://mockator.com)

There are other modern C++ unit testing frameworks around. But I care enough about unit testing to have the following potential (alternative) ideas:

0. release CUTE under the Boost license, including its test cases
1. extend CUTE and its plug-in to support more of the features that users like about Boost.Test (if any) that CUTE doesn't support in its current form
2. provide Eclipse CDT-based tool support for semi-automatically migrating existing test cases from Boost.Test to CUTE (someone might volunteer for a tool using libclang), at least for those tests that do not rely on Boost.Test's "advanced" features

For all of this, even the libclang approach, I might have students do some of the work (for free), but that might take its time and the quality might not be good enough. (Sponsorship would allow doing it with employees :-)

I might be asking too much of you people to abandon Boost.Test in favour of something much simpler and less powerful. Maybe one can accept it as an alternative, and I would love to have Boost as a home for CUTE, if that isn't blasphemy. Regards Peter.
-- Prof. Peter Sommerlad Institut für Software: Bessere Software - Einfach, Schneller! HSR Hochschule für Technik Rapperswil Oberseestr 10, Postfach 1475, CH-8640 Rapperswil http://ifs.hsr.ch http://cute-test.com http://linticator.com http://includator.com tel:+41 55 222 49 84 == mobile:+41 79 432 23 32 fax:+41 55 222 46 29 == mailto:peter.sommerlad@hsr.ch

On 18/09/2012 17:41, Peter Sommerlad wrote:
Hi,
just to give my 0.02CHF.
I was trying to teach unit testing in C++ years ago and found Boost.Test unteachable then and Google-Test to be too much 90s style C++.
Hello, could you please expand a little on what the problems with teaching Boost.Test were? Of the different unit testing libs I've seen, Boost.Test seemed to me to be the easiest to teach, especially to people learning the language, who therefore have no knowledge of classes, collections... Thank you, --- Loïc Joly

Peter Sommerlad <peter.sommerlad <at> hsr.ch> writes:
Hi,
just to give my 0.02CHF.
I was trying to teach unit testing in C++ years ago and found Boost.Test unteachable then and Google-Test to be too much 90s style C++. Therefore, I created a modern library that was simple and easy to teach, but
I appreciate that you obviously prefer your own library, but before we start attaching labels, what exactly do you mean by Boost.Test being unteachable? Gennadiy

on Tue Sep 18 2012, Peter Sommerlad <peter.sommerlad-AT-hsr.ch> wrote:
I was trying to teach unit testing in C++ years ago and found Boost.Test unteachable
I should point out that Peter and I had no communication about Boost.Test before this conversation, and we reached the same conclusion. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

Dave Abrahams <dave <at> boostpro.com> writes:
on Tue Sep 18 2012, Peter Sommerlad <peter.sommerlad-AT-hsr.ch> wrote:
I was trying to teach unit testing in C++ years ago and found Boost.Test unteachable
I should point out that Peter and I had no communication about Boost.Test before this conversation, and we reached the same conclusion.
As it stands, neither you nor Peter has presented any specific problems with teachability or learnability; you just hide behind long pretty words. In fact, a number of people even in this thread expressed a different opinion, and in my experience Boost.Test has a pretty low entry threshold. There were complaints about some advanced features, but very few people had trouble learning the basic interfaces. Gennadiy

On 09/17/2012 08:40 AM, Dave Abrahams wrote:
Hi All,
[...]
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release - Its documentation, such as it is, is removed from the release after that - Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism - The code is removed from Boost thereafter
Please no. I understand your reasoning, and agree with many critiques of the library. However, we've been using Boost.Test for years now (I think we were one of the early adopters) and have hundreds of test suites with thousands of test cases. Changing to a different unit test system is simply not possible right now (nor, I suspect, in the near future); we're too heavily invested, and we're extremely developer-limited right now. Like it or not, Boost.Test is part of Boost, and we (for one) are relying on it to stay that way. I don't think you can deprecate it until you've got a replacement *and* a halfway-decent migration path. (Granted, that replacement might be in a different library.)

Yes, Boost.Test has its blemishes. Most of the big problems have already been mentioned -- mainly the documentation, also the interface changes (but those were a long time ago). IIRC, four or five years ago Gennadiy and I agreed to disagree about the floating point comparison macros. We've added our share of customizations and tweaks. But we like the library very much. It does what we need it to do, and at this point we don't think about it much -- which is exactly what you want your unit test library to be like.

If someone is going to write a replacement, we'd be very interested in providing ideas, feedback, and possibly even programmer hours. But please don't take it away without having something else to take its place. (If that 'something else' is in a completely different library, e.g. googletest, fine -- but Boost.Test users still need a good migration path.) If the biggest problem is documentation, it seems that fixing the documentation, or even living with bad documentation, is a lesser evil (or at least less extreme) than throwing it out completely. -- Dave Steffen Software Engineer NUMERICA CORPORATION www.numerica.us (970) 461-2000 main (970) 612-2327 direct

On 09/17/2012 04:40 PM, Dave Abrahams wrote:
I don't mean this posting as an attack on Gennadiy in any way, but I think the situation is unacceptable and therefore am opening a discussion about what should happen.
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release - Its documentation, such as it is, is removed from the release after that - Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism - The code is removed from Boost thereafter
Isn't Boost.Test's only problem the fact that other Boost libraries are tested against the trunk rather than the release version? Isn't that something the modularized Boost is supposed to fix?

Mathias Gaunard wrote:
Isn't Boost.Test's only problem the fact that other Boost libraries are tested against the trunk rather than the release version?
Isn't that something the modularized Boost is supposed to fix?
There's no need to wait for "modularized Boost" to start doing this. This could be done now with relatively modest (though of course tricky) adjustments in the test script. Robert Ramey

On 19/09/12 01:35, Robert Ramey wrote:
Mathias Gaunard wrote:
Isn't Boost.Test's only problem the fact that other Boost libraries are tested against the trunk rather than the release version?
Isn't that something the modularized Boost is supposed to fix?

There's no need to wait for "modularized Boost" to start doing this. This could be done now with relatively modest (though of course tricky) adjustments in the test script.
Hi Robert, IIUC your testing strategy has some advantages, but it also has some liabilities:
* every developer should use the same strategy,
* the trunk will surely be broken, as no one is testing with the trunk of the other libraries,
* it delays the day conflicts are evident to the day the developer merges to release, as no one has tested with the new changes.

I think the authors of the Boost libraries should try to avoid the introduction of breaking changes. Breaking changes should be managed using versions and deprecation periods. If this were the case, new features could be added in trunk without any problem, and the authors could have enough time to move to the new breaking ones. Even when we have a modularized Boost, we should manage breaking changes in the same way (versions and deprecation periods). Of course we are all subject to making some errors, but at least we should have in mind the consequence of unexpected breaking changes. Best, Vicente

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
Hi Robert,
IIUC your testing strategy has some advantages, but it also has some liabilities:
* every developer should use the same strategy,
This is not the case, I believe. You can opt to use whatever strategy you want. If you develop library A and it depends on library B, you can:

* depend on the trunk version of B: A.deps = B:TRUNK. This way you get notified as soon as any changes in the dependent library occur and, more importantly, if they cause a conflict with A.

* depend on the latest released version of B: A.deps = B:LATEST. This way you learn that something breaks as soon as B is released, and you and/or library B's developer can arrange for correct resolution at that time.

* depend on a specific version of B: A.deps = B:1.50. This way you never need to worry about B's changes, but you will get in trouble when a new version of B is released: the released version of A is then not usable anymore with the released boost package. We can use this mode for deprecated libraries which we may not want to remove from the release immediately.
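Spelled out as a per-library manifest, the three modes might look like this (a purely hypothetical format - no such tool exists in Boost today):

    # A/deps -- hypothetical dependency manifest for library A
    B = TRUNK     # notified of B's changes (and conflicts) immediately
    C = LATEST    # re-checked each time C makes a release
    D = 1.50      # pinned: immune to D's development, but can go stale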
* the trunk will surely be broken, as no one is testing with the trunk of the other libraries,
There will not be a "boost trunk". Each library has its own trunk. And each of these trunks will work/test fine if the dependency rules are set correctly.
* it delays the day conflicts are evident to the day the developer merges to release, as no one has tested with the new changes.
This is not necessarily a bad thing:

* It allows conflict resolution to be delayed to the time when B is stable and the only thing which remains unresolved is the conflict with A.

* It does not require library A and B developers to be in sync time-wise. There are two scenarios:

= Either library B's developer, at the time when he or she is ready to make a release, runs a full test of boost with B's LATEST replaced with TRUNK (we need to have a way to do this), finds out about all conflicts, and resolves them. After that B can be released and we should not expect many problems (aside from potential race conditions).

= Or library A's developer wants to use the trunk version of B. He or she runs the tests against B:TRUNK, finds out all the conflicts, and resolves them either by changing A's code or by communicating the issue to library B's developer and fixing it there. After that B needs to go through the step above to ensure that there are no other conflicts (unless A is the only user of B); this can be initiated by library A's developer. B is released and A can now depend on B:LATEST. Alternatively A's developer can encourage B's developer to make a release (see above) and meanwhile continue to develop against the trunk version of B, by setting a dependency on B:TRUNK.
I think the authors of the Boost libraries should try to avoid the introduction of breaking changes. Breaking changes should be managed
I believe the correct statement is that a boost author should avoid *releasing* breaking changes. Sometimes one needs to run a test with some potentially breaking changes (even just to be able to see whether they are breaking or not, or because these are new interfaces and backward compatibility is not implemented yet, or because one wants to create a special version which is intended to be used by some other library C) - and trunk is the place for that.
using versions and deprecation periods. If this were the case, new features could be added in trunk without any problem, and the authors could have enough time to move to the new breaking ones. Even when we have a modularized Boost, we should manage breaking changes in the same way (versions and deprecation periods).
I do not believe breaking changes are such a big/widespread problem among boost libraries. BAU platform differences, errors, and small oversights are much more common. Regards, Gennadiy

Vicente J. Botet Escriba wrote:
On 19/09/12 01:35, Robert Ramey wrote:
Mathias Gaunard wrote:
Isn't Boost.Test's only problem the fact that other Boost libraries are tested against the trunk rather than the release version?
Isn't that something the modularized Boost is supposed to fix?

There's no need to wait for "modularized Boost" to start doing this. This could be done now with relatively modest (though of course tricky) adjustments in the test script.
Hi Robert,
IIUC your testing strategy has some advantages, but it also has some liabilities:
* every developer should use the same strategy,
nope: I've been testing against the release versions of other libraries for years. MUCH less of a problem than testing against the trunk/experimental version.
* the trunk will surely be broken, as no one is testing with the trunk of the other libraries,
* it delays the day conflicts are evident to the day the developer merges to release, as no one has tested with the new changes.
This presumes that using other libraries to test one's own library is a useful strategy. Discovering a bug this way puts the burden of tracking it down on other library authors. This discourages them from using other libraries and sometimes makes them want to "roll their own" (as I and others have for boost test).
I think the authors of the Boost libraries should try to avoid the introduction of breaking changes. Breaking changes should be managed using versions and deprecation periods.
we're all agreed on that.
If this were the case, new features could be added in trunk without any problem, and the authors could have enough time to move to the new breaking ones.
Nope.
a) Suppose I'm the author of Boost Test.
b) A user requests a new feature or bug fix.
c) I make the change on my machine and test it - looks good.
d) I check into the trunk.
e) It turns out that the changes have a subtle error which shows up in half the compilers/OSes tested.
f) Now all the libraries using boost test start to fail their tests on half their platforms.
g) Fifty developers now have to check their code for errors and track the problem down to boost test.
h) Finally it gets sorted out, and the author of boost test is on the hot seat for hanging up all of boost.

This is a common scenario which has happened on more than one occasion. Now the author of boost test has learned his lesson. He doesn't make any but the most minor changes. If he makes a minor change, he has to go to extraordinary lengths to avoid the above. He can't de-couple his work from the rest of boost and work at his own pace. Of course the same applies to boost tools as well.
Even when we have a modularized Boost, we should manage breaking changes in the same way (versions and deprecation periods).
no question.
Of course we are all subject to making some errors, but at least we should have in mind the consequence of unexpected breaking changes.
of course. I urge anyone who doubts my position on this to try a simple experiment:

a) Start with the release version of boost on your local machine.
b) Switch your library's header and lib directories to the trunk via SVN.
c) Run bjam for your libraries.

It works great! And it's much faster, since you only have to sync the release branch once in a while rather than every day. If anyone tries this out and doesn't see the value in it, I would like to hear about it. Robert Ramey
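Robert's experiment, concretely, might look something like this (the URLs reflect the Boost SVN layout; the library name and the bjam invocation are illustrative):

    # one-time: check out the release branch
    svn checkout http://svn.boost.org/svn/boost/branches/release boost
    cd boost

    # point just your own library at trunk; everything else stays on release
    svn switch http://svn.boost.org/svn/boost/trunk/boost/serialization boost/serialization
    svn switch http://svn.boost.org/svn/boost/trunk/libs/serialization libs/serialization

    # run your library's tests against the released versions of its dependencies
    cd libs/serialization/test
    bjam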

On 19/09/12 08:08, Robert Ramey wrote:
Vicente J. Botet Escriba wrote:
On 19/09/12 01:35, Robert Ramey wrote:
Mathias Gaunard wrote:
Isn't Boost.Test's only problem the fact that other Boost libraries are tested against the trunk rather than the release version?
Isn't that something the modularized Boost is supposed to fix?

There's no need to wait for "modularized Boost" to start doing this. This could be done now with relatively modest (though of course tricky) adjustments in the test script.
Hi Robert,
IIUC your testing strategy has some advantages, but it also has some liabilities:
* every developer should use the same strategy,

nope: I've been testing against the release versions of other libraries for years. MUCH less of a problem than testing against the trunk/experimental version.

See below my comment on your point.
If this were the case, new features could be added in trunk without any problem, and the authors could have enough time to move to the new breaking ones.

Nope.
a) Suppose I'm the author of Boost Test.
b) A user requests a new feature or bug fix.
c) I make the change on my machine and test it - looks good.

With which compilers? I would expect authors to check with the several compilers that are freely available.

d) I check into the trunk.
e) It turns out that the changes have a subtle error which shows up in half the compilers/OSes tested.

Yes, this could occur, but less often if the author tests with several compilers.

f) Now all the libraries using boost test start to fail their tests on half their platforms.
g) Fifty developers now have to check their code for errors and track the problem down to boost test.
h) Finally it gets sorted out, and the author of boost test is on the hot seat for hanging up all of boost.

The impact of such a situation could be minimized if new features are added subject to a new version compilation flag. I'm not saying I'm doing it every time, but it reduces dependencies.
The impact could also be minimized if the author makes small commits, as rollback is then easier.
This is a common scenario which has happened on more than one occasion.
Now the author of boost test has learned his lesson. He doesn't make any but the most minor changes. If he makes a minor change, he has to go to extraordinary lengths to avoid the above. He can't de-couple his work from the rest of boost and work at his own pace.
Of course the same applies to boost tools as well.
Of course we are all subject to making some errors, but at least we should have in mind the consequence of unexpected breaking changes.

of course.
I urge anyone who doubts my position on this to try a simple experiment.
Don't misunderstand me. I have no doubts.
a) Start with the release version of boost on your local machine.
b) Switch your library's header and lib directories to the trunk via SVN.
c) Run bjam for your libraries.
d) Do some modifications and test.
e) Switch to trunk.
f) Verify that everything is OK.
g) Commit.

If you test using the release of the other libraries only, the trunk could be broken more easily. This is why I said that every author should use the same strategy.
It works great!
And it's much faster since you only have to sync the release branch once in a while rather than every day.
I guess you will need to sync at least every time you commit, won't you? Best, Vicente

Vicente J. Botet Escriba wrote:
On 19/09/12 08:08, Robert Ramey wrote:
I urge anyone who doubts my position on this to try a simple experiment.

Don't misunderstand me. I have no doubts.
a) Start with the release version of boost on your local machine.
b) Switch your library's header and lib directories to the trunk via SVN.
c) Run bjam for your libraries.
d) Do some modifications and test.
e) Switch to trunk.
f) Verify that everything is OK.
g) Commit.
If you test using the release of the other libraries only, the trunk could be broken more easily.
I don't believe this. I might believe that testing against the trunk might help detect bugs in other developers' libraries. BUT, as I've said before, this imposes huge costs on all the other developers and is not efficient in any case. If a developer is depending on the tests of everyone else's libraries to catch his own bugs, he should just write more tests.
This is why I said that every author should use the same strategy.
I understand the point - I just disagree with it. It's a grossly inefficient way to detect bugs and test software.
It works great!
And it's much faster since you only have to sync the release branch once in a while rather than every day.
I guess you will need to sync at least every time you commit, won't you?
If you've got nothing else to do you could sync with the release every time you run tests. In theory you should be doing that now with the trunk on your own machine. I know no one is doing that since it takes forever. In practice - since the release branch changes much less frequently and the interface on the release branch hardly ever changes - I just sync once in a while, basically when Boost emits a new release. When you do sync, it's much faster since the release branch changes a lot less. Robert Ramey

On Sep 18, 2012 11:35 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
There's no need to wait for "modularized Boost" to start doing this. This could be done now with relatively modest (though of course tricky) adjustments in the test script.
I think this is a case of "patches welcome".

Daniel James wrote:
On Sep 18, 2012 11:35 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
There's no need to wait for "modularized Boost" to start doing this. This could be done now with relatively modest (though of course tricky) adjustments in the test script.
I think this is a case of "patches welcome".
lol - no good deed goes unpunished. So, you're agreeing with my suggestion? That would make 4 on board so far. Robert Ramey

on Tue Sep 18 2012, Mathias Gaunard <mathias.gaunard-AT-ens-lyon.org> wrote:
On 09/17/2012 04:40 PM, Dave Abrahams wrote:
I don't mean this posting as an attack on Gennadiy in any way, but I think the situation is unacceptable and therefore am opening a discussion about what should happen.
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release - Its documentation, such as it is, is removed from the release after that - Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism - The code is removed from Boost thereafter
Isn't Boost.Test's only problem the fact that other Boost libraries are tested against the trunk rather than the release version?
No; for me it's about teachability, learnability, and certainty.
Isn't that something the modularized Boost is supposed to fix?
Yes. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

On 17.09.2012 16:40, Dave Abrahams wrote:
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release
-1
- Its documentation, such as it is, is removed from the release after that -1
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism +1
- The code is removed from Boost thereafter -1
If deprecation and removal are considered, the first step should be implementing "a different mechanism" and clear guidance on how to port, potentially via an automatic tool. There is a lot of code out there that would break if Boost.Test were removed. For our code base, this would essentially lock us into the last version that still has Boost.Test. Should we be forced to rewrite all the tests, this would mean a huge amount of money invested for no improvement.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Dave Abrahams Sent: Monday, September 17, 2012 3:41 PM To: boost; Gennadiy Rozental Subject: [boost] What Should we do About Boost.Test?
I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess.
I've always thought this and, in retrospect, I'm surprised it was accepted at review. I found the design and documentation bewildering (and Gennadiy obviously found my bewilderment bewildering!). I disliked the MACROphilia. But expectations were lower then, and we very badly needed a test system.

And Boost.Test has proved immensely useful - our joint investment in testing with it is so massive that I think we are stuck with it for many years hence. I would go so far as to say that the success of Boost is partly due to its testing system (including, of course, the battery of testers). For many people, the documentation can be ignored in favour of copying someone else's examples, and most often, IT JUST WORKS. So the idea that we can just deprecate it is madness. It seems significant to me that nobody has strongly claimed to have a better system.

I'd like some changes (the output layout is annoying), but I accept that even minor modifications to any testing system are almost certain to cause trouble (as I have found to my, and others', cost for an apparently trivial change to Boost.Test output). As also with bjam/b2, we need a better mechanism to flag up loudly that a change has been made that may well cause trouble, and to fix it pronto.

I would prefer a Boost.Test2 that was much more lightweight and preferably header-only. And I think that any replacement will fail to catch on unless it has some automation of the upgrade path. The prospect of changing all BOOST_CHECK_* macros appals me.

I'm willing to help with documentation (having 'mastered' the Quickbook toolchain for Boost.Math etc) but I don't think that is really the main issue.

My 2p. Paul

--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
I'd like some changes (output layout is annoying),
What about it? And what changes?
I would prefer a Boost.Test2 that was much more lightweight and preferably header-only.
All these statements about "lightweight" make me wonder:

* What exactly in your opinion makes Boost.Test not "lightweight"?
* What exactly is wrong with Boost.Test's header-only solution?
* What exactly would you throw out to make it more lightweight?
I'm willing to help with documentation (having 'mastered' the Quickbook toolchain for Boost.Math etc) but I don't think that is really the main issue.
Will Quickbook be able to produce the same output that the current BoostBook files do? Gennadiy

Gennadiy, On Sep 27, 2012, at 4:56 PM, Gennadiy Rozental wrote:
Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
I would prefer a Boost.Test2 that was much more lightweight and preferably header-only.
All these statements about "lightweight" makes me wonder:
* What exactly in your opinion makes Boost.Test not "lightweight"?
* What exactly is wrong with Boost.Test's header-only solution?
* What exactly would you throw out to make it more lightweight?
I can't speak for Paul, of course, but I would like it if I could build Boost.Test libraries (libboost_unit_test_framework*) without the monitor stuff (libboost_test_exec_monitor* and libboost_prg_exec_monitor*). Thanks, Ian

Ian Emmons <iemmons <at> bbn.com> writes:
I can't speak for Paul, of course, but I would like it if I could build Boost.Test libraries (libboost_unit_test_framework*) without the monitor stuff (libboost_test_exec_monitor* and libboost_prg_exec_monitor*).
Is this a joke? The former was deprecated long ago and the latter has nothing to do with unit testing. And neither one has anything to do with the unit test framework. Gennadiy

On Fri, 28 Sep 2012 04:24:53 +0000 (UTC) Gennadiy Rozenal <rogeeff@gmail.com> wrote:
Ian Emmons <iemmons <at> bbn.com> writes:
I can't speak for Paul, of course, but I would like it if I could build Boost.Test libraries (libboost_unit_test_framework*) without the monitor stuff (libboost_test_exec_monitor* and libboost_prg_exec_monitor*).
Is this a joke? The former was deprecated long ago and the latter has nothing to do with unit testing. And neither one has anything to do with the unit test framework. The point here is that both libs are built by ./b2 --with-test. And considering what you said above, it would be nice if they were considered a separate lib for purposes of --with-whatever..

Sergey Popov <loonycyborg <at> gmail.com> writes:
I would like it if I could build Boost.Test libraries (libboost_unit_test_framework*) without the monitor stuff (libboost_test_exec_monitor* and libboost_prg_exec_monitor*).
Is this a joke? The former was deprecated long ago and the latter has nothing to do with unit testing. And neither one has anything to do with the unit test framework. The point here is that both libs are built by ./b2 --with-test. And considering what you said above, it would be nice if they were considered a separate lib for purposes of --with-whatever..
I know nothing about b2 or with-test, but whoever is responsible, feel free to stop building these (especially the test execution monitor). Gennadiy

Hi Gennadiy, On Friday, 28. September 2012 23:54:41 Gennadiy Rozental wrote:
Sergey Popov <loonycyborg <at> gmail.com> writes:
I would like it if I could build Boost.Test libraries (libboost_unit_test_framework*) without the monitor stuff (libboost_test_exec_monitor* and libboost_prg_exec_monitor*).
Is this a joke? The former was deprecated long ago and the latter has nothing to do with unit testing. And neither one has anything to do with the unit test framework.
That is news to me.
The point here that both libs are built by ./b2 --with-test. And considering what you said above it would be nice if they were considered a separate lib for purposes of --with-whatever..
I know nothing about b2 or with-test, but whoever is responsible, feel free to stop building these (especially the test execution monitor).
That is easy - see the attached patch. Okay to commit? Should this be forced into 1.52? This is only possible if those two are officially deprecated; otherwise we have to add a deprecation notice for now. Any other files which need removal? Yours, Jürgen -- * Dipl.-Math. Jürgen Hunold ! * voice: ++49 4257 300 ! Fährstraße 1 * fax : ++49 4257 300 ! 31609 Balge/Sebbenhausen * jhunold@gmx.eu ! Germany

Jürgen Hunold <jhunold <at> gmx.eu> writes:
That is news to me.
Which part? The test_exec_monitor deprecation was announced quite loudly (was it a year ago?). prg_exec_monitor has nothing to do with testing, which is very clear from the documentation.
The point here that both libs are built by ./b2 --with-test. And considering what you said above it would be nice if they were considered a separate lib for purposes of --with-whatever..
I know nothing about b2 or with-test, but whoever is responsible, feel free to stop building these (especially the test execution monitor).
That is easy, see the attached patch. Okay to commit?
No. I do need the Jamfile to build prg_exec_monitor, and test_exec_monitor was never properly cleaned up from other QLib libs. I would do it myself (it is a very straightforward exercise to replace its usage with the unit test framework), but I never seem to have time to do this. That said, the unit test framework does not require either one of them, and I do not agree with claims stating otherwise. The b2 --with-test argument is out of my control, and frankly I do not know where and why one uses it.
Should this be forced into 1.52 ? This is only possible if those two are officially deprecated. Else we have to add deprecation for now.
test_exec_monitor IS officially deprecated. prg_exec_monitor is not, but it has nothing to do with testing.

AMDG On 10/04/2012 12:11 AM, Gennadiy Rozental wrote:
Jürgen Hunold <jhunold <at> gmx.eu> writes:
I know nothing about b2 or with-test, but whoever is responsible, feel free to stop building these (especially the test execution monitor).
That is easy, see the attached patch. Okay to commit?
No. I do need the Jamfile to build prg_exec_monitor, and test_exec_monitor was never properly cleaned up from other QLib libs. I would do it myself (it is a very straightforward exercise to replace its usage with the unit test framework), but I never seem to have time to do this. That said, the unit test framework does not require either one of them, and I do not agree with claims stating otherwise. The b2 --with-test argument is out of my control, and frankly I do not know where and why one uses it.
--with-test is part of the user-level build process. prg_exec_monitor does need to be built, since it is a documented component. It could, however, be split out into a separate --with-xxx option. (--with-prg_exec_monitor?) In Christ, Steven Watanabe

Steven Watanabe <watanabesj <at> gmail.com> writes:
--with-test is part of the user-level build process. prg_exec_monitor does need to be built, since it is a documented component. It could, however, be split out into a separate --with-xxx option. (--with-prg_exec_monitor?)
That's fine by me. Gennadiy

On Thu, Sep 27, 2012 at 1:43 PM, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
For many people, the documentation can be ignored in favour of copying someone else's examples, and most often, IT JUST WORKS.
So the idea that we can just deprecate it is madness.
Second that. We are also using it extensively because:
It seems significant to me that nobody has strongly claimed to have a better system.
It simply is one of the best testing frameworks out there, and whatever it might lack in being the very best, it makes up for by conveniently being part of Boost, which you use anyway. It saves the trouble of pulling in another third-party dependency just for testing. You already have it because, of course, you are using Boost.
As also with bjam/b2, we need a better mechanism to flag up loudly that a change has been made that may well cause trouble, and to fix it pronto.
nuff said. Cheers, Stephan

On 29/09/2012 9:46 AM, Stephan Menzel wrote:
It simply is one of the best testing frameworks out there, and whatever it might lack in being the very best, it makes up for by conveniently being part of Boost, which you use anyway. It saves the trouble of pulling in another third-party dependency just for testing. You already have it because, of course, you are using Boost.
I just want to say that after I read the thread and heard about Google Test/Mock, I wanted to check them out. It was so easy to add Google Test to the build. It literally took me 10 minutes:

1. Unzip Google Mock (which includes Google Test)
2. Add g*-all.cc to the build
3. Wonder why it doesn't compile
4. Add proper include paths
5. Compiles and links
6. Write first test

I'm not sure if it's as easy to do this with Boost.Test because I can't approach it as a newbie. Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful. You can't fault Boost.Test for this, though, as Google apparently has a paid team to work on this as part of their "software infrastructure" team or something similar. But their documentation is a little meh. Sohail
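For reference, the "first test" in step 6 can be as small as this (a sketch using the standard Google Test entry points):

    #include <gtest/gtest.h>

    // A trivial first test; the TEST macro registers it with the framework.
    TEST(FirstTest, AdditionWorks)
    {
        EXPECT_EQ(4, 2 + 2);
    }

    // Linking gtest_main provides this for you, but writing it out shows
    // there is no further setup involved.
    int main(int argc, char** argv)
    {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }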

Sohail Somani <sohail <at> taggedtype.net> writes:
Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful.
Frankly, I can't see what the fuss is all about. The approach taken by Boost.Test is marginally better in my opinion. Mocks are deterministic, and a test case should not need to spell out expectations. Writing mocks is just as easy. You can see an example here: .../libs/test/example/logged_exp_example.cpp There is potential for some improvement, but it is already better than anything else I know (IMO, obviously). Gennadiy

On 29/09/2012 20:31, Gennadiy Rozental wrote:
Sohail Somani <sohail <at> taggedtype.net> writes:
Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful. Frankly, I can't see what the fuss is all about. The approach taken by Boost.Test is marginally better in my opinion. Mocks are deterministic, and a test case should not need to spell out expectations. Writing mocks is just as easy. You can see an example here:
.../libs/test/example/logged_exp_example.cpp
There is a potential for some improvement, but it is already better than anything else I know (IMO obviously).
Hi Gennadiy,

I'm a bit puzzled by the kitchen_robot example (and not just because it grills chicken without any chicken! :p). MockMicrowave::get_max_power looks hard-coded to return a value of 1000, so to me this looks like a stub rather than a mock object, or am I missing something? How would you test that the robot calls set_power_level properly with respect to different max power values? What if the max power could change at any time and you would like to 'program' the mock object to test the robot (first time return 1000, second time return 2000, etc.)? And what about a malfunctioning oven which would throw exceptions? Would you write a new MockMicrowave implementation for every test case?

Then there seems to be some kind of trace logging involved which, if I understand correctly, can be seen as describing the expectations a posteriori. From my understanding, this logs on the first run, then reloads the expectations and uses them as a base to validate new runs. I see a number of problems with this (besides the serialization requirement on arguments to be checked), the major one being that it does not allow TDD.

Also, another common usage of mock objects in tests is to document how the object under test reacts to the outside world. Moving the expectations outside the test makes this difficult (although not impossible; I suppose the test cases and expectation log files could be post-processed to produce sequence diagrams or something similar).

Actually we (my team and company) have attempted this approach in the past. It works nicely on small use cases, but quickly tends to get in the way of refactoring. When each code change fails a dozen test cases which then have to be manually checked, only to discover that a small variation to the algorithm still produced a perfectly valid expected output but with a slightly different resolution path, it tends to be very counter-productive. Therefore we started to add ways to relax the expectations in order to minimize the false positive test failures: the number of times and the order in which expectations happen, that some arguments sometimes are to be verified and sometimes not, etc. In the end, spelling out the expectations started to look more like a solution and less like a problem.

Do you have any experience on this matter? Did you manage to overcome this issue?

Regards, MAT.
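For concreteness, the kind of per-test 'programming' Mathieu describes would look roughly like this in Google Mock syntax (a sketch; the Microwave interface and mock here are hypothetical, modeled on the microwave discussion, not taken from the Boost example):

    #include <gmock/gmock.h>
    #include <gtest/gtest.h>
    #include <stdexcept>

    using ::testing::Return;
    using ::testing::Throw;
    using ::testing::_;

    // Hypothetical interface standing in for the example's real one.
    struct Microwave
    {
        virtual ~Microwave() {}
        virtual int get_max_power() const = 0;
        virtual void set_power_level(int watts) = 0;
    };

    struct MockMicrowave : Microwave
    {
        MOCK_CONST_METHOD0(get_max_power, int());
        MOCK_METHOD1(set_power_level, void(int));
    };

    TEST(KitchenRobotTest, ProgrammedPerTest)
    {
        MockMicrowave oven;
        // Max power changes between calls: first 1000, then 2000.
        EXPECT_CALL(oven, get_max_power())
            .WillOnce(Return(1000))
            .WillOnce(Return(2000));
        // A malfunctioning oven: the next set_power_level call throws.
        EXPECT_CALL(oven, set_power_level(_))
            .WillOnce(Throw(std::runtime_error("oven malfunction")));
        // ... exercise the robot against `oven` here ...
    }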

Mathieu Champlon <m.champlon <at> free.fr> writes:
Hi Mathieu, This is a big subject. And one riddled with a lot of confusion, IMO.
I'm a bit puzzled by the kitchen_robot example (and not just because it grills chicken without any chicken! :p).
OK. Let me first reply to the comments specific to my example and later on we tackle the problem in general.
MockMicrowave::get_max_power looks hard-coded to return a value of 1000, so to me this looks like a stub rather than a mock object, or am I missing something?
1000 is indeed hardcoded in this particular mock example, but note that it is never tested against, so we are not testing the state. In a sense it is part of the mock's behavior that this particular value is returned. We could have written the mock differently, where this value is initialized in the constructor (or multiple possible values are). In many scenarios such mocks with hardcoded values will properly represent reality and satisfy our needs from an interaction-testing standpoint.
How would you test that the robot calls set_power_level properly with respect to different max power values?
We can actually implement the check in this method and throw an exception if it fails. I believe the framework may not log exceptions properly yet; it is something to be improved.
What if the max power could change at any time and you would like to 'program' the mock object to test the robot (first time return 1000, second time return 2000, etc.)? And what about a malfunctioning oven which would throw exceptions? Would you write a new MockMicrowave implementation for every test case?
1. I can indeed write a separate subclass with varying behavior in some methods. If this mock ends up being used in 10 different test scenarios, that is much better than repeating the mock setup in each test case with all this configuration.

2. We can write the mock class in such a way that it can be configured through some mock-specific interface to tailor it to the specific behavior you like (see the sketch after this list). This is most flexible and will allow you to reuse the same class and at the same time implement arbitrary changes in behavior that no framework will ever be able to provide for you.

3. Finally, you could possibly implement some support in the mock library for specifying some subset of possible behaviors through some template magic and compiler-specific hacks. I am somewhat doubtful one can provide a nice generic interface for this. Moreover, I am not convinced it is worth the effort.

Most importantly, though, none of these approaches implies specifying your expectations in a test case. It is all about specifying mock behavior. So either way it can be used within the bounds of my approach.
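A sketch of option 2 - a mock configured once through its own interface rather than per test case (the Microwave base class and the configurable mock are hypothetical, loosely modeled on the kitchen_robot example):

    // Hypothetical interface standing in for the example's real one.
    struct Microwave
    {
        virtual ~Microwave() {}
        virtual int get_max_power() const = 0;
    };

    // The behavior is set through the mock's own constructor; each
    // scenario just constructs it with whatever value it needs, and no
    // expectations appear in the test case itself.
    struct ConfigurableMicrowave : Microwave
    {
        explicit ConfigurableMicrowave(int max_power) : max_power_(max_power) {}
        virtual int get_max_power() const { return max_power_; }
        int max_power_;
    };

    // Usage:
    //     ConfigurableMicrowave weak_oven(1000);
    //     ConfigurableMicrowave strong_oven(2000);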
Then there seems to be some kind of trace logging involved which, if I understand correctly, can be seen as describing the expectations a posteriori.
Yes. Indeed.
From my understanding, this logs on the first run, then reloads the expectations and uses them as a base to validate new runs.
Yes. Basic record/reply approach.
I see a number of problems with this (besides the serialization requirement on arguments to be checked), the major one being that it does not allow TDD.
Why is this?
Also, another common usage of mock objects in tests is to document how the object under test reacts to the outside world. Moving the expectations outside the test makes this difficult (although not impossible, I
Actually, IMO it is the other way around. Having these expectations in a plain text log file and not in the source code does not decrease a test's value from any perspective, and:

* the actual source code does not interfere with the log, so you can read it more easily
* there is no way an "unaware" person can peek into your test case and understand what exactly the interaction expectations are; looking at the log file is much easier

Look at any example of these test cases and tell me this is not the case.
suppose the test cases and expectations log files could be post-processed to produce sequence diagrams or something similar).
Yes, indeed. And given the fixed format of these, you can generate proper documentation out of them, or vice versa you can have these "pattern" files produced in your requirements phase by people who are not developers at all. It is common to describe interaction expectations in requirements (not so much the specific behavior of your class, which you test using state expectations).
Actually we (my team and company) have attempted this approach in the past. It works nicely on small use cases, but quickly tends to get in the way of refactoring. When each code change fails a dozen test cases which then have to be manually checked only to discover that a small variation to the algorithm still produced a perfectly valid expected output but with a slightly different resolution path, it tends to be very counter-productive.
This only indicates that you may be testing the wrong thing. Interaction-based testing, while useful in some scenarios, makes it very easy to fall into the trap of testing the implementation. This is the nature of the whole process. Either the algorithms above are better off being tested by checking the state of produced values, or it is possible that your tests were valid - but then how can you be sure the algorithm changes are valid? At the very least your test cases notified you about these changes. All you need to do now (at least in my approach) is to go through the new logs and "accept" them as new pattern files. This is also a very common and valid approach to interaction-based testing.
Therefore we started to add ways to relax the expectations in order to minimize the false positive test failures: the number of times and the order in which expectations happen, that some arguments sometimes are to be verified and sometimes not, etc.
The more complicated behavior variations you accept, the less valuable your test becomes. Somehow you do not write your test cases like this: OK, this function sometimes returns 5, sometimes 6 and in some rare cases 10. You can do this, but it is a bad test. The same applies to interaction-based testing. You are much better off with a specific interaction being expected given the same input.
In the end spelling out the expectations started to look more like a solution and less like a problem.
IMO spelling out non-trivial expectations of interactions indicates tool misuse. Some behavior configuration of the mocks is fine. Non-deterministic behavior expectations are not.
Do you have any experience on this matter? Did you manage to overcome this issue?
Well, I can go as far as admitting that there is potential for improvement, where one could implement some relaxed rules for matching pattern files, but misuse of these will make your test case useless. As I said above, I still believe the record/replay approach is preferable to any other, and I do not see big issues with it as is. Regards, Gennadiy

On 29/09/2012 3:31 PM, Gennadiy Rozental wrote:
Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful. Frankly, I can't see what the fuss is all about. The approach taken by Boost.Test is marginally better in my opinion. Mocks are deterministic, and a test case should not need to spell out expectations. Writing mocks is just as easy. You can see an example here:
.../libs/test/example/logged_exp_example.cpp
There is a potential for some improvement, but it is already better than anything else I know (IMO obviously).
http://svn.boost.org/svn/boost/trunk/libs/test/example/logged_exp_example.cp... I looked at the example and I couldn't really understand it. However, here are the reasons I prefer Google Mock to writing my own mocks over and over:

1. The psychological barrier of creating yet-another-class. Maybe this is PTSD from that one time I had to use Java or something.

2. The main things in my mock tests that change are the values, not the types. Google Mock makes it easy for me to write quick tests for when values change:

    MockThingy mock;
    EXPECT_CALL(mock, something(100))
        .WillOnce(Return(32));

The above will take a *generic* mocked class and give it behaviour. This particular test expects a call to the "something" function with a parameter of 100 and will return 32 once. The important thing is that I can re-use the mocked class for other tests as well. I can't immediately see how this is possible with the example you have shown.

As for Boost.Test vs Google Test, I don't really prefer one over the other. I'm only using Google Test because it was easy in this particular case. I would be happy if you could explain your example, though, because I don't understand what it's doing. Sohail
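For readers unfamiliar with Google Mock, the MockThingy class the snippet above presumes would look something like this (a sketch; the Thingy interface is hypothetical):

    #include <gmock/gmock.h>

    // Hypothetical interface being mocked.
    struct Thingy
    {
        virtual ~Thingy() {}
        virtual int something(int value) = 0;
    };

    // Written once; each test then programs it with EXPECT_CALL.
    struct MockThingy : Thingy
    {
        MOCK_METHOD1(something, int(int));  // generates the override
    };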

Sohail Somani <sohail <at> taggedtype.net> writes:
I looked at the example and I couldn't really understand it. However, here are the reasons I prefer Google Mock to writing my own mocks over and over:
1. The psychological barrier of creating yet-another-class. Maybe this is PTSD from that one time I had to use Java or something.
Don't you need to specify the mock class? We can probably improve Boost.Test's macros to require a similar amount of typing to what gmock requires, but the difference is not that big as it is, while IMO what Boost.Test has is more flexible.
2. The main things in my mock tests that change are the values, not the types.
I do not understand this.
Google Mock makes it easy for me to write quick tests for when values change:
MockThingy mock;
EXPECT_CALL(mock, something(100))
    .WillOnce(Return(32));
The above will take a *generic* mocked class and give it behaviour. This particular test expects a call to the "something" function with a parameter of 100 and will return 32 once.
Are you saying you do not need to write MockThingy ahead of time and miraculously you can write mock.something(100) somewhere below? Somehow I do not believe it. Moreover, what if you need to test 10 different but similar scenarios in 10 different test cases? Now each one of them will have to spell out the above EXPECT_CALL... (probably because it is a common part). I'd rather spell it out once in the MockThingy definition, or even better not spell it out at all. Instead I just write mock.something(100); and that's it. No expectation needs to be spelled out in the test case with the Boost.Test approach.
The important thing is that I can re-use the mocked class for other
What to reuse? So you did write it once, right?
tests as well. I can't immediately see how this is possible with the example you have shown.
What's stopping you from reusing the mock classes from my example? Gennadiy
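By contrast, a hand-written mock in the style Gennadiy advocates bakes the behavior into the class once, so test cases carry no expectation setup (a sketch reusing the hypothetical Thingy interface from the earlier sketch):

    // The behavior lives in the mock definition; tests just exercise it.
    struct CannedThingy : Thingy
    {
        virtual int something(int /*value*/) { return 32; }  // canned reply
    };

    // In a test case there is nothing to program:
    //     CannedThingy mock;
    //     code_under_test(mock);  // no EXPECT_CALL needed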

on Sat Sep 29 2012, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Sohail Somani <sohail <at> taggedtype.net> writes:
Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful.
Frankly, I can't see what the fuss is all about.
The fuss is about teachability and learnability. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

Dave Abrahams <dave <at> boostpro.com> writes: on Sat Sep 29 2012, Gennadiy Rozental wrote:
Google Mock itself is unbelievably useful. Frankly, I can't see what the fuss is all about.
The fuss is about teachability and learnability.
I understand you have a point to make, but I was talking about gmock. Did you? (talk about it) Gennadiy

on Fri Oct 05 2012, Gennadiy Rozenal <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes: on Sat Sep 29 2012, Gennadiy Rozental wrote:
Google Mock itself is unbelievably useful. Frankly, I can't see what the fuss is all about.
The fuss is about teachability and learnability.
I understand you have a point to make, but I was talking about gmock. Did you? (talk about it)
Ah, no... except to note that Sohail was able to learn to use it in no time flat. Thanks for clarifying. -- Dave Abrahams BoostPro Computing Software Development Training http://www.boostpro.com Clang/LLVM/EDG Compilers C++ Boost

On 29/09/2012 18:33, Sohail Somani wrote:
(...) Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful.
Hi Sohail, There is this mock object library I have been working on for a few years which can be used with Boost.Test (but which I believe could be adapted to any test framework quite easily): http://turtle.sf.net I am slowly converting it to Boost (I'm working on the documentation these days) and ultimately plan on submitting it. For now I could really use feedback, especially from users of Google Mock! Regards, MAT.

On 29/09/2012 4:54 PM, Mathieu Champlon wrote:
On 29/09/2012 18:33, Sohail Somani wrote:
(...) Anyway, I haven't looked back yet and (sorry) I'm not sure I will. Google Mock itself is unbelievably useful.
Hi Sohail,
There is this mock object library I have been working on for a few years which can be used with Boost.Test (but which I believe could be adapted to any test framework quite easily): http://turtle.sf.net I am slowly converting it to Boost (I'm working on the documentation these days) and ultimately plan on submitting it.
For now I could really use feedback, especially from users of Google Mock!
Regards, MAT.
Hey, when I get some free time I will try and use it. It looks pretty good from the docs so far. Thanks for letting me know about it. Sohail

On 2012-09-17 16:40, Dave Abrahams wrote:
Hi As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release - Its documentation, such as it is, is removed from the release after that - Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism - The code is removed from Boost thereafter
First of all: We are happy users of Boost.Test! I haven't counted the number of test cases we handle with Boost.Test in my team, but unless

* there is a replacement
* there is tool support for migrating

we would be stuck with the latest version of Boost that offers Boost.Test. But again: We are happy users of Boost.Test! Of course, there is always room for improvement, especially on the documentation side, but that is true for many parts of Boost. And you would not want to deprecate all Boost libraries that feature a "glaring typo" in a tutorial, would you?

@Gennadiy: You wrote in one of your many replies that you would want to share the load. If you need support in testing/improving/documenting certain features of Boost.Test, I suggest you send out a call for help to this list. I am sure there are lots of people willing to help. Especially after this discussion :-)

Regards, Roland

On 28/09/2012 07:30, Roland Bock wrote:
On 2012-09-17 16:40, Dave Abrahams wrote:
Hi As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release - Its documentation, such as it is, is removed from the release after that - Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism - The code is removed from Boost thereafter
First of all: We are happy users of Boost.Test!
I haven't counted the number of test cases we handle with Boost.Test in my team, but unless
* there is a replacement
* there is tool support for migrating
we would be stuck with the latest version of Boost that offers Boost.Test.
+1. We are currently using Boost.Test for a large project and don't plan on migrating to anything else. I don't see any major problems with Boost.Test. Furthermore, for the simple test cases that we have, its documentation is sufficient. There are few test frameworks with good community support and widespread use; however, since many of those are too intrusive, there really aren't that many alternatives to Boost.Test for us.

Of course Boost.Test could be improved, e.g. I would not object to being able to write a unit test, with a proper execution monitor, which does not need to link to anything. Perhaps it's just a question of documenting better what Boost.Test already does (see the sketch after this message).

If an alternative were provided in some future version of Boost, with a nearly-compatible interface (i.e. one which would work with our existing test fixtures and checks), we could consider switching. Without such an alternative, we will be stuck on the last version of Boost where Boost.Test is available. Simple as that.

Also, I'd like to raise another point: I rarely follow Boost discussion, so I am not aware how many times it's been suggested that some library be dropped from Boost, but unless such a library is "for internal use only", it would clearly be a breaking change which should not be proposed lightly. Boost.Test is not "for internal use only", and removing it would break Boost for many of its users. If Boost were a modular library with users able to pick and choose the libraries they want to use, that would not be much of a problem, yet despite all of the discussions around the topic, it still isn't. Perhaps if a new, incremental version of Boost.Test were written and allowed to stabilize its own interfaces over the next few releases (while maintaining interface compatibility for all documented use cases with Boost.Test), we could switch to it. Perhaps only by means of changing a few lines where we #include and/or link, or even better by just picking a new version from the available Boost modules.

Lastly, my impression from reading this whole thread is that Boost.Test development has been severely constrained, thus very little improvement has taken place even when it was obviously needed. Documentation has become a casualty of the requirement to keep the code stable, since there is very little motivation for developers to commit changes meant for documentation only. Other changes were rarely released, "to avoid rocking the boat" in Gennadiy's own words. I don't believe Boost.Test is the only library under such constraints. Clearly some work is needed to enable a better development model for such "infrastructure" libraries, because even if a new test library found its place in Boost, it would be subject to the same constraints, and we would be back at square one in a few years' time - when it's widely adopted, in need of maintenance, but severely constrained again! B.
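On the "does not need to link to anything" point: Boost.Test does ship a header-only variant via its included/ headers, which looks like this minimal sketch:

    // Single-header usage: no separate library to build or link against.
    #define BOOST_TEST_MODULE my_module
    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_CASE(addition_works)
    {
        BOOST_CHECK_EQUAL(2 + 2, 4);
    }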

Dave Abrahams <dave@boostpro.com> writes:
Hi All,
I was just going through Boost.Test to try to figure out how to teach it, and while it looks to have substantial value, it is also in quite a mess. It contains loads of features that are exercised in the examples/ directory but neither included in any of the tests nor documented. There are facilities for command-line argument parsing! There are "decorators" that turn on/off features for test cases. There is support for mock objects! These are cool and sometimes necessary features, but who knew?
[snip]
As a straw man, I'll make this suggestion:
- Boost.Test is officially deprecated in the next release
Woah. Stop the train. Are we seriously suggesting deprecating Boost.Test because, wait ... it has too many features? o_O
- Its documentation, such as it is, is removed from the release after that
Its documentation isn't perfect. But it's better than that of half the libraries in Boost that we use and love.
- Meanwhile, other tests in Boost that use this library are rewritten to use a different mechanism
Like what? I've tried them all. Boost.Test is unlike any other. It completely changed how I code, for the better. Every time I find myself wishing it had a feature, I root around a little and find that it already did! It's a true gem. Could it be better? Sure. Which library couldn't be?
- The code is removed from Boost thereafter
-1. This terrifies me. We have thousands of test cases written for Boost.Test. Roughly a third of our code for any given project is unit test code. And, unlike just some other library dependency, unit test code is pervasive. In other words, of the third of our code that is unit tests, almost every single line depends on Boost.Test. We could kiss goodbye to our projects (or to ever upgrading Boost) if Boost.Test were removed. Alex
participants (34)
- Alexander Lamaison
- Alexander Stoyan
- Andrey Semashev
- Boris Schaeling
- Bronek Kozicki
- Daniel James
- Dave Abrahams
- Dave Steffen
- Gennadiy Rozenal
- Gennadiy Rozental
- Ian Emmons
- Jamie Allsop
- Jeff Flinn
- John Maddock
- Jürgen Hunold
- legalize+jeeves@mail.xmission.com
- Lorenzo Caminiti
- Loïc Joly
- Mathias Gaunard
- Mathieu Champlon
- Nathan Ridge
- niXman
- Paul A. Bristow
- Peter
- Peter Sommerlad
- Robert Ramey
- Roland Bock
- Sergey Popov
- Sohail Somani
- Stephan Menzel
- Steven Watanabe
- Thorsten Ottosen
- Timo H. Geusch
- Vicente J. Botet Escriba