Re: [boost] [Release 1.34] Supported Compilers - another view

I've come to believe that the huge amount of discussion regarding how compilers should be classified is another symptom of the (excessively?) close coupling of Boost libraries. The first symptom is the disproportionate amount of effort required to make a new "release", which has been discussed on another thread. With a couple of caveats noted below, I see no reason why all libraries have to support the exact same set of compilers at the same level. As an example, consider a couple of libraries.

Spirit and Lambda - these absolutely depend on high compiler conformance. Anyone seriously interested in using tools like this and the techniques they embody will not be using msvc 6.0. Demand and utility of msvc 6.0 for these libraries approaches nil.

smart_ptr, iostreams and others - these have wide applicability and probably demand from those using MSVC. I'm sure that the developer of smart_ptr suffered with MSVC 6.0 - but it's over now, and I doubt that just keeping it up is a big problem. I suspect that msvc 6.0 compatibility for iostreams isn't too bad.

All in all, I think it's fine that each author decide what level of support to give each compiler. The current test system permits a compiler to be marked as supported/unsupported on a library-by-library basis. This is just fine. Of course library authors and the review process will have to resolve what level of support is appropriate for each combination of library and compiler. I just don't see any way to agree on a blanket policy of "compiler X is supported/deprecated" etc. So now it comes down to individual libraries.

The only time this comes up as a Boost-wide issue is for libraries which are used by other Boost components. The current case is that of Boost Test, so let's consider this library in particular. Boost Test is used by ALL libraries in Boost to test themselves. For it to be effective in this role, it must be usable with any compiler that is supported by any library. This is not a suggestion or normative statement; it's just a recognition of the fact that it can't do the job it has been doing if it doesn't support older compilers. So Boost Test should be structured so that it doesn't break old tests. It can add new features that are supported only on new compilers, but it should be able to continue to provide the support it has in the past. I realise that this is an extra burden that other libraries don't have to shoulder, and maybe it's not "fair", but it is part of the requirement for Boost Test to continue in its fundamental role in Boost.

A couple of random observations re Boost Test. It has been a fundamental contributor to a big change in the way I do my programming. It deserves full credit for that. I'd read the "continuous testing" mantra and believed it in theory - but without a fully implemented test system with good documents to teach me how to do it and make it easy to do, I wouldn't have been converted. I suspect that my experience is common in this regard. Boost Test is special: it has to be almost universal in order to fulfill its extremely important and special role in Boost and in C++ development in general. I appreciate that the author of Boost Test thinks it's a pain in the rear to address the "old compilers", but he is too modest in his appreciation of his own work, and I'm sure that if he knew how important it really has been, he would just say "$%%&%&*", OK, and accept his lot and keep it widely compatible. I think that Boost Test is plenty sophisticated and complete feature-wise.
Maybe too much - which is what might be creating the compatibility issues. The main obstacle to "selling and promoting" Boost Test is that the documents need work to make Boost Test more obvious to a new user (and to me as well). The perspective should be to "increase market penetration" of Boost Test rather than "increase functionality" of Boost Test. In my view, lack of ease of use is currently a larger obstacle to "increasing market penetration" than limited feature set, and this is where effort should be invested.

And another thing. It's damn annoying to find that all my tests suddenly fail on msvc because of a change in the test system. Oh, I'm sure it was announced somewhere and I don't care - it's annoying nonetheless. Now what am I to do? Stop supporting msvc? Shouldn't that be my decision? Rewrite my tests to not use Boost Test? I don't want to do that!

Finally, I managed to get the serialization library's tests to work with Comeau by commenting out some of Boost Test in unit_test_parameters.ipp:

    // const_string rs_str = retrieve_framework_parameter( RANDOM_SEED, argc, argv );
    // s_random_seed = rs_str.is_empty() ? 0 : lexical_cast<unsigned int>( rs_str );

This apparently instantiates some basic_stream template that the serialization library also instantiates. The Comeau 4.3.3 prelinker complains about this - which I don't think it should - and the build of the serialization library fails. Commenting this out permits the serialization library to be tested with Comeau.

Robert Ramey

Spirit and Lambda - these absolutely depend on high compiler conformance. Anyone seriously interested in using tools like this and the techniques they embody will not be using msvc 6.0. Demand and utility of msvc 6.0 for these libraries approaches nil.
Correct.
smart_ptr, iostreams and others - these have wide applicability and probably demand from those using MSVC. I'm sure that the developer of smart_ptr suffered with MSVC 6.0 - but it's over now, and I doubt that just keeping it up is a big problem. I suspect that msvc 6.0 compatibility for iostreams isn't too bad.
Right, and smart_ptr is used by so many other Boost libraries that if msvc support were withdrawn, it would create a big problem for those library authors who wish to continue supporting msvc.
Boost Test is used by ALL libraries in Boost to test themselves. For it to be effective in this role, it must be usable with any compiler that is supported by any library. This is not a suggestion or normative statement; it's just a recognition of the fact that it can't do the job it has been doing if it doesn't support older compilers. So Boost Test should be structured so that it doesn't break old tests. It can add new features that are supported only on new compilers, but it should be able to continue to provide the support it has in the past. I realise that this is an extra burden that other libraries don't have to shoulder, and maybe it's not "fair", but it is part of the requirement for Boost Test to continue in its fundamental role in Boost.
100% violent agreement.
And another thing. It's damn annoying to find that all my tests suddenly fail on msvc because of a change in the test system. Oh, I'm sure it was announced somewhere and I don't care - it's annoying nonetheless. Now what am I to do? Stop supporting msvc? Shouldn't that be my decision? Rewrite my tests to not use Boost Test? I don't want to do that!
No reason why you should, IMO. If the "new" Boost.Test can't be made to work with msvc (bound to happen at some point), there's no reason why the last good version can't be placed in a sub-directory and automatically included whenever a deprecated compiler comes along and dares to try and include it. Just my 2c worth.... John.
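In outline, the fallback John describes might look like this - a rough sketch only; the legacy header path and the exact version cut-off are hypothetical, not actual Boost.Test layout:

    // Hypothetical dispatch header: deprecated compilers get a frozen
    // snapshot of the last Boost.Test version known to work for them.
    #if defined(BOOST_MSVC) && BOOST_MSVC <= 1200   // msvc 6.0 and older
    #  include <boost/test/legacy/unit_test.hpp>    // hypothetical frozen copy
    #else
    #  include <boost/test/unit_test.hpp>           // current implementation
    #endif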

Right, and smart_ptr is used by so many other Boost libraries that if msvc support were withdrawn, it would create a big problem for those library authors who wish to continue supporting msvc.
Do we really have anyone interested in supporting this compiler? If so, we need to re-establish the regression test runs, IMO. Gennadiy

"Robert Ramey" <ramey@rrsd.com> writes:
It's damn annoying to find that all my tests suddenly fail on msvc because of a change in the test system. Oh, I'm sure it was announced somewhere and I don't care - it's annoying nonetheless. Now what am I to do? Stop supporting msvc? Shouldn't that be my decision? Rewrite my tests to not use Boost Test? I don't want to do that!
It might be a good idea anyway. In my experience, Boost.Test is overpowered for the purposes of Boost regression testing, and on Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:874q3kz0am.fsf@boost-consulting.com...
"Robert Ramey" <ramey@rrsd.com> writes:
It's damn annoying to find that all my tests suddenly fail on msvc because of a change in the test system. Oh, I'm sure it was announced somewhere and I don't care - it's annoying nonetheless. Now what am I to do? Stop supporting msvc? Shouldn't that be my decision? Rewrite my tests to not use Boost Test? I don't want to do that!
It might be a good idea anyway. In my experience, Boost.Test is overpowered for the purposes of Boost regression testing, and on Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger.
And as we discussed, this is just a default that could easily be changed for manual testing (for example by defining an environment variable, if you are tired of passing command-line arguments every time). Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:874q3kz0am.fsf@boost-consulting.com...
"Robert Ramey" <ramey@rrsd.com> writes:
It's damn annoying to find that all my tests suddenly fail on msvc because of a change in the test system. Oh, I'm sure it was announced somewhere and I don't care - it's annoying nonetheless. Now what am I to do? Stop supporting msvc? Shouldn't that be my decision? Rewrite my tests to not use Boost Test? I don't want to do that!
It might be a good idea anyway. In my experience, Boost.Test is overpowered for the purposes of Boost regression testing, and on Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger.
And as we discussed, this is just a default that could easily be changed for manual testing (for example by defining an environment variable, if you are tired of passing command-line arguments every time).
It's just another thing to remember and manage. And then I have to manage linking with the right library, and read the Boost.Test documentation to figure out which calls and macros to use, etc. Oh, and I also have to wait for Boost.Test to build before I can run my own tests, and if Boost.Test breaks I am stuck. So there are lots of little pitfalls for me.

I'm sure Boost.Test is great for some purposes, but why should I use it when BOOST_ASSERT does everything I need (**)? It seems like a lot of little hassles for no particular gain, and I think that's true for 99% of all Boost regression tests. I'd actually love to be convinced otherwise, but I've tried to use it, and it hasn't ever been my experience that it gave me something I couldn't get from lighter-weight facilities.

It's really important that the barrier to entry for testing be very low; you want to make sure there are no disincentives. For me that means reaching for BOOST_ASSERT and the facilities of <boost/mpl/assert.hpp> until it is demonstrated that I need more.

(**) I actually need the Windows JIT debugging trick that does "throw;" inside the structured exception handler, but Boost.Test doesn't give me that either, IIUC.

-- Dave Abrahams Boost Consulting www.boost-consulting.com
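For concreteness, the lighter-weight combination named here looks roughly like this - a sketch only, pairing runtime BOOST_ASSERT with the compile-time checks of <boost/mpl/assert.hpp>:

    // Runtime checks via BOOST_ASSERT (which defaults to assert()),
    // compile-time checks via BOOST_MPL_ASSERT - nothing to link or configure.
    #include <boost/assert.hpp>
    #include <boost/mpl/assert.hpp>
    #include <boost/type_traits/is_same.hpp>

    BOOST_MPL_ASSERT(( boost::is_same<int, int> ));  // fails the build if untrue

    int main()
    {
        BOOST_ASSERT( 2 + 2 == 4 );  // aborts the test program on failure
        return 0;
    }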

and on Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger.
And as we discussed, this is just a default that could easily be changed for manual testing (for example by defining an environment variable, if you are tired of passing command-line arguments every time).
It's just another thing to remember and manage.
No need to remember or manage anything. Just set up the environment variable once.
And then I have to manage linking with the right library
Again, you either set it up once in your project file or, even better, rely on autolinking.
and read the Boost.Test documentation to figure out which calls and macros to use, etc
I am sorry: you do need to read the documentation to use a library. Though I believe you would learn the 2-3 most frequently used tools quite quickly.
Oh, and I also have to wait for Boost.Test to build
Why? You could build the library once and reuse it, or you could use the inlined components.
before I can run my own tests,
Even if you are using the inlined version, you still need to wait for it to be parsed and compiled. And this is true for Boost.Test as well as for any other tool.
and if Boost.Test breaks I am stuck.
And if Boost.<any other component you depend on> breaks, you are not? Actually, Boost.Test has been quite stable for a while now.
So there are lots of little pitfalls for me.
It feels like some negative predisposition is speaking here.
I'm sure Boost.Test is great for some purposes, but why should I use it when BOOST_ASSERT does everything I need (**)?
It just means that you have very limited testing needs, from both construction and organization standpoints. And even in such trivial cases Boost.Test would fare better: BOOST_ASSERT stops at the first failure (doesn't it?) - BOOST_CHECK doesn't; if the expression throws an exception, you need to start a debugger to figure out what is going on - using Boost.Test, in the majority of cases it's clear from the test output. And I am not even talking about the other, much more convenient tools available.
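To make the contrast concrete, a minimal sketch (using the header-only variant so nothing needs to be linked):

    // BOOST_CHECK records each failure and keeps going, so a single run
    // reports every broken assertion instead of stopping at the first.
    #define BOOST_TEST_MAIN
    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( arithmetic )
    {
        BOOST_CHECK( 1 + 1 == 2 );  // passes
        BOOST_CHECK( 2 + 2 == 5 );  // logged as a failure; execution continues
        BOOST_CHECK( 3 + 3 == 6 );  // still executed and checked
    }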
It seems like a lot of little hassles for no particular gain,
I think it's subjective at best.
and I think that's true for 99% of all Boost regression tests.
And I think you are seriously mistaken.
I'd actually love to be convinced otherwise, but I've tried to use it, and it hasn't ever been my experience that it gave me something I couldn't get from lighter-weight facilities.
Boost.Test has been enhanced significantly in the last two releases from a usability standpoint. Would you care to take another look?
It's really important that the barrier to entry for testing be very low; you want to make sure there are no disincentives.
With the latest Boost.Test, all that you need to start is:

    #define BOOST_TEST_MAIN
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( t )
    {
        // here you go:
    }

Is this a high barrier? Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
and on Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger.
And as we discussed, this is just a default that could easily be changed for manual testing (for example by defining an environment variable, if you are tired of passing command-line arguments every time).
It's just another thing to remember and manage.
No need to remember or manage anything. Just set up the environment variable once.
Unless you don't like having your environment cluttered with settings whose purpose you can't recall. I'm *still* trying to figure out how to set up environment variables consistently across all the shells on my *nix systems. Call me incompetent if you like, but getting that worked out requires some investment.
And then I have to manage linking with the right library
Again, you either set it up once in your project file
Yes, a small thing to manage, but a thing nonetheless.
or, even better, rely on autolinking
Autolinking is nonportable, and you have to set up _some_ kind of path so that the libraries can be found by the linker.
and read the Boost.Test documentation to figure out which calls and macros to use, etc
I am sorry: you do need to read the documentation to use a library. Though I believe you would learn the 2-3 most frequently used tools quite quickly.
Yes, a small thing to manage, but a thing nonetheless.
Oh, and I also have to wait for Boost.Test to build
Why? You could build the library once and reuse it, or you could use the inlined components.
It has to build once for each toolset, and then again each time the test library changes. Yes, a small inconvenience, but an inconvenience nonetheless.
before I can run my own tests,
Even if you are using the inlined version, you still need to wait for it to be parsed and compiled. And this is true for Boost.Test as well as for any other tool.
Yep. BOOST_ASSERT is small and easily included.
and if Boost.Test breaks I am stuck.
And if Boost.<any other component you depend on> breaks, you are not?
I can usually fix those, or workaround the problem. With Boost.Test my workaround for any problem is to fall back on BOOST_ASSERT and wonder why I bother.
Actually, Boost.Test has been quite stable for a while now.
So there are lots of little pitfalls for me.
It feels like some negative predisposition is speaking here.
It's not a predisposition; it's borne of experience. Every time I try to use the library, thinking it's probably the right thing to do, and wanting to like it, I find myself wondering what I've gained for my investment. Until you can hear that and respond accordingly -- instead of dismissing it as the result of predisposition -- Boost.Test is going to continue to be a losing proposition for me.
I'm sure Boost.Test is great for some purposes, but why should I use it when BOOST_ASSERT does everything I need (**)?
It just means that you have very limited testing needs, from both construction and organization standpoints.
Maybe so; I never claimed otherwise.
And even in such trivial cases Boost.Test would fare better: BOOST_ASSERT stops at the first failure (doesn't it?) -
Yeah; that's fine for me. Either the test program fails or it passes.
BOOST_CHECK doesn't; if the expression throws an exception, you need to start a debugger to figure out what is going on - using Boost.Test, in the majority of cases it's clear from the test output.
It's hard to imagine what test output could allow me to diagnose the cause of an exception. Normally, the cause is contained in the context (e.g. backtrace, etc.) and that information is lost during exception unwinding.
And I am not even talking about the other, much more convenient tools available.
It seems like a lot of little hassles for no particular gain,
I think it's subjective at best.
Of course it is subjective.
and I think that's true for 99% of all Boost regression tests.
And I think you are seriously mistaken.
That may be so. Maybe you should point me at some Boost regression tests that benefit heavily from Boost.Test so I can get a feeling for how it is used effectively.
I'd actually love to be convinced otherwise, but I've tried to use it, and it hasn't ever been my experience that it gave me something I couldn't get from lighter-weight facilities.
Boost.Test has been enhanced significantly in the last two releases from a usability standpoint. Would you care to take another look?
I have used it in the past 6 months. It didn't seem to buy me much. Admittedly, my testing needs were not complicated, but that seems to be the case much of the time.
It's really important that the barrier to entry for testing be very low; you want to make sure there are no disincentives.
With the latest Boost.Test, all that you need to start is:
    #define BOOST_TEST_MAIN
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( t )
    {
        // here you go:
    }
Is this a high barrier?
It depends. Do I have to link with another library? If so, then add the lines of the Jamfile (and Jamfile.v2) to what I need to start with. What about allowing JIT debugging? Will this trap all my failures or can I get it to launch a debugger? -- Dave Abrahams Boost Consulting www.boost-consulting.com

Autolinking is nonportable, and you have to set up _some_ kind of path so that the libraries can be found by the linker.
Not if you are using bjam.
It has to build once for each toolset, and then again each time the test library changes. Yes, a small inconvenience, but an inconvenience nonetheless.
1. If you are doing a lot of testing, it's a tiny amount of work in comparison with all the tests you are building and running.
2. You could always choose to use the inlined version. On a powerful box the compilation time difference is almost negligible.
And if Boost.<any other component you depend on> breaks, you are not?
I can usually fix those, or workaround the problem. With Boost.Test my workaround for any problem is to fall back on BOOST_ASSERT and wonder why I bother.
So essentially you don't use any tools you don't have direct control over. And again, see the next statement.
Actually, Boost.Test has been quite stable for a while now.
It's not a predisposition; it's borne of experience. Every time I try to use the library, thinking it's probably the right thing to do, and wanting to like it, I find myself wondering what I've gained for my investment. Until you can hear that and respond accordingly -- instead of dismissing it as the result of predisposition -- Boost.Test is going to continue to be a losing proposition for me.
I am not quite sure what you want me to hear. How should I enhance the library for you to find it worth your investment?
BOOST_CHECK doesn't; if the expression throws an exception, you need to start a debugger to figure out what is going on - using Boost.Test, in the majority of cases it's clear from the test output.
It's hard to imagine what test output could allow me to diagnose the cause of an exception. Normally, the cause is contained in the context (e.g. backtrace, etc.) and that information is lost during exception unwinding.
It depends on how you organize your program. My exception classes frequently report the failure location along with the error cause. So instead of stepping through the whole stack inside the debugger, I jump directly to the source code.
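The style being described, as a sketch - the class and macro names here are illustrative, not taken from any particular library:

    // An exception that carries its throw site, so a test log can point
    // straight at the failing code without a debugger session.
    #include <sstream>
    #include <stdexcept>
    #include <string>

    struct located_error : std::runtime_error
    {
        located_error( std::string const& cause, char const* file, int line )
          : std::runtime_error( format( cause, file, line ) ) {}

    private:
        static std::string format( std::string const& cause, char const* file, int line )
        {
            std::ostringstream os;
            os << file << '(' << line << "): " << cause;  // e.g. "foo.cpp(42): bad input"
            return os.str();
        }
    };

    #define THROW_LOCATED( msg ) throw located_error( (msg), __FILE__, __LINE__ )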
And I think you are seriously mistaken.
That may be so. Maybe you should point me at some Boost regression tests that benefit heavily from Boost.Test so I can get a feeling for how it is used effectively.
Not a particular test. But among the current test modules under /libs, about 200 modules use test tools beyond simple BOOST_CHECK/BOOST_ASSERT.
    #define BOOST_TEST_MAIN
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( t )
    {
        // here you go:
    }

Is this a high barrier?
It depends. Do I have to link with another library? If so, then add the lines of the Jamfile (and Jamfile.v2) to what I need to start with. What about allowing JIT debugging? Will this trap all my failures or can I get it to launch a debugger?
You could use:

    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( t )
    {
        // here you go:
    }

No need for linking. Catching of system errors can be disabled either in the Jamfile or in the environment. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
Actually, Boost.Test has been quite stable for a while now.
It's not a predisposition; it's borne of experience. Every time I try to use the library, thinking it's probably the right thing to do, and wanting to like it, I find myself wondering what I've gained for my investment. Until you can hear that and respond accordingly -- instead of dismissing it as the result of predisposition -- Boost.Test is going to continue to be a losing proposition for me.
I am not quite sure what you want me to hear. How should I enhance the library for you to find it worth your investment?
You could make the minimal facility more usable.
BOOST_CHECK doesn't; if the expression throws an exception, you need to start a debugger to figure out what is going on - using Boost.Test, in the majority of cases it's clear from the test output.
It's hard to imagine what test output could allow me to diagnose the cause of an exception. Normally, the cause is contained in the context (e.g. backtrace, etc.) and that information is lost during exception unwinding.
It depends on how you organize your program. My exception classes frequently report the failure location along with the error cause. So instead of stepping through the whole stack inside the debugger, I jump directly to the source code.
The failure location is often not very interesting, and the more encapsulated and highly factored my program gets, the more that's true.
And I think you are seriously mistaken.
That may be so. Maybe you should point me at some Boost regression tests that benefit heavily from Boost.Test so I can get a feeling for how it is used effectively.
Not a particular test. But among the current test modules under /libs, about 200 modules use test tools beyond simple BOOST_CHECK/BOOST_ASSERT.
Yes, but it's the ones that just use BOOST_CHECK that I claim are not gaining a whole lot from the library.
    #define BOOST_TEST_MAIN
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( t )
    {
        // here you go:
    }

Is this a high barrier?
It depends. Do I have to link with another library? If so, then add the lines of the Jamfile (and Jamfile.v2) to what I need to start with. What about allowing JIT debugging? Will this trap all my failures or can I get it to launch a debugger?
You could use
    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( t )
    {
        // here you go:
    }

No need for linking. Catching of system errors can be disabled either in the Jamfile or in the environment.
Oh? That's an improvement in convenience, to be sure. Maybe I'll try again. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
In my experience, Boost.Test is overpowered for the purposes of Boost regression testing,
Now that is an interesting observation that I would tend to agree with. It really points to the design of Boost Test itself. I understand the problem - library development gets driven by feature comparisons with other libraries, wish lists, "cool" features, and I think more mundane aspects like a good "user directed" manual, conceptual transparency, idiot-proofness etc. seem to lose importance.

Actually, maybe my concerns can be addressed by going back into Boost Test and using the "minimal" option. If it's not possible now, it should be easy for Boost Test to be jiggered around so that functionality comes in several levels - for example:

a) minimal - see below
b) standard - sufficient facilities for simple regression testing - works on all conceivable compilers - LOL
c) better - more features - compiler support at the discretion of the Boost Test library author. Would require a table of which features are supported by which compilers. This would be free, as the table is already generated by the Boost regression testing.
d) deluxe - every bell and whistle ever suggested that the library author wants to implement.

As I said before, I think that Boost Test could be much, much more important than it is in drawing new users to Boost. For this to happen I would like to see: a) an easier to read and use manual b) a minimal level implemented as header-only, so that users in need of immediate gratification could get it. This would address the user who turns to Boost because he's under huge pressure to find a bug. Solving someone's problem in less than two hours is going to turn anyone into a Boost booster.

I realize that the above is my own personal wish list, so we have a lot of conceptual recursion here.
Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger.
Hmmm - I don't think we want to invoke the debugger in the regression tests. In practice, I fire up my debugger on any failed test and set my VC debugger to trap on any exception - this works great for me. Robert Ramey

It really points to the design of Boost Test itself. I understand the problem - library development gets driven by feature comparisons with other libraries, wish lists, "cool" features, and I think more mundane aspects like a good "user directed" manual
A group of volunteers and I are at the moment working on the Boost.Test documentation update. If you have any particular requests and/or are willing to participate - please let us hear it.
conceptual transparency, idiot-proofness etc seem to lose importance.
You need to be clearer for me to understand what exactly you are "missing".
Actually, maybe my concerns can be addressed by going back into Boost Test and using the "minimal" option. If it's not possible now, it should be easy for Boost Test to be jiggered around so that functionality comes in several levels - for example:
a) minimal - see below
b) standard - sufficient facilities for simple regression testing - works on all conceivable compilers - LOL
c) better - more features - compiler support at discretion of the boost test library author. Would require a table of which features are supported by which compilers. This would be free as the table is already generated by the boost regression testing.
d) deluxe - every bell and whistle ever suggested that the library author wants to implement.
Essentially, the state of affairs is like this already. There is the Minimal Testing Facility component (header only). There is the Unit Test Framework, which works on most compilers. And there are new features that are ifdefed out for old compilers. But my position is that we need to consciously move away from old compilers to make the code base healthier. The requirement for per-feature compiler support tables is also scary. I would really prefer not to do this.
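For reference, using the Minimal Testing Facility looks roughly like this (a sketch of the header-only component being referred to):

    // boost/test/minimal.hpp supplies main(); the user provides test_main().
    #include <boost/test/minimal.hpp>

    int test_main( int, char*[] )
    {
        BOOST_CHECK( 2 + 2 == 4 );    // a failure is counted; execution continues
        BOOST_REQUIRE( 1 + 1 == 2 );  // a failure aborts the test program
        return 0;
    }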
As I said before, I think that Boost Test could be much, much more important than it is in drawing new users to Boost. For this to happen I would like to see: a) an easier to read and use manual b) a minimal level implemented as header-only, so that users in need of immediate gratification could get it. This would address the user who turns to Boost because he's under huge pressure to find a bug. Solving someone's problem in less than two hours is going to turn anyone into a Boost booster.
This looks like two contradictory goals: you may not be able to solve someone's problem in 2 hours using a facility with minimal functionality. Gennadiy

Gennadiy Rozental wrote:
It really points to the design of Boost Test itself. I understand the problem - library development gets driven by feature comparisons with other libraries, wish lists, "cool" features, and I think more mundane aspects like a good "user directed" manual
A group of volunteers and I are at the moment working on the Boost.Test documentation update. If you have any particular requests and/or are willing to participate - please let us hear it.
conceptual transparency, idiot-proofness etc seem to lose importance.
You need to be clearer for me to understand what exactly you are "missing".
This is not a criticism of Boost Test but an observation on how things get prioritised in all Boost libraries. I think that adding the "new features" is more interesting than making a better manual and examples, so the latter tends to lag. But to answer your question about what I'm "missing", what I would like to see is:

Introduction
Tutorials
    small commented example and narrative
    Minimal Test facility
    Unit Test framework
    Program Test
    ...
Implementation

Or maybe:

Introduction
Minimal Test facility
    Tutorial - commented example and narrative
    Reference - implementation notes, etc.
Unit Test Framework - includes all the facility of the Minimal Test facility + ?
    Tutorial - commented example and narrative
    Implementation notes, etc. ?
Other stuff
    Usage advice
    specific compiler issues
    etc.
    ...

Basically, I would love to copy the tutorial to my own project and edit it to make my test. That might sound ridiculous - but that's what I would like to be able to do.
Actually, maybe my concerns can be addressed by going back into Boost Test and using the "minimal" option. If it's not possible now, it should be easy for Boost Test to be jiggered around so that functionality comes in several levels - for example:
a) minimal - see below
b) standard - sufficient facilities for simple regression testing - works on all conceivable compilers - LOL
c) better - more features - compiler support at discretion of the boost test library author. Would require a table of which features are supported by which compilers. This would be free as the table is already generated by the boost regression testing.
d) deluxe - every bell and whistle ever suggested that the library author wants to implement.
Essentially, the state of affairs is like this already. There is the Minimal Testing Facility component (header only). There is the Unit Test Framework, which works on most compilers.
I suspected as much, but it doesn't jump out from a cursory examination of the manual.
And there are new features that are ifdefed out for old compilers.
Which is ok by me.
But my position is that we need to consciously move away from old compilers to make the code base healthier.
The requirement for per-feature compiler support tables is also scary. I would really prefer not to do this.
I'm not sure what this refers to. I would just be happy to know that Boost Test won't be improved in such a way that it breaks my old stuff that doesn't use the new features.
As I said before, I think that Boost Test could be much, much more important than it is in drawing new users to Boost. For this to happen I would like to see: a) an easier to read and use manual b) a minimal level implemented as header-only, so that users in need of immediate gratification could get it. This would address the user who turns to Boost because he's under huge pressure to find a bug. Solving someone's problem in less than two hours is going to turn anyone into a Boost booster.
This looks like two contradictory goals: you may not be able to solve someone's problem in 2 hours using a facility with minimal functionality.
LOL - again, I think you're underestimating the impact even a minimal facility can have. Remember that the usual alternative is to do no unit testing at all.

I envision a common situation - (in fact, at this very moment I'm stuck on another project and I find myself in this exact situation). I'm working on my next whiz-bang project and I'm under huge pressure to fix a bug. I've been working on it for days with no luck. Now I realize that it's much deeper than I thought and that it could be anywhere. In desperation I look to Boost and find Boost Test. The introduction shows me the joys of unit testing, which I haven't been using. I'm really desperate and will try anything that only takes two hours to try. I download the Boost headers, copy the tutorial example from Boost Test and make a test for one of my routines. 2 hours. Still haven't found my bug, but since I still don't know what to do, I repeat the process for the rest of the program. The bug turns out to be pretty stupid and easy to spot - if I had only thought to look in the right place. I leave the Boost Test stuff in because now it's "free". I've been kidnapped into the Boost community in spite of myself.

Contrast that with the current situation. I look into Boost Test. Well, reading the documents is a couple of hours. Then there is bjam and library building and linking. Right away I'm on to something else. Now you can argue that the world shouldn't be like this, and maybe you're right - but I think that Boost Test could work as described above just by making the manual easier to read - so I have high hopes for the new manual.

Robert Ramey

PS: fyi - my current situation is programming a Gameboy Color to implement a hang-gliding flight instrument. This thing is a bitch to program. I could sure use Boost Test here - any chance of a straight "C" version? RR

Basically, I would love to copy the tutorial to my own project and edit it to make my test. That might sound ridiculous - but that's what I would like to be able to do.
Could you be more specific: what kind of tutorial, and what topics should it cover?
But my position is that we need to consciously move away from old compilers to make the code base healthier.
The requirement for per-feature compiler support tables is also scary. I would really prefer not to do this.
I'm not sure what this refers to.
For example, you just commented out support for the random ordering of test cases. Now I need to mark in the docs that this particular compiler doesn't support this feature. It is some work to maintain such a table.
This looks like two contradictory goals: you may not be able to solve someone's problem in 2 hours using a facility with minimal functionality.
LOL - again, I think you're underestimating the impact even a minimal facility can have. Remember that the usual alternative is to do no unit testing at all.
In my personal opinion, from a usability/learning curve standpoint there is no reason to use anything but the complete Unit Test Framework. It's just as easy, and in the long term much more powerful.
I envision a common situation - (in fact, at this very moment I'm stuck on another project and I find myself in this exact situation). I'm working on my next whiz-bang project and I'm under huge pressure to fix a bug. I've been working on it for days with no luck. Now I realize that it's much deeper than I thought and that it could be anywhere. In desperation I look to Boost and find Boost Test. The introduction shows me the joys of unit testing, which I haven't been using. I'm really desperate and will try anything that only takes two hours to try. I download the Boost headers, copy the tutorial example from Boost Test and make a test for one of my routines. 2 hours. Still haven't found my bug, but since I still don't know what to do, I repeat the process for the rest of the program. The bug turns out to be pretty stupid and easy to spot - if I had only thought to look in the right place. I leave the Boost Test stuff in because now it's "free". I've been kidnapped into the Boost community in spite of myself.
It's all good and interesting, but would you prefer the minimal testing component? Say you are stuck with the single BOOST_CHECK tool and can't figure out why a particular assertion fails. You could use a debugger, but using BOOST_CHECK_EQUAL would give you a much better chance of figuring it out quickly without one.
Contrast that with the current situation. I look into Boost Test. Well, reading the documents is a couple of hours. Then there is bjam and library building and linking. Right away I'm on to something else.
Let me point again to the existence of the inlined components, which allow you to skip library building. Also, just reading the getting started page should give you enough to start in your scenario above.
fyi - my current situation is programming a Gameboy Color to implement a hang-gliding flight instrument. This thing is a bitch to program. I could sure use Boost Test here - any chance of a straight "C" version?
I doubt it. ;) Gennadiy

On Tue, 31 Jan 2006 17:26:29 -0500 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
It's all good and interesting, but would you prefer the minimal testing component? Say you are stuck with the single BOOST_CHECK tool and can't figure out why a particular assertion fails. You could use a debugger, but using BOOST_CHECK_EQUAL would give you a much better chance of figuring it out quickly without one.
Without a doubt, Boost.Test singlehandedly eliminated one of my biggest objections to test-first coding. I still don't do it near as much as I would like, but Boost.Test surely eliminated one of my biggest obstacles. I really like what it provides. However... I still can't figure out how to use the macro that checks equivalence between floating point numbers. The interface is just too strange, and the docs are not very clear. I played around with it for a while, and then just stopped trying to use it. That's really my biggest complaint with the test library (so it can't be too bad, for me at least ;-).

Jody Hagins writes:
On Tue, 31 Jan 2006 17:26:29 -0500 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
It's all good and interesting, but would you prefer the minimal testing component? Say you are stuck with the single BOOST_CHECK tool and can't figure out why a particular assertion fails. You could use a debugger, but using BOOST_CHECK_EQUAL would give you a much better chance of figuring it out quickly without one.
Without a doubt, Boost.Test singlehandedly eliminated one of my biggest objections to test-first coding. I still don't do it near as much as I would like, but Boost.Test surely eliminated one of my biggest obstacles.
I really like what it provides.
FWIW, Boost Test is now _the_ framework for our unit tests. It's far, far better than anything else we've tried.
However... I still can't figure out how to use the macro that checks equivalence between floating point numbers. The interface is just too strange, and the docs are not very clear.
It's not perfect, and yes, the floating point stuff is a little odd. I ended up wrapping it into something that was a little easier for the other coders to deal with. Out of curiosity, what sort of interface would you like? (I'm looking for ideas.)

-- Dave Steffen, Ph.D.
Software Engineer IV, Numerica Corporation
ph (970) 419-8343 x27, fax (970) 223-6797, dgsteffen@numerica.us

Nowlan's Theory: He who hesitates is not only lost, but several miles from the next freeway exit.
The shortest distance between two points is under construction. -- Noelie Alito

On Tue, 31 Jan 2006 17:05:28 -0700 Dave Steffen <dgsteffen@numerica.us> wrote:
It's not perfect, and yes, the floating point stuff is a little odd. I ended up wrapping it into something that was a little easier for the other coders to deal with. Out of curiosity, what sort of interface would you like? (I'm looking for ideas.)
I'm not sure... I just know that the current interface is too unwieldy. A large number of my use cases can be done with a target value and an epsilon on either side (or a range). I like the idea of "closeness" as defined in the Boost.Test stuff, but I just don't get the interface.
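For context, the usual stumbling block, as far as I understand the interface (a sketch): the tolerance argument of BOOST_CHECK_CLOSE is a percentage, not the absolute epsilon many people expect.

    #define BOOST_TEST_MAIN
    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( closeness )
    {
        double computed = 0.1 + 0.2;
        // The third argument is a tolerance in PERCENT: the two values
        // must agree to within 0.0001% of each other, not within 1e-4.
        BOOST_CHECK_CLOSE( computed, 0.3, 1e-4 );
    }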

However... I still can't figure out how to use the macro that checks equivalence between floating point numbers. The interface is just too strange, and the docs are not very clear.
I played around with it for a while, and then just stopped trying to use it. That's really my biggest complaint with the test library (so it can't be too bad, for me at least ;-).
I am working on the docs. Once they are done, you could read them again and let me know how clear they are. Gennadiy

| Without a doubt, Boost.Test singlehandedly eliminated one of
| my biggest objections to test-first coding.
|
| I really like what it provides.

Me too - but I found getting started VERY confusing and almost gave up. But Gennadiy now has a whole TEAM of writers on the job of producing better documentation - and they don't have the crippling disadvantage (for this task) of having written the code. With his new macros, I am confident we can produce something very simple. But either bjam has to work a lot better than it has for me, or, better, we have to ship pre-built library files for the major users. Given those, it's a doddle. And it gives you a nice record of what you've tested - and what you haven't ;-)

| However... I still can't figure out how to use the macro that checks
| equivalence between floating point numbers. The interface is just too
| strange, and the docs are not very clear.

I am working on this as we speak.

Paul

-- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204
mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html
http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html

Jody Hagins <jody-boost-011304@atdesk.com> writes:
On Tue, 31 Jan 2006 17:26:29 -0500 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
It's all good and interesting, but would you prefer the minimal testing component? Say you are stuck with the single BOOST_CHECK tool and can't figure out why a particular assertion fails. You could use a debugger, but using BOOST_CHECK_EQUAL would give you a much better chance of figuring it out quickly without one.
Without a doubt, Boost.Test singlehandedly eliminated one of my biggest objections to test-first coding.
Which objection and how does Boost.Test eliminate it? -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Wed, 01 Feb 2006 09:34:14 -0500 David Abrahams <dave@boost-consulting.com> wrote:
It's all good and interesting, but would you prefer the minimal testing component? Say you are stuck with the single BOOST_CHECK tool and can't figure out why a particular assertion fails. You could use a debugger, but using BOOST_CHECK_EQUAL would give you a much better chance of figuring it out quickly without one.
Without a doubt, Boost.Test singlehandedly eliminated one of my biggest objections to test-first coding.
Which objection and how does Boost.Test eliminate it?
Early attempts were stopped because creating and running the tests were just too difficult and time consuming. Later attempts resulted in a mix of C/C++ with test scripts written in python, perl, ruby, etc. This was better, but there was still way too much going on, and it took a long time to use the system, write tests, run scripts, and such. The test framework was not clean and easy to use.

I experienced a fair amount of effort in starting up with Boost.Test (about a week or so), but once I started using it and got a handle on the parts that seemed most useful to me, it became very easy to write tests and integrate them into the build process. The framework allows for easy test-writing. I use the library, but part of my Boost install is to build all libs anyway, so it's not a big deal. I can run "make tests" at the top of my build tree, and it will build all the code and run the tests. When I need more tests, I can simply create a new .cpp file with the basic few lines of template, and start inserting my tests. For more complex tests, I use the unit test classes, but for almost everything else, the basic stuff is plenty. Most of the TEST macros are very easy to use, though the documentation can be a bit confusing at times. I was quite surprised at the depth of this library after using it for a while.

Thus, after integrating Boost.Test into my process, I find writing and maintaining good tests is not so difficult (I still have to write proxy objects for complex integration testing... someone needs to come up with a good solution for that). I really like the flexible error reporting. I do not set any environment variables in my shell. Instead, my makefiles handle all of that (by default, they do nothing, I think, which means the defaults are reasonable --- for me at least), and I can override it when I call make.

When I am doing things the right way, I can quickly write up some tests, then develop until they all pass. Sometimes, as I write the implementation, I think of a strange scenario that I have to code around. When that happens I SHOULD go write the test. Before using Boost.Test I NEVER did that, because it was too cumbersome, and often the test needed to change as well, causing more problems. The Boost.Test tools make writing them much simpler than my previous tools. I really don't have an excuse to do otherwise (at least for unit tests). I am still lacking good integration testing tools. Lots of my code is related to distributed systems, so I have to write lots of proxy objects. Blech.

On Wed, 1 Feb 2006 14:08:19 -0500 Jody Hagins <jody-boost-011304@atdesk.com> wrote:
Early attempts were stopped because creating and running the tests were just too difficult and time consuming. Later attempts resulted in a mix of C/C++ with test scripts written in python, perl, ruby, etc. This was better, but there was still way too much going on, and it took a long time to use the system, write tests, run scripts, and such. The test framework was not clean and easy to use.
FWIW, I tried CPPUnit and other similar tools, but I think Boost.Test is more flexible and easier to use (at least as of the last time I looked at the other tools)...

Jody Hagins <jody-boost-011304@atdesk.com> writes:
On Wed, 01 Feb 2006 09:34:14 -0500 David Abrahams <dave@boost-consulting.com> wrote:
It's all good and interesting, but would you prefer the minimal testing component? Say you are stuck with the single BOOST_CHECK tool and can't figure out why a particular assertion fails. You could use a debugger, but using BOOST_CHECK_EQUAL would give you a much better chance of figuring it out quickly without one.
Without a doubt, Boost.Test singlehandedly eliminated one of my biggest objections to test-first coding.
Which objection and how does Boost.Test eliminate it?
Early attempts were stopped because creating and running the tests were just too difficult and time consuming. Later attempts resulted in a mix of C/C++ with test scripts written in python, perl, ruby, etc. This was better, but there was still way too much going on, and it took a long time to use the system, write tests, run scripts, and such. The test framework was not clean and easy to use.
I experienced a fair amount of effort in starting up with Boost.Test (about a week or so), but once I started using it and got a handle on the parts that seemed most useful to me, it became very easy to write tests and integrate them into the build process.
The framework allows for easy test-writing.
Easier than BOOST_ASSERT?
I use the library, but part of my Boost install is to build all libs anyway, so it's not a big deal.
I can run "make tests" at the top of my build tree, and it will build all the code, and run the tests.
That's all build system stuff and has nothing to do with the library. I do bjam test and get the same result.
When I need more tests, I can simply create a new .cpp file with the basic few lines of template, and start inserting my tests.
    #include <boost/assert.hpp>

    int main()
    {
        ...
        BOOST_ASSERT( whatever );
        ...
        BOOST_ASSERT( whatever );
    }
For more complex tests, I use the unit test classes,
What classes, please?
but for almost everything else, the basic stuff is plenty.
I think that might be my point.
Most of the TEST macros are very easy to use, though the documentation can be a bit confusing at times. I was quite surprised at the depth of this library after using it for a while.
Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
Thus, after integrating Boost.Test into my process, I find writing and maintaining good tests is not so difficult (I still have to write proxy objects for complex integration testing... someone needs to come up with a good solution for that).
Proxy objects? Oh, I think someone described this to me: a proxy object is basically a stub that you use as a stand-in for the real thing? There was a guy I met at SD Boston who was all fired up with ideas for such a library -- I think he had written one. I encouraged him to post about it on the Boost list so the domain experts and people who really cared could respond, and he said he would, but... well, as in about 75% of such scenarios, he never did.
I really like the flexible error reporting. I do not set any environment variables in my shell. Instead, my makefiles handle all of that (by default, they do nothing, I think, which means the defaults are reasonable --- for me at least), and I can override it when I call make.
When I am doing things the right way, I can quickly write up some tests, then develop until they all pass. Sometimes, as I write the implementation, I think of a strange scenario that I have to code around. When that happens I SHOULD go write the test. Before using Boost.Test I NEVER did that, because it was too cumbersome, and often the test needed to change as well, causing more problems.
I don't see why that was hard before Boost.Test. I have absolutely no problem writing new tests using BOOST_ASSERT. Having the Boost.Build primitives to specify tests is a big help, but as I've said, that's independent of any language-level testing facility.
The Boost.Test tools make writing them much simpler than my previous tools. I really don't have an excuse to do otherwise (at least for unit tests). I am still lacking good integration testing tools. Lots of my code is related to distributed systems, so I have to write lots of proxy objects. Blech.
Blech. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Wed, 01 Feb 2006 15:47:46 -0500 David Abrahams <dave@boost-consulting.com> wrote: Sorry for the length, but I thought I should at least try to give you my opinion in more than "I like it because I like it" terminology.
The framework allows for easy test-writing.
Easier than BOOST_ASSERT?
Everything is relative. I think it is as easy to use, and provides more flexibility. When I start a new test file, I can hit a macro key in vim and I get my Boost.Test starter code inserted immediately. From then on, I just write tests. There are lots of different "checks" that can be made, and a ton of extra stuff for those complex cases.
That's all build system stuff and has nothing to do with the library. I do
bjam test
and get the same result.
Yes. I was just saying that because I have said before that I do not use bjam except to build Boost, and I wanted to emphasize that setting up the make environment was more complex than using Boost.Test.
When I need more tests, I can simply create a new .cpp file with the basic few lines of template, and start inserting my tests.
    #include <boost/assert.hpp>

    int main()
    {
        ...
        BOOST_ASSERT( whatever );
        ...
        BOOST_ASSERT( whatever );
    }
If all your tests can be expressed with BOOST_ASSERT, then you may not have need for Boost.Test. However, using Boost.Test is not much more work...

    #define BOOST_AUTO_TEST_MAIN
    #include <boost/test/auto_unit_test.hpp>

    BOOST_AUTO_UNIT_TEST(test_whatever)
    {
        ...
        BOOST_CHECK( whatever );
        ...
        BOOST_CHECK( whatever );
    }

I can then separate each related test into an individual test function, and get reports based on the status of each test. For each related test, I can just add another function...

    BOOST_AUTO_UNIT_TEST(test_whomever)
    {
        ...
        BOOST_CHECK( whomever );
        ...
        BOOST_CHECK( whomever );
    }

I default to using the linker option. There used to be a header-only option, but since I never used it, I'm not sure if it still exists. If not, you will have to link against the compiled Boost.Test library.
For more complex tests, I use the unit test classes,
What classes, please?
There are many useful features to automatically test collections, sets of items, templates, etc. The one I've used most often is test_suite (boost/libs/test/doc/components/utf/components/test_suite/index.html), which provides extra flexibility to run and track multiple test cases. One interesting feature I played with was using the dynamic test suites to select test suites based upon the results of other tests. This is really useful for running some distributed tests when some portions may fail for reasons external to the test. In these cases, the suites can configure which tests run based on dynamic constraints.
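A sketch of the manual (non-auto) registration interface this refers to; the test functions and the environment probe are placeholders:

    #include <boost/test/unit_test.hpp>
    using namespace boost::unit_test;

    void core_test()    { BOOST_CHECK( 2 + 2 == 4 ); }
    void network_test() { BOOST_CHECK( true ); }

    test_suite* init_unit_test_suite( int, char*[] )
    {
        test_suite* suite = BOOST_TEST_SUITE( "master" );
        suite->add( BOOST_TEST_CASE( &core_test ) );

        bool network_up = false;          // placeholder for a real probe
        if ( network_up )                 // register conditionally, at run time
            suite->add( BOOST_TEST_CASE( &network_test ) );
        return suite;
    }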
but for almost everything else, the basic stuff is plenty.
I think that might be my point.
Sure, but even with Boost.Test you get lots of stuff with the "basic" tests, such as reporting success/failure and isolating test conditions. Further, you have the additional features to handle more complex conditions. I have lots of tests that were written with BOOST_ASSERT before I started using Boost.Test... there was a long time where I was apprehensive about diving in, but once I did, I found it very helpful. If I want simple, I can still get it.
Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
In Boost.Test terminology, you would replace BOOST_ASSERT with BOOST_REQUIRE. If that's all you need, then you can do a simple replacement. One immediate advantage is the logging and reporting features of Boost.Test. It generates nice output, and can generate XML of the test results as well. If you have to process test output, that is an immediate advantage.

Beyond that, there are three "levels": CHECK, WARN, and REQUIRE. Output and error reporting can be tailored for each level. Also, the levels determine whether testing should continue. For some tests, you want to halt immediately upon failure. For others, you want to do as much as possible. It is often beneficial to know that test A failed while tests B and C passed.

The "tools" come in many varieties. I use the EQUAL tools very frequently, since when a test fails the log reports that it failed, and it also reports the value of each item being tested. For example, BOOST_CHECK_EQUAL(x, y); will check to make sure that x and y are equal. If they are not, then the test failure will print (among other stuff) the values of x and y. This is a don't-care when tests pass, but when they fail it is extremely helpful, since the failing state is readily available in the failure reports.

There are also special tools to check for exceptions being thrown/not thrown. It is trivial to write code that either expects an exception or must not receive an exception. Writing tests for exceptional conditions is usually difficult, but the tools in Boost.Test make it much easier. One of the things I like is the runtime test checks for macro definitions and values. All of those are VERY easy to use.

The ones I have a hard time with are the tools that provide checking for floating point numbers (BOOST_CHECK_CLOSE and friends). It checks two numbers to see if they are "close enough" to each other. Unfortunately, I couldn't get it working how I understood it, so I don't use it -- though I'd like to. I actually think that a big advantage of Boost.Test is the tools that make specific testing much easier. Unfortunately, the one that just about everyone gets wrong is testing the closeness of floats. I'm sure I'm the only one with this problem, because I posted about it a while back, and the response seemed pretty clear, but I still didn't understand it.

You ought to skim over the reference documents: boost/libs/test/doc/components/test_tools/reference/index.html

In addition, there are some nifty tools that allow you to easily write tests for templatized functions, classes, and metatypes. I've not seen anything like that, and to duplicate it with BOOST_ASSERT would require a lot of test code.
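The levels and tools just listed, condensed into a sketch (parse() is a made-up function for illustration):

    #define BOOST_TEST_MAIN
    #include <boost/test/included/unit_test.hpp>
    #include <stdexcept>

    int parse( int x )
    {
        if ( x < 0 ) throw std::invalid_argument( "negative" );
        return x;
    }

    BOOST_AUTO_TEST_CASE( levels_and_tools )
    {
        int x = 2, y = 3;
        BOOST_WARN( x > y );            // WARN: failure is only reported
        BOOST_CHECK_EQUAL( x + 1, y );  // CHECK: on failure, prints both values
        BOOST_REQUIRE( y != 0 );        // REQUIRE: failure aborts this test case
        BOOST_CHECK_THROW( parse( -1 ), std::invalid_argument );  // must throw
        BOOST_CHECK_NO_THROW( parse( 5 ) );                       // must not throw
    }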
Thus, after integrating Boost.Test into my process, I find writing and maintaining good tests is not so difficult (I still have to write proxy objects for complex integration testing... someone needs to come up with a good solution for that).
Proxy objects? Oh, I think someone described this to me: a proxy object is basically a stub that you use as a stand-in for the real thing? There was a guy I met at SD Boston who was all fired up with ideas for such a library -- I think he had written one. I encouraged him to post about it on the Boost list so the domain experts and people who really cared could respond, and he said he would, but... well, as in about 75% of such scenarios, he never did.
Right. They are very important, especially for integration testing. You create a proxy of the "other" object, and then do all your interaction testing. You can make the other object behave in any way provided by the interface, so it is a good way to test. You can even make the other object seem "buggy" to test error handling and such. What I would really like is something like the expect scripting language for integration testing. I like it, but it is a huge PITA to write all those proxies. However, when you do it, your actual integration testing is trivial. Substitute the proxies with the real thing and run the same tests (this is where some of the test_suite objects can come in handy, as they allow you to easily replace them and "borrow" tests).
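As a sketch of the hand-written proxies being described (the Fridge interface and its scripted behaviour are hypothetical, not from any library):

    // The code under test talks to this interface rather than to the real remote object.
    struct Fridge
    {
        virtual ~Fridge() {}
        virtual int get_food( const char* name, int count ) = 0;
    };

    // A hand-written proxy: the test scripts its behaviour, including
    // "buggy" behaviour, to exercise the caller's error handling.
    struct FridgeProxy : Fridge
    {
        bool fail;
        explicit FridgeProxy( bool fail_ ) : fail( fail_ ) {}
        virtual int get_food( const char*, int count )
        {
            return fail ? 0 : count;  // simulate an empty fridge on demand
        }
    };

Swapping the proxy for the real implementation then lets the same test suite serve as an integration test, as described above.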
I don't see why that was hard before Boost.Test. I have absolutely no problem writing new tests using BOOST_ASSERT. Having the Boost.Build primitives to specify tests is a big help, but as I've said, that's independent of any language-level testing facility.
Right. However, with BOOST_ASSERT (or anything similarly simple), you are limited in what you can do, and it is more difficult to track down problems. I could write tests in any manner, and trust me, I've written plenty. However, in the end I always end up with some complex stuff. As I said, the trivial tests could be done in any manner you like, but anything beyond trivial tests is easier done, and failures are easier to decipher, with Boost.Test. If everything is done with Boost.Test, it is all integrated and similar, and the report processing is the same, no matter the complexity of the test. I didn't write it, and I'm probably not the best defender of its functionality. However, I have found it to be extremely useful. If all I had were simple tests, I probably would never have tried it. However, my needs are more than simple, and it addresses the complex AND the simple (though I wish it had EVEN MORE to offer, especially w.r.t. proxy objects... did I say that already?). In fact, since it is used for all my NEWER tests, it is the most used Boost component (our older tests are still done with assert() or by writing output to stdout/stderr and having scripts interpret the output, and some are even still interactive -- if we are lucky they are driven by expect or some other script).

Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
So you choose to use BOOST_ASSERT. That essentially means that you couldn't have more than one failure. Now, believe it or not, there exists a whole different testing culture where people start with *all* of their assertions failing, and then work to "fix" them all. And it's not that rare (at best - nowadays TDD is getting widely spread).

The Test Tools provide a wide variety of "checks" different from a trivial assert. Their primary advantage is that they provide as much information as possible in case of failure. With assert-based testing you are bound to invoke the debugger in 90% of detected failures, and only in 10% is it clear from the output what is going on. With smarter tools you can deduce the error cause without debugging in 90% of cases, and only 10% require a more detailed excursion inside the test module. Also, BOOST_ASSERT is no help when you need to test that a particular expression does emit some exception.

The Execution Monitor helps by catching all errors and reporting them in a similar manner. If BOOST_ASSERT( expr ) emits an unexpected exception, you have no choice but to dig into your program to see what is going on. With the Execution Monitor there is a big chance that the exception gets detected and reported automatically. You don't like that it catches fatal system errors either; but with the Unit Test framework that is easily configurable.

Another topic is test organization. Boost.Test allows you to build complex test trees. In addition, it automates the task of testing C++ templates with different sets of template parameters, and of testing parameterized functions with different sets of runtime arguments. The latest additions are the ability to test exception safety (along the lines of the work you did) and a facility for logged expectation testing (mostly useful for interaction/boundary testing). There is more to it. If you are really interested, check the docs (well, better wait till we update them). Regards, Gennadiy
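To make the first point concrete, a small sketch (part_a and part_b are hypothetical pieces of the system under test): with assert() the first failing statement aborts the run, while CHECK-level tools record every failure and keep going:

    #include <boost/test/unit_test.hpp>

    bool part_a();  // hypothetical
    bool part_b();  // hypothetical

    void test_independent_parts()
    {
        // assert( part_a() ); would stop the whole run here on failure;
        BOOST_CHECK( part_a() );  // failure is logged and counted...
        BOOST_CHECK( part_b() );  // ...and this check still runs
    }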

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
So you choose to use BOOST_ASSERT. That essentially means that you couldn't have more than one failure.
Usually my presumption is that if an assert fails, I can't really have confidence in anything thereafter anyway.
Now, believe it or not, there exists a whole different testing culture where people start with *all* of their assertions failing, and then work to "fix" them all. And it's not that rare (at best - nowadays TDD is getting widely spread).
Hmm. So you might want to start fixing them without starting from the first one?
The Test Tools provide a wide variety of "checks" different from a trivial assert. Their primary advantage is that they provide as much information as possible in case of failure. With assert-based testing you are bound to invoke the debugger in 90% of detected failures, and only in 10% is it clear from the output what is going on. With smarter tools you can deduce the error cause without debugging in 90% of cases, and only 10% require a more detailed excursion inside the test module. Also, BOOST_ASSERT is no help when you need to test that a particular expression does emit some exception.
True. I do that in my Python tests all the time, but Python is better at that than native C++ is.
The Execution Monitor helps by catching all errors and reporting them in a similar manner. If BOOST_ASSERT( expr ) emits an unexpected exception, you have no choice but to dig into your program to see what is going on. With the Execution Monitor there is a big chance that the exception gets detected and reported automatically. You don't like that it catches fatal system errors either; but with the Unit Test framework that is easily configurable. Another topic is test organization. Boost.Test allows you to build complex test trees.
Why is that good? At some point doesn't it make sense to write a separate test program?
In addition, it automates the task of testing C++ templates with different sets of template parameters, and of testing parameterized functions with different sets of runtime arguments.
Really? I guess there's a lot I haven't seen in the documentation. Is there a preview of the new docs somewhere?
The latest additions are the ability to test exception safety (along the lines of the work you did) and a facility for logged expectation testing (mostly useful for interaction/boundary testing). There is more to it. If you are really interested, check the docs (well, better wait till we update them).
Well, I have checked the docs, and didn't see much of it. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
So you choose to use BOOST_ASSERT. That essentially means that you couldn't have more than one failure.
Usually my presumption is that if an assert fails, I can't really have confidence in anything thereafter anyway.
Fans of assert-based testing may want to check out <boost/detail/lightweight_test.hpp>, which is what I use for testing smart_ptr and bind. :-)
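For reference, a minimal sketch of how that header is used, as I understand it (the values are invented): failures are recorded rather than aborting, and report_errors() returns the failure count as the exit status.

    #include <boost/detail/lightweight_test.hpp>

    int main()
    {
        int x = 2;
        BOOST_TEST( x + 2 == 4 );      // on failure: prints file/line and continues
        BOOST_TEST( x * 2 == 4 );
        return boost::report_errors(); // non-zero exit status if any BOOST_TEST failed
    }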

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Peter Dimov
| Sent: 02 February 2006 15:32
| To: boost@lists.boost.org
| Subject: Re: [boost] [Release 1.34] Supported Compilers - another view
|
| David Abrahams wrote:
| > "Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
| >
| >>> Yes. What do you get from those macros that's very useful beyond
| >>> what BOOST_ASSERT supplies? I really want to know. Some people
| >>> I'll be consulting with next week want to know about testing
| >>> procedures for C++, and if there's a reason to recommend
| >>> Boost.Test, I'd like to do that.
| >>
| >> So you choose to use BOOST_ASSERT. That essentially means that you
| >> couldn't have more than one failure.
| >
| > Usually my presumption is that if an assert fails, I can't really have
| > confidence in anything thereafter anyway.
|
| Fans of assert-based testing may want to check out
| <boost/detail/lightweight_test.hpp>, which is what I use for testing
| smart_ptr and bind. :-)

Fine, but it doesn't handle floating-point comparisons so easily.

Paul

-- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204 mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html

Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
So you choose to use BOOST_ASSERT. That essentially means that you couldn't have more than one failure. Usually my presumption is that if an assert fails, I can't really have confidence in anything thereafter anyway.
So you need to arrange statements in your test function in order of dependency (assuming such an order exists). What if the class contains some independent parts?
Now, believe it or not, there exists a whole different testing culture where people start with *all* of their assertions failing, and then work to "fix" them all. And it's not that rare (at best - nowadays TDD is getting widely spread).
Hmm. So you might want to start fixing them without starting from the first one?
Sure. It's like a formalized to-do list.
Another topic is test organization. Boost.Test allows you to build complex test trees.
Why is that good? At some point doesn't it make sense to write a separate test program?
At some point - yes. We just have different opinions about where that point is. In your current practice you put all your assertions into a single test function, and then, when you collect enough of them, you split them into a new file. Now let's take a look at the other side of the fence. TDD people put a *single* assertion per test case. Obviously that means _a lot of_ test cases within the same test module. And organizing them in test suites (feature-based, stage-based, or any other way) is the natural next step (especially if you consider suite-level fixtures).
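A sketch of such a test tree, using the classic manual-registration API (all names here are invented):

    #include <boost/test/unit_test.hpp>
    using namespace boost::unit_test;

    void feature_a_case1();  // hypothetical test functions,
    void feature_a_case2();  // one assertion each, TDD style
    void feature_b_case1();

    test_suite* init_unit_test_suite( int, char* [] )
    {
        test_suite* feature_a = BOOST_TEST_SUITE( "feature A" );
        feature_a->add( BOOST_TEST_CASE( &feature_a_case1 ) );
        feature_a->add( BOOST_TEST_CASE( &feature_a_case2 ) );

        test_suite* feature_b = BOOST_TEST_SUITE( "feature B" );
        feature_b->add( BOOST_TEST_CASE( &feature_b_case1 ) );

        test_suite* top = BOOST_TEST_SUITE( "my module" );
        top->add( feature_a );  // suites nest inside suites, forming the test tree
        top->add( feature_b );
        return top;
    }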
In addition, it automates the task of testing C++ templates with different sets of template parameters, and of testing parameterized functions with different sets of runtime arguments.
Really? I guess there's a lot I haven't seen in the documentation. Is there a preview of the new docs somewhere?
This has been present for a while; it was reworked completely last release. I do not want to point you to the unclear docs. Better take a look at unit_test_example_07.cpp and unit_test_example_11.cpp for reference for now.
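As a sketch of the runtime-argument side of that feature, using BOOST_PARAM_TEST_CASE (the checked function and parameter values are invented; the template-parameter side is what the example files above demonstrate):

    #include <boost/test/unit_test.hpp>
    #include <boost/test/parameterized_test.hpp>
    using namespace boost::unit_test;

    void check_positive( int value )
    {
        BOOST_CHECK( value > 0 );  // invoked once per parameter
    }

    test_suite* init_unit_test_suite( int, char* [] )
    {
        static const int params[] = { 1, 2, 3, 5, 8 };
        test_suite* suite = BOOST_TEST_SUITE( "parameterized" );
        suite->add( BOOST_PARAM_TEST_CASE( &check_positive, params, params + 5 ) );
        return suite;
    }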
The latest additions are the ability to test exception safety (along the lines of the work you did) and a facility for logged expectation testing (mostly useful for interaction/boundary testing). There is more to it. If you are really interested, check the docs (well, better wait till we update them).
Well, I have checked the docs, and didn't see much of it.
New features are not yet documented. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
Yes. What do you get from those macros that's very useful beyond what BOOST_ASSERT supplies? I really want to know. Some people I'll be consulting with next week want to know about testing procedures for C++, and if there's a reason to recommend Boost.Test, I'd like to do that.
So you choose to use BOOST_ASSERT. That essentially means that you couldn't have more than one failure. Usually my presumption is that if an assert fails, I can't really have confidence in anything thereafter anyway.
So you need to arrange statements in your test function in order of dependency (assuming such an order exists). What if the class contains some independent parts?
Normally I use separate test programs to test independent things. That admittedly does run into a lot of compilation time. However, so often there's a chance of one small piece failing compilation on a given compiler, and in that case I still want to know the rest is working.
Another topic is test organization. Boost.Test allows you to build complex test trees.
Why is that good? At some point doesn't it make sense to write a separate test program?
At some point - yes. We just have different opinions about where that point is. In your current practice you put all your assertions into a single test function, and then, when you collect enough of them, you split them into a new file.
No, I usually split into a new file each time I'm testing a new aspect of the system.
Now let's take a look at the other side of the fence. TDD people put a *single* assertion per test case. Obviously that means _a lot of_ test cases within the same test module. And organizing them in test suites (feature-based, stage-based, or any other way) is the natural next step (especially if you consider suite-level fixtures).
In addition, it automates the task of testing C++ templates with different sets of template parameters, and of testing parameterized functions with different sets of runtime arguments.
Really? I guess there's a lot I haven't seen in the documentation. Is there a preview of the new docs somewhere?
This has been present for a while; it was reworked completely last release. I do not want to point you to the unclear docs. Better take a look at unit_test_example_07.cpp and unit_test_example_11.cpp for reference for now.
I'll try to have a look. -- Dave Abrahams Boost Consulting www.boost-consulting.com

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of David Abrahams
| Sent: 01 February 2006 20:48
| To: boost@lists.boost.org
| Subject: Re: [boost] [Release 1.34] Supported Compilers - another view
|
| Easier than BOOST_ASSERT?

Well, hardly more difficult - once you have the libraries built.

| Yes. What do you get from those macros that's very useful beyond what
| BOOST_ASSERT supplies? I really want to know. Some people I'll be
| consulting with next week want to know about testing procedures for
| C++, and if there's a reason to recommend Boost.Test, I'd like to do that.

Floating-point tests are really nasty with asserts. Despite the previously confusing documentation (better available Real Soon Now), Boost.Test is MUCH better - it shows exactly what the tests, the values, etc. are. I also like the documentation that you can get - a file that proves what tests you did, and when, and which passed - it shows when things are improving. I HATE the way asserts that fail bring the whole business to a halt. Seeing which of a group of tests fails is a real help. I like it - a lot.

Paul

-- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204 mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html
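On the floating-point point, a minimal sketch (the values are invented); the detail that trips people up is that BOOST_CHECK_CLOSE takes its tolerance as a percentage, not as an absolute epsilon:

    #include <boost/test/unit_test.hpp>
    using namespace boost::unit_test;

    void test_close()
    {
        double computed = 0.1 + 0.2;
        // The third argument is a tolerance in PERCENT: 0.0001 means a
        // 0.0001% relative difference is still accepted as "close".
        BOOST_CHECK_CLOSE( computed, 0.3, 0.0001 );
        // On failure the log shows both values and the actual relative difference.
    }

    test_suite* init_unit_test_suite( int, char* [] )
    {
        test_suite* suite = BOOST_TEST_SUITE( "float checks" );
        suite->add( BOOST_TEST_CASE( &test_close ) );
        return suite;
    }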

The Boost.Test tools make writing them much simpler than my previous tools. I really don't have an excuse to do otherwise (at least for unit tests). I am still lacking good integration testing tools. Lots of my code is related to distributed systems, so I have to write lots of proxy objects. Blech.
This release includes a facility for interaction-based testing. Is it something that would cover your requirements? Gennadiy

On Wed, 1 Feb 2006 15:55:48 -0500 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
This release includes a facility for interaction-based testing. Is it something that would cover your requirements?
I guess it depends on what you mean by that term. I'm not much into the testing literature, so I do not fully understand "interaction based testing."

"Jody Hagins" <jody-boost-011304@atdesk.com> wrote in message news:20060201174431.38eebe66.jody-boost-011304@atdesk.com...
On Wed, 1 Feb 2006 15:55:48 -0500 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
This release includes a facility for interaction-based testing. Is it something that would cover your requirements?
I guess it depends on what you mean by that term. I'm not much into the testing literature, so I do not fully understand "interaction based testing."
From what you are describing in the other post, it's exactly what you need.
Look specifically at mock_object.hpp - a facility that is supposed to help create these mock/proxy objects. Gennadiy

Gennadiy Rozental wrote:
"Jody Hagins" <jody-boost-011304@atdesk.com> wrote in message news:20060201174431.38eebe66.jody-boost-011304@atdesk.com...
On Wed, 1 Feb 2006 15:55:48 -0500 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
This release includes a facility for interaction-based testing. Is it something that would cover your requirements?
I guess it depends on what you mean by that term. I'm not much into the testing literature, so I do not fully understand "interaction based testing."
From what you are describing in the other post, it's exactly what you need.
Look specifically at mock_object.hpp - a facility that is supposed to help create these mock/proxy objects.
[sorry for intruding] Judging from the earlier description, I'd guess Jody meant "real" proxy objects (something marshalling function calls across process/machine boundaries). Of course, when testing code that uses these objects/interfaces, stubs would be the thing to use (or perhaps mocks, which I'm personally not too fond of). If I'm all wrong about this, sorry. Anyway, you caught my attention. I checked the mock_object.hpp header and also found a couple of usage examples under the 'examples' directory. What I was missing, though, was examples of setting up / checking the expectations - did I just miss something, or aren't they there (yet)? Any chance of having this functionality documented for 1.34? // Johan

Anyway, you caught my attention. I checked the mock_object.hpp header and also found a couple of usage examples under the 'examples' directory. What I was missing, though, was examples of setting up / checking the expectations - did I just miss something, or aren't they there (yet)?
Did you check logged_exp_example.cpp?
Any chance of having this functionality documented for 1.34?
Good chance. Gennadiy

Gennadiy Rozental wrote:
Anyway, you caught my attention. I checked the mock_object.hpp header and also found a couple of usage examples under the 'examples' directory. What I was missing, though, was examples of setting up / checking the expectations - did I just miss something, or aren't they there (yet)?
Did you check logged_exp_example.cpp?
I've checked it and just now rechecked it. It looks like it requires an input file of type .elog, right? I couldn't find that one on my local hard drive, so the test kept failing - is there an option to generate this from a known good scenario, or what? Are there any other methods for setting up the expectations programmatically? Like (contrived pseudo-code example, never mind my cooking ;-):

    namespace
    {
        struct KitchenRobotSuiteFixture
        {
            MockFridge f;
            MockStove  s;
            MockTimer  t;
        };
    }

    BOOST_FIXTURE_SUITE_BEGIN(KitchenRobotSuite, KitchenRobotSuiteFixture);

    BOOST_AUTO_TEST_CASE(MakeHardBoiledEggs)
    {
        const int Temperature = Stove::MaxTemp;
        const int NoOfEggs = 12;

        f.expect_call(Fridge::get_food, "eggs", NoOfEggs);
        s.expect_call(Stove::on, Temperature, Stove::WaitUntilReady);
        t.expect_call(Timer::wait, minutes(12));
        s.expect_call(Stove::off);
        ... etc ...
        <somehow setup the combined call sequence>

        KitchenRobot kr(f, s, t);
        kr.MakeHardBoiledEggs(NoOfEggs);
    }

    BOOST_TEST_SUITE_END();

// Johan

Jody Hagins wrote:
On Wed, 01 Feb 2006 09:34:14 -0500 David Abrahams <dave@boost-consulting.com> wrote:
[snip]
The Boost.Test tools make writing them much simpler than my previous tools. I really don't have an excuse to do otherwise (at least for unit tests). I am still lacking good integration testing tools. Lots of my code is related to distributed systems, so I have to write lots of proxy objects. Blech.
Have you seen http://www.codeproject.com/threads/RMI_For_Cpp.asp ? The library in question was discussed at the boost developers list some time ago, and this appears to be a newly released version. // Johan

Gennadiy Rozental wrote:
Basically I would love to copy the tutorial to my own project and edit it to make my test. That might sound ridiculous - but that's what I would like to be able to do.
Could you be more specific: what kind of tutorial, and what topics should it cover?
The serialization library has an Introduction, a Tutorial ... The tutorial in the serialization library illustrates what I have in mind. In the past it has been referred to as "monkey proof" - which I took as a high compliment. Robert Ramey

On 1/31/06, Robert Ramey <ramey@rrsd.com> wrote: [snip]
I envision a common situation (in fact, at this very moment I'm stuck on another project and I find myself in this exact situation). I'm working on my next whizbang project and I'm under huge pressure to fix a bug. I've been working on it for days with no luck. Now I realize that it's much deeper than I thought and that it could be anywhere. In desperation I look to boost and find Boost Test. The introduction shows me the joys of unit testing, which I haven't been using. I'm really desperate and will try anything that only takes two hours to try. I download the boost headers. I copy the tutorial example from boost test and make a test for one of my routines. 2 hours. Still haven't found my bug, but since I still don't know what to do, I repeat the process for the rest of the program. The bug turns out to be pretty stupid and easy to spot - if I had only thought to look in the right place. I leave the boost test stuff in because now it's "free". I've been kidnapped into the boost community in spite of myself.
Contrast that with the current situation. I look into boost test. Well, reading the documents is a couple of hours. Then there is bjam and library building and linking. Right away I'm on to something else.
I just want to let you know that the case Robert puts is quite common. I have found myself wanting to use Boost.Test for some time already (like 6 months), but I'm just *not finding time to set it up*. [snip] -- Felipe Magno de Almeida

On Wed, 1 Feb 2006 17:49:17 -0200 Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote:
I just want to let you know that the case Robert puts is quite common. I have found myself wanting to use Boost.Test for some time already (like 6 months), but I'm just *not finding time to set it up*.
I'd offer to help, but I'm confused. What extra setup? I'll admit my setup was a bit much, because I added "make test" stuff to all my makefiles and some other "nice" features to my make environment. However, setting up Boost.Test for use was pretty straightforward.

I just want to let you know that the case Robert puts is quite common. I have found myself wanting to use Boost.Test for some time already (like 6 months), but I'm just *not finding time to set it up*.
What exactly do you have difficulty setting up? Gennadiy

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
In my experience, Boost.Test is overpowered for the purposes of Boost regression testing,
Now that is an interesting observation that I would tend to agree with.
It really points to the design of boost test itself. I understand the problem - library development gets driven by feature comparisons with other libraries, wish lists, and "cool" features, and I think the more mundane aspects like a good "user directed" manual, conceptual transparency, idiot-proofness, etc. seem to lose importance.
Actually, maybe my concerns can be addressed by going back into boost test and using the "minimal" option.
That might help, but the author is opposed to adding the facilities needed to make that viable for me. I can't turn off the crash handlers on Windows, for example, so debugging a problem in a "minimal test" application is often prohibitive.
As I said before, I think that Boost Test could be much, much more important than it is in drawing new users to boost. For this to happen I would like to see:
a) an easier to read and use manual
b) a minimal level implemented as header only so that users in need of immediate gratification could get it. This would address the user who turns to boost because he's under huge pressure to find a bug. Solving someone's problem in less than two hours is going to turn anyone into a boost booster.
I realize that the above is my own personal wish list so we have a lot of conceptual recursion here.
On Windows it tends to stand in the way of debugging by "handling" crashes as exceptions rather than invoking JIT or the debugger.
Hmmm - I don't think we want to invoke the debugger in the regression tests.
What happens on the tester's machine if it runs this program?

#include <cassert>
int main() { assert(0); }

If it throws up a dialog and we have no test monitor to kill it, I agree that we ought to have something in place to make sure no dialog comes up. But I was sure the test script *did* start a monitor that could kill off any hung applications (?)
In practice I fire up my debugger on any failed test and set my VC debugger to trap on any exception - this works great for me.
IIRC, you only get to see the program state after the program has unwound to the exception handler in the test library. That's too late. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
IIRC, you only get to see the program state after the program has unwound to the exception handler in the test library. That's too late.
On MSVC 8, Tools/Exceptions (Ctrl-Alt-E on my keyboard config) lets you trap when the exception is thrown, not when it is caught. My pet peeve with debugging my serialization code based on boost is currently the slowness of the debugger when stepping through the code, once the call stack gets huge (deep within serialization, for instance). 20s from one line to the next... -- Loïc

Loïc Joly wrote:
David Abrahams wrote:
IIRC, you only get to see the program state after the program has unwound to the exception handler in the test library. That's too late.
On MSVC 8, Tools/Exceptions (Ctrl-Alt-E on my keyboard config) lets you trap when the exception is thrown, not when it is caught.
My pet peeve with debugging my serialization code based on boost is currently the slowness of the debugger when stepping through the code, once the call stack gets huge (deep within serialization, for instance). 20s from one line to the next...
Hmm - I've never experienced that - but I'm using VC 7.1 Robert Ramey

Robert Ramey wrote:
Loïc Joly wrote:
My pet peeve with debugging my serialization code based on boost is currently the slowness of the debugger when stepping through the code, once the call stack gets huge (deep within serialization, for instance). 20s from one line to the next...
Hmm - I've never experienced that - but I'm using VC 7.1
Time to exit Visual Studio and reboot. I've seen this not just with serialization. Jeff Flinn

On Tue, 31 Jan 2006 17:34:45 -0500 "Jeff Flinn" <TriumphSprint2000@hotmail.com> wrote:
Time to exit Visual Studio and reboot. I've seen this not just with serialization.
Yeah. I have to do it all the time on my linux box ;-)

Jeff Flinn wrote:
Robert Ramey wrote:
Loïc Joly wrote:
My pet peeve with debugging my serialization code based on boost is currently the slowness of the debugger when stepping through the code, once the call stack gets huge (deep within serialization, for instance). 20s from one line to the next...
Hmm - I've never experienced that - but I'm using VC 7.1
Time to exit Visual Studio and reboot. I've seen this not just with serialization.
Did so, no noticeable changes... -- Loïc

Loïc Joly wrote:
Jeff Flinn wrote:
Robert Ramey wrote:
Loïc Joly wrote:
My pet peeve with debugging my serialization code based on boost is currently the slowness of the debugger when stepping through the code, once the call stack gets huge (deep within serialization, for instance). 20s from one line to the next...
Hmm - I've never experienced that - but I'm using VC 7.1
Time to exit Visual Studio and reboot. I've seen this not just with serialization.
Did so, no noticeable changes...
The other issue I found was that our antivirus software was slowing down access, in particular to files on the network. For some reason it was configured to check even source files. Other times a clean rebuild helped (in particular, deleting any .pdb files if they are used). Jeff Flinn

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
In practice I fire up my debugger on any failed test and set my VC debugger to trap on any exception - this works great for me.
IIRC, you only get to see the program state after the program has unwound to the exception handler in the test library. That's too late.
FWIW VC supports breaking on throw, i.e. you'll see the program state before any stack unwinding has happened. IIRC this can be controlled on a per-exception-type basis. Thomas -- Thomas Witt witt@acm.org

Thomas Witt <witt@acm.org> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
In practice I fire up my debugger on any failed test and set my VC debugger to trap on any exception - this works great for me.
IIRC, you only get to see the program state after the program has unwound to the exception handler in the test library. That's too late.
FWIW VC supports breaking on throw, i.e. you'll see the program state before any stack unwinding has happened. IIRC this can be controlled on a per-exception-type basis.
Yes, that works great if you happen to be running under the debugger. Sometimes a test crash is the result of difficult-to-reproduce conditions, and if you lose your opportunity to JIT it, the game is up. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
In practice I fire up my debugger on any failed test and set my VC debugger to trap on any exception - this works great for me.
IIRC, you only get to see the program state after the program has unwound to the exception handler in the test library. That's too late.
On my current VC 7.1 system (and I think it has always been this way) I can invoke the menu item Debug/Exceptions and get a dialog which permits me to select a setting that breaks into the debugger when an exception is thrown - works great for me. Robert Ramey

Actually, maybe my concerns can be addressed by going back into boost test and using the "minimal" option.
That might help, but the author is opposed to adding the facilities needed to make that viable for me. I can't turn off the crash handlers on Windows, for example, so debugging a problem in a "minimal test" application is often prohibitive.
Ok. Let's say I do this. How would your test behave during a regression test run? Hang the system? Crash? Show a dialog message? Remember: no CLA.
Hmmm - I don't think we want to invoke the debugger in the regression tests.
What happens on the tester's machine if it runs this program?
#include <cassert>
int main() { assert(0); }
If it throws up a dialog and we have no test monitor to kill it, I agree that we ought to have something in place to make sure no dialog comes up. But I was sure the test script *did* start a monitor that could kill off any hung applications (?)
What if somebody else is running regression tests and does not have such a monitor? Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
Actually, maybe my concerns can be addressed by going back into boost test and using the "minimal" option.
That might help, but the author is opposed to adding the facilities needed to make that viable for me. I can't turn off the crash handlers on Windows, for example, so debugging a problem in a "minimal test" application is often prohibitive.
Ok. Let's say I do this. How would your test behave during a regression test run? Hang the system? Crash? Show a dialog message? Remember: no CLA.
What do you mean, "no CLA?" It _is_ possible to specify command-line arguments in a Jamfile. Regardless, an environment variable would be a reasonable approach.
Hmmm - I don't think we want to invoke the debugger in the regression tests.
What happens on the tester's machine if it runs this program?
#include <cassert>
int main() { assert(0); }
If it throws up a dialog and we have no test monitor to kill it, I agree that we ought to have something in place to make sure no dialog comes up. But I was sure the test script *did* start a monitor that could kill off any hung applications (?)
What if somebody else is running regression tests and does not have such a monitor?
If we don't have a portable monitor that everyone can use, I would think we'd want some crash protection from a library. On the other hand, I have lots of tests that just use good old <cassert>, and those have never, to my knowledge, caused a problem for testers. So that tells me the only need for a monitor is for killing hung processes. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Boost Test is used by ALL libraries in boost to test themselves. For it to be effective in this role, it must be usable with any compiler that is supported by any library. This is not a suggestion or normative statement. It's just a recognition of the fact that it can't do the job it has been doing if it doesn't support older compilers.
This is not exactly true, IMO. Boost.Test needs to support only those compilers we are running regression tests on. If a library author or any other interested party wants to employ an old compiler, they will have to use an older version of Boost.Test.
So Boost Test should be structured so that it doesn't break old tests.
Boost.Test doesn't break old tests on supported compilers (IOW, those we are running regression tests on). Let's say a library author wants to support Sunpro 4.2. Does that mean Boost.Test has to comply? The fact that MSVC *was* used for regression testing some time before shouldn't make any difference, IMO.
A couple of random observations re boost test.
I appreciate that the author of Boost Test thinks it's a pain in the rear to address the "old compilers", but he is too modest in his appreciation of his own work, and I'm sure that if he knew how important it really has been, he would just say "$%%&%&*" OK, accept his lot, and keep it widely compatible.
Actually it's not that difficult to support older compilers (not counting new features), but I believe that in the long term we need a procedure to stop doing that, for the sake of code base health.
And another thing. It's damn annoying to find that all my tests suddenly fail on msvc because of a change in the test system. Oh, I'm sure it was announced somewhere, and I don't care - it's annoying nonetheless. Now what am I to do? Stop supporting msvc? Shouldn't that be my decision? Rewrite my tests to not use boost test? I don't want to do that!
Actually I believed it was kind of agreed upon: 1.34 doesn't support MSVC anymore. Is there any particular reason you want to hold on to this compiler?
Finally, I managed to get the serialization library's tests to work with Comeau by commenting out some of boost test.
unit_test_parameters.ipp
// const_string rs_str = retrieve_framework_parameter( RANDOM_SEED, argc, argv );
// s_random_seed = rs_str.is_empty() ? 0 : lexical_cast<unsigned int>( rs_str );
This apparently instantiates some basic_stream template that the serialization library also instantiates. The Comeau 4.3.3 prelinker complains about this - which I don't think it should - and the build of the serialization library fails.
You meant the build of the unit test, right?
Commenting this out permits the serialization library to be tested with Comeau.
There should be another way. I do not have access to this compiler. Maybe you could rewrite this statement somehow? Gennadiy

Gennadiy Rozental wrote:
Boost Test is used by ALL libraries in boost to test themselves. For it to be effective in this role, it must be usable with any compiler that is supported by any library. This is not a suggestion or normative statement. It's just a recognition of the fact that it can't do the job it has been doing if it doesn't support older compilers.
This is not exactly true, IMO. Boost.Test needs to support only those compilers we are running regression tests on.
That's where we disagree. Boost Test is so fundamentally important that I think it should support some level of functionality for any C++ compiler. Actually, I can't really determine the boost compatibility for a given C++ compiler until I run some of the tests - at least config. But if I can't run boost test - well. Who knows; it's conceivable that some C++ compiler has decent compliance but just happens to trip on a specific issue that Boost Test - deluxe version - trips up on. So we entirely mis-categorize this particular C++ compiler as a basket case - when in fact it might be just slightly wounded.
If a library author or any other interested party wants to employ an old compiler, they will have to use an older version of Boost.Test.
yep - and that's my complaint. The serialization library still supports msvc 6 and compilers which don't support partial specialization and compilers which don't support partial function template ordering. It also works with Borland 5.51 and 5.64. If I were writing the library today I probably wouldn't have invested the effort required to do this. But now it's done, and in my view it's a pity to throw it away. It costs me very little to maintain compatibility at this point. So I would like to continue this as long as it's not too inconvenient to do so. All I need to do this is for boost test to keep working at the level I've been using it. I don't think that's toooooooo unreasonable.
So Boost Test should be structured so that it doesn't break old tests.
Let's say a library author wants to support Sunpro 4.2. Does that mean Boost.Test has to comply? The fact that MSVC *was* used for regression testing some time before shouldn't make any difference, IMO.
LOL - well, I can't force you to make boost test support these compilers. But I've come to depend on boost test for ALL my testing. So if you can't do this - then I'm sort of stuck.
Actually I believed it was kind of agreed upon: 1.34 doesn't support MSVC anymore. Is there any particular reason you want to hold on to this compiler?
Now here is the rub. I started this thread with the proposition that it's not practical to fix a boost-wide policy as to which compilers will be supported and which will not be. The best decision will depend on the peculiarities of the library in question and its application. Ultimately it will depend upon the library author in any case. You can't even fix a policy for regression tests. Suppose a new tester jumps in with a new platform - QNX with the DMC compiler - who is going to tell him he can't test this because it's not our policy? So it's going to happen. The only question is: are we going to make it as easy as we can for him or not? I would like to see boost test provide a minimal level sufficient to work with boost regression testing. Actually I'm pretty sure it's mostly all in there - it's just not obvious from a cursory reading.
Finally, I managed to get the serialization library's tests to work with Comeau by commenting out some of boost test.
unit_test_parameters.ipp
// const_string rs_str = retrieve_framework_parameter( RANDOM_SEED, argc, argv );
// s_random_seed = rs_str.is_empty() ? 0 : lexical_cast<unsigned int>( rs_str );
This apparently instantiates some basic_stream template that the serialization library also instantiates. The Comeau 4.3.3 prelinker complains about this - which I don't think it should - and the build of the serialization library fails.
You meant the build of the unit test, right?
I link with the execution_monitor library.
Commenting this out permits the serialization library to be tested with Comeau.
There should be another way. I do not have access to this compiler. Maybe you could rewrite this statement somehow?
I don't remember how I discovered this, but once I tweaked my local copy and could run the Comeau tests I moved on. The Comeau tests weren't in the regression table; I just ran them on my own machine because I like the way that compiler works over my code. I'm not sure what the "real" fix is, and maybe it's not worth fixing for this "corner case". It was odd to me that this code got included in the library, but as I said I didn't really investigate it. This occurred after 1.32 and before 1.33. Robert Ramey

This is not exactly true, IMO. Boost.Test needs to support only those compilers we are running regression tests on.
That's where we disagree. Boost Test is so fundamentally important that I think it should support some level of functionality for any C++ compiler.
Wow! This is quite a requirement. It never was true, though the Minimal Testing facility has wide coverage. IMO, if we don't run regression tests for a particular compiler, there is no reason to expect that anything works for that configuration. How should I know whether MSVC 6.5 or gcc 2.95 is "still supported"? I do not have them anymore.
Actually, I can't really determine the boost compatibility for a given C++ compiler until I run some of the tests - at least config. But if I can't run boost test - well. Who knows; it's conceivable that some C++ compiler has decent compliance but just happens to trip on a specific issue that Boost Test - deluxe version - trips up on. So we entirely mis-categorize this particular C++ compiler as a basket case - when in fact it might be just slightly wounded.
So if some company ABC publishes a new compiler, I need to run out and make Boost.Test work on it? In reality, things are not that bad. If you want to try a compiler that is not covered by the regression tests, you could always do so (let's assume that the ..tools.jam miraculously appears if not present). Now if it fails somewhere within Boost.Test, one could come to the development list and query the status of this compiler. There are two possible responses: 1. Yes, it's a known but unsupported compiler. 2. Oh! A new compiler - let's try it out. If it is conformant enough, Boost.Test support will appear soon enough.
If a library author or any other interested party wants to employ an old compiler, they will have to use an older version of Boost.Test.
yep - and that's my complaint. The serialization library still supports msvc 6 and compilers which don't support partial specialization and compilers which don't support partial function template ordering. It also works with Borland 5.51 and 5.64. If I were writing the library today I probably wouldn't have invested the effort required to do this. But now it's done, and in my view it's a pity to throw it away. It costs me very little to maintain compatibility at this point. So I would like to continue this as long as it's not too inconvenient to do so.
Why? Just because it makes your portfolio bigger? It is indeed not that difficult to support old compilers, if ... you are not making any changes. Also, you should admit that the need for old-compiler support seriously affected your design. If you would stop trying to hold on to the past, you could enhance your design and make it clearer and simpler. Eventually the need for old-compiler support will make your own design outdated.
All I need to do this is for boost test to keep working at the level I've been using it. I don't think that's toooooooo unreasonable.
It's not, for now. It's becoming more and more burdensome as I try to move away from some "outdated" parts of the existing design. But once again, this is a secondary issue here.
So Boost Test should be structured so that it doesn't break old tests.
Let's say a library author wants to support Sunpro 4.2. Does that mean Boost.Test has to comply? The fact that MSVC *was* used for regression testing some time before shouldn't make any difference, IMO.
LOL - well I can't force you to make boost test support these compilers. But I've come to depend on boost test for ALL my testing. So if you can't do ths- then I'm sort of stuck.
So how come Sunpro 4.2 or gcc 2.91 are less "C++ compilers" than MSVC 6.5? I personally still use both at work.
Actually I believed it was kind of agreed upon: 1.34 doesn't support MSVC anymore. Is there any particular reason you want to hold on to this compiler?
Now here is the rub. I started this thread with the proposition that it's not practical to fix a boost-wide policy as to which compilers will be supported and which will not be. The best decision will depend on the peculiarities of the library in question and its application. Ultimately it will depend upon the library author in any case. You can't even fix a policy for regression tests. Suppose a new tester jumps in with a new platform - QNX with the DMC compiler - who is going to tell him he can't test this because it's not our policy? So it's going to happen. The only question is: are we going to make it as easy as we can for him or not?
I covered this in part above. I do not know how difficult this process looks from your standpoint.
I would like to see boost test provide a minimal level sufficient to work with boost regression testing.
So now boost's internal testing is limited to some very restricted minimal subset, because otherwise there is no promise that Boost.Test is able to work?
unit_test_parameters.ipp
// const_string rs_str = retrieve_framework_parameter( RANDOM_SEED, argc, argv );
// s_random_seed = rs_str.is_empty() ? 0 : lexical_cast<unsigned int>( rs_str );
[...] fix is and maybe it's not worth fixing for this "corner case". It was odd to me that this code got included in the library
Why is it odd? Requests for random test case ordering were quite popular. And I do not see anything "corner case"-like in the faulty code. Gennadiy

unit_test_parameters.ipp
// const_string rs_str = retrieve_framework_parameter( RANDOM_SEED, argc, argv );
// s_random_seed = rs_str.is_empty() ? 0 : lexical_cast<unsigned int>( rs_str );
[...] fix is and maybe it's not worth fixing for this "corner case". It was odd to me that this code got included in the library
Why is it odd? Requests for random test case ordering were quite popular. And I do not see anything "corner case"-like in the faulty code.
What I meant was that I link with test_execution_monitor. I was surprised to see this depend on unit_test_.... It's not a criticism - I just made presumptions from the names of things. The real problem is that the Comeau pre-linker complains when the same template instantiation is found in two different libraries. That's what I meant to characterise as a "special case". I just worked around this by commenting out the above code - I have no idea what the best way to fix this is, or even whether it's worth spending any time on. Robert Ramey
participants (13)
- Dave Steffen
- David Abrahams
- Felipe Magno de Almeida
- Gennadiy Rozental
- Jeff Flinn
- Jody Hagins
- Johan Nilsson
- John Maddock
- Loïc Joly
- Paul A Bristow
- Peter Dimov
- Robert Ramey
- Thomas Witt