
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
>>>> and on Windows it tends to stand in the way of debugging by
>>>> "handling" crashes as exceptions rather than invoking JIT or
>>>> the debugger.
>>>
>>> And as we discussed, this is just a default that can easily be
>>> changed for manual testing (for example, by defining an
>>> environment variable if you're tired of passing the command-line
>>> argument every time).
>>
>> It's just another thing to remember and manage.
>
> No need to remember or manage anything. Just set up the
> environment variable once.

Unless you don't like having your environment cluttered with settings whose purpose you can't recall. I'm *still* trying to figure out how to set up environment variables consistently across all the shells on my *nix systems. Call me incompetent if you like, but getting that worked out requires some investment.
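(For concreteness -- the exact spellings here are my assumption, so verify them against your version of the Boost.Test documentation: the default in question is governed by a runtime parameter along the lines of

    my_test --catch_system_errors=no

with an environment-variable equivalent, something like BOOST_TEST_CATCH_SYSTEM_ERRORS=no, for those who would rather set it once and forget it. The test program name my_test is made up for the example.)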
>> And then I have to manage linking with the right library
>
> Again, you either set it up once in your project file

Yes, a small thing to manage, but a thing nonetheless.

> or, even better, rely on autolinking

Autolinking is nonportable, and you have to set up _some_ kind of path so that the libraries can be found by the linker.
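(To make the trade-off concrete -- an illustrative sketch, not Boost's actual auto_link.hpp, and the library name below is invented for the example: MSVC-style autolinking amounts to the header embedding a linker directive,

    #if defined(_MSC_VER)
    // The header names the required build variant for the linker,
    // so the user never has to spell out the library name...
    # pragma comment(lib, "boost_unit_test_framework-vc71-mt-1_33.lib")
    #endif

...but it only works on compilers that honor the pragma, and the linker still needs a search path to locate the file.)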
>> and read the Boost.Test documentation to figure out which calls
>> and macros to use, etc.
>
> I am sorry: you do need to read the documentation to use a
> library. Though I believe you would learn the 2-3 most frequently
> used tools quite quickly.

Yes, a small thing to manage, but a thing nonetheless.
>> Oh, and I also have to wait for Boost.Test to build
>
> Why? You could build the library once and reuse it, or you could
> use the inlined components.

It has to build once for each toolset, and then again each time the test library changes. Yes, a small inconvenience, but an inconvenience nonetheless.
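(For reference, the "inlined components" route looks something like this -- a minimal sketch; the all-in-one header path is as I recall it from the Boost.Test documentation, so verify it against your version:

    // Compiles the whole framework into this translation unit:
    // nothing separate to build or link, at the cost of
    // recompiling the framework along with every test.
    #define BOOST_TEST_MAIN
    #include <boost/test/included/unit_test.hpp>

which is exactly the parse-and-compile wait conceded below.)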
>> before I can run my own tests,
>
> Even if you are using the inlined version, you still need to wait
> for it to be parsed and compiled. And this is true for Boost.Test
> as well as for any other tool.

Yep. BOOST_ASSERT is small and easily included.
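(What the lighter-weight alternative looks like in practice -- a minimal sketch, with a made-up test body:

    #include <boost/assert.hpp>
    #include <vector>

    int main()
    {
        std::vector<int> v;
        v.push_back( 1 );
        BOOST_ASSERT( v.size() == 1u );  // stops the program at the first failure
        BOOST_ASSERT( v.front() == 1 );
        return 0;  // reaching this point means the test passed
    }

Pass/fail is just the program's exit status, which is all a regression harness needs.)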
>> and if Boost.Test breaks I am stuck.
>
> And if Boost.<any other component you depend on> breaks, you are
> not?

I can usually fix those, or work around the problem. With Boost.Test my workaround for any problem is to fall back on BOOST_ASSERT and wonder why I bother.

> Actually, Boost.Test has been quite stable for a while now.
>> So there are lots of little pitfalls for me.
>
> It feels like some negative predisposition is speaking here.

It's not a predisposition; it's born of experience. Every time I try to use the library, thinking it's probably the right thing to do, and wanting to like it, I find myself wondering what I've gained for my investment. Until you can hear that and respond accordingly -- instead of dismissing it as the result of predisposition -- Boost.Test is going to continue to be a losing proposition for me.
>> I'm sure Boost.Test is great for some purposes, but why should I
>> use it when BOOST_ASSERT does everything I need (**)?
>
> It just means that you have very limited testing needs, from both
> construction and organization standpoints.

Maybe so; I never claimed otherwise.
> And even in such trivial cases Boost.Test would fare better:
> BOOST_ASSERT stops at the first failure (doesn't it?) -

Yeah; that's fine for me. Either the test program fails or it passes.

> BOOST_CHECK doesn't; and if an expression throws an exception,
> you need to start a debugger to figure out what is going on -
> using Boost.Test, in the majority of cases it's clear from the
> test output.

It's hard to imagine what test output could allow me to diagnose the cause of an exception. Normally, the cause is contained in the context (e.g. backtrace, etc.), and that information is lost during exception unwinding.
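(To make the continuation point concrete -- a minimal sketch, using the same two-line setup quoted below; the failing checks are invented:

    #define BOOST_TEST_MAIN
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE( arithmetic )
    {
        BOOST_CHECK( 2 + 2 == 4 );  // passes
        BOOST_CHECK( 2 + 2 == 5 );  // fails and is reported, but the run continues
        BOOST_CHECK( 2 * 2 == 4 );  // still gets checked
    }

A single run reports every failed expectation, where BOOST_ASSERT would have stopped at the first.)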
> And I am not even talking about the other, much more convenient
> tools available.

>> It seems like a lot of little hassles for no particular gain,
>
> I think it's subjective at best.

Of course it is subjective.
>> and I think that's true for 99% of all Boost regression tests.
>
> And I think you are seriously mistaken.

That may be so. Maybe you should point me at some Boost regression tests that benefit heavily from Boost.Test so I can get a feeling for how it is used effectively.
>> I'd actually love to be convinced otherwise, but I've tried to
>> use it, and it hasn't ever been my experience that it gave me
>> something I couldn't get from lighter-weight facilities.
>
> Boost.Test was enhanced significantly in the last two releases
> from a usability standpoint. Would you care to take another look?

I have used it in the past 6 months. It didn't seem to buy me much. Admittedly, my testing needs were not complicated, but that seems to be the case much of the time.
>> It's really important that the barrier to entry for testing be
>> very low; you want to make sure there are no disincentives.
>
> With the latest Boost.Test, all you need to start is:
>
>   #define BOOST_TEST_MAIN
>   #include <boost/test/unit_test.hpp>
>
>   BOOST_AUTO_TEST_CASE( t )
>   {
>       // here you go:
>   }
>
> Is this a high barrier?

It depends. Do I have to link with another library? If so, then add the lines of the Jamfile (and Jamfile.v2) to what I need to start with. What about allowing JIT debugging? Will this trap all my failures, or can I get it to launch a debugger?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com