I am using the unit testing framework with the BOOST_AUTO_UNIT_TEST macros. While debugging, is there a way to run a single test instead of running them all every time? This is probably a FAQ, but I couldn't find it.

Thanks,
Daniel
Gennadiy Rozental wrote:
I am using the unit testing framework with the BOOST_AUTO_UNIT_TEST macros. While debugging, is there a way to run a single test instead of running them all every time?
Not at the moment. Next version may include this ability (run by name).
If you add it, it would be nice to include the Boost.Build equivalent of --dump-tests, so that one can see all the available tests without running them.

-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
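For what it's worth, later Boost.Test releases expose the registered test tree programmatically, so a rough equivalent of --dump-tests can be built into the test module itself. A minimal sketch, assuming the traverse_test_tree / test_tree_visitor facilities are available in your Boost version (you may need extra headers from boost/test/tree/ on newer releases); the name_lister class and dump_test_names() helper are made up for illustration:

```cpp
// Sketch only: test_tree_visitor and traverse_test_tree are Boost.Test
// facilities in later releases; check your version. The visitor class name
// and the dump_test_names() helper are made up.
#include <boost/test/unit_test.hpp>
#include <iostream>

struct name_lister : boost::unit_test::test_tree_visitor {
    void visit(boost::unit_test::test_case const& tc)
    {
        std::cout << tc.p_name.get() << '\n';   // print each registered test case name
    }
};

// Call this from a custom initialization function to list the registered
// test cases without running them.
void dump_test_names()
{
    name_lister lister;
    boost::unit_test::traverse_test_tree(
        boost::unit_test::framework::master_test_suite(), lister);
}
```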
AFAICT, there is also no way to pass configuration data into Boost.Test either. For example, some unit tests might need the name of an account to run under, or the name of an existing share, or some such site-specific configuration. I'd like the ability to pass in a set of name-value pairs to rectify that. Maybe --configfile <xmlfile>.
Rob.
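A minimal sketch of one workaround for getting site-specific name=value pairs into test cases, assuming the master test suite's argc/argv members available in later Boost.Test releases; the site_config() helper, the name=value convention, and the "account" key are all made up, and headers or macro spellings may need adjusting for your Boost version:

```cpp
// Sketch of one workaround, assuming the master test suite exposes leftover
// command-line arguments (framework::master_test_suite().argc / .argv in
// later Boost.Test releases). site_config() and the name=value convention
// are made up.
#include <boost/test/auto_unit_test.hpp>
#include <cstring>
#include <map>
#include <string>

namespace {

// Collect trailing "name=value" arguments into a map the test cases can read.
std::map<std::string, std::string> site_config()
{
    std::map<std::string, std::string> cfg;
    boost::unit_test::master_test_suite_t& mts =
        boost::unit_test::framework::master_test_suite();
    for (int i = 1; i < mts.argc; ++i) {
        char* eq = std::strchr(mts.argv[i], '=');
        if (eq)
            cfg[std::string(mts.argv[i], eq)] = std::string(eq + 1);
    }
    return cfg;
}

} // namespace

BOOST_AUTO_UNIT_TEST(uses_site_specific_account)
{
    std::map<std::string, std::string> cfg = site_config();
    // e.g. run as: unit_test.exe -- account=testuser share=\\server\public
    BOOST_REQUIRE(cfg.count("account"));
    // ... exercise the code that needs cfg["account"] here ...
}
```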
"Rene Rivera"
Gennadiy Rozental wrote:
I am using the unit testing framework with the BOOST_AUTO_UNIT_TEST macros. While debugging, is there a way to run a single test instead of running them all every time?
Not at the moment. Next version may include this ability (run by name).
If you add it, it would be nice to include the Boost.Build equivalent of --dump-tests, so that one can see all the available tests without running them.
"Robert Mathews"
AFAICT, there is also no way to pass configuration data into Boost.Test either. For example, some unit tests might need the name of an account to run under, or the name of an existing share, or some such site-specific configuration. I'd like the ability to pass in a set of name-value pairs to rectify that. Maybe --configfile <xmlfile>.
Rob.
Don't hold your breath, but I may be able to present something like this in the near future.

Gennadiy
That'd be nice. I'm looking at using boost.test for some of my
application-specific unit tests, but these kinds of practical issues get in
the way, to wit:
- passing test configuration
- listing the tests contained in the unit test (so that I can compare the
tests run against the total number of tests available - hard to know how
much coverage you're getting if you don't know what's in the .exe)
- "expected failure" feature
- the ability to generate a better error message - i.e., include a bit of text about what the test was about (see the sketch after this list).
Cheers,
Rob.
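On the last point, Boost.Test already lets you attach descriptive text to an assertion. A minimal sketch using BOOST_CHECK_MESSAGE and BOOST_MESSAGE (spelled BOOST_TEST_MESSAGE in newer releases); share_exists() and the share name are hypothetical stand-ins:

```cpp
// Sketch: BOOST_CHECK_MESSAGE reports the supplied description instead of the
// raw expression, and BOOST_MESSAGE writes a standalone note to the test log
// (newer releases spell it BOOST_TEST_MESSAGE). share_exists() is a
// hypothetical stand-in for site-specific code under test.
#include <boost/test/auto_unit_test.hpp>

bool share_exists(const char* /*name*/) { return true; }   // hypothetical stand-in

BOOST_AUTO_UNIT_TEST(site_share_is_reachable)
{
    BOOST_MESSAGE("Checking that the configured share is reachable");
    BOOST_CHECK_MESSAGE(share_exists("\\\\server\\public"),
                        "the site-specific share \\\\server\\public should exist");
}
```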
"Gennadiy Rozental"
"Robert Mathews"
wrote in message news:d2u5g9$aiv$1@sea.gmane.org...
AFAICT, there is also no way to pass configuration data into Boost.Test either. For example, some unit tests might need the name of an account to run under, or the name of an existing share, or some such site-specific configuration. I'd like the ability to pass in a set of name-value pairs to rectify that. Maybe --configfile <xmlfile>.
Rob.
Don't hold your breath, but I may be able to present something like this in the near future.
Gennadiy
Robert Mathews writes: [...]
- "expected failure" feature
Cheers, Rob.
Expected failures look like failed unit tests where the number of failed assertions is _less_ than expected. I use a Perl script to look for this sort of thing. Personally, I'd be inclined not to call these "failures", but that's just me.

I've found that Perl (or whatever) scripts to grab the output of test suites and do something reasonable with it are necessary. I suspect that I haven't really grokked the _intent_ behind the unit test library.

----------------------------------------------------------------------
Dave Steffen, Ph.D.        "Irrationality is the square root of all evil"
Numerica Corporation                             -- Douglas Hofstadter
Software Engineer IV       "Oppernockity tunes but once." -- anon.
"Dave Steffen"
Robert Mathews writes: [...]
- "expected failure" feature
Cheers, Rob.
Expected failures look like failed unit tests where the number of failed assertions is _less_ than expected. I use a Perl script to look for this sort of thing. Personally, I'd be inclined not to call these "failures", but that's just me.
But, how would you know in a generic way that the number of failed assertions is _less_ than expected? I'd like a facility whereby I could ask the .exe that question.
I've found that Perl (or whatever) scripts to grab the output of test suites and do something reasonable with it are necessary. I suspect that I haven't really grokked the _intent_ behind the unit test library.
Funny you would say that ... I currently work on a test harness infrastructure of some 4000 tests written in Perl. I'm looking at the Boost.Test stuff from the POV of having more unit-level tests for the individual libraries (most of the current stuff tests how the system works at an application level, so regressions in individual libraries tend to show up as incredibly obscure issues, if they show up at all!). I'd like to have those tests written in C++/STL/Boost (I'm really, really sick of Perl). Still, the reality is I'd probably wrap these Boost.Test unit test programs in a standard Perl wrapper so that they would fit into our current distributed test harness infrastructure. To do this, I need a way to query the test program about what tests it might run if asked, and a way to pass configuration to those tests.
Robert Mathews writes:
"Dave Steffen"
wrote in message Expected failures look like failed unit tests where the number of failed assertions is _less_ than expected. I use a Perl script to look for this sort of thing. Personally, I'd be inclined not to call these "failures", but that's just me.
But, how would you know in a generic way that the number of failed assertions is _less_ than expected? I'd like a facility whereby I could ask the .exe that question.
Well, that's why I use a Perl script. If you turn the various output levels up high enough, you get (at the end of the test suite output) a unit-test by unit-test summary of each test, how many assertions passed, how many failed, and how many were expected to fail; that's what I use Perl to chop up.
I've found that Perl (or whatever) scripts to grab the output of test suites and do something reasonable with it are necessary. I suspect that I haven't really grokked the _intent_ behind the unit test library.
Funny you would say that ... I currently work on a test harness infrastructure of some 4000 tests written in Perl. I'm looking at the Boost.Test stuff from the POV of having more unit-level tests for the individual libraries (most of the current stuff tests how the system works at an application level, so regressions in individual libraries tend to show up as incredibly obscure issues, if they show up at all!). I'd like to have those tests written in C++/STL/Boost (I'm really, really sick of Perl). Still, the reality is I'd probably wrap these Boost.Test unit test programs in a standard Perl wrapper so that they would fit into our current distributed test harness infrastructure. To do this, I need a way to query the test program about what tests it might run if asked, and a way to pass configuration to those tests.
On the one hand, I'm wondering if it wouldn't be possible to arrange for the test suite to supply the user with all this info via some API - say, a map of test names to test result structures, or some such - at which point you could arrange the output to look like whatever you want, all within the unit test suite.

On the other hand, for some unit tests it's hard to avoid having various things output to the console. For example, I've got some unit tests that test our error handling code, including the bit that dumps messages to stderr and quits. Unless I build into our error handling code some way to redirect these error messages (and I don't really want to do this), this stuff is going to end up in the unit test's output, and I don't see any way around that. Thus, my current thinking: the output from running the unit test suite is A) saved and compared with 'canonical' output, and B) parsed for the information I want.

I'm sure there's a way to get into the library code and get control of what it does on a lower level. If there isn't, Gennadiy can probably arrange for there to be. :-)
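Purely as an illustration of the kind of in-process API being described here, a hypothetical sketch; none of these types exist in Boost.Test:

```cpp
// Purely hypothetical sketch of the "map of test names to result structures"
// idea; none of these types exist in Boost.Test.
#include <map>
#include <string>

struct test_result {
    unsigned assertions_passed;
    unsigned assertions_failed;
    unsigned failures_expected;
    bool passed() const { return assertions_failed <= failures_expected; }
};

typedef std::map<std::string, test_result> result_map;

// A wrapper harness handed a result_map could then format the results however
// it likes: XML, a Perl-friendly line per test, an overall summary, etc.
```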
Well, that's why I use a Perl script. If you turn the various output levels up high enough, you get (at the end of the test suite output) a unit-test by unit-test summary of each test, how many assertions passed, how many failed, and how many were expected to fail; that's what I use Perl to chop up.
I think XML output would be easier to "chop up", especially if you could employ an existing XML parser.
On the one hand, I'm wondering if it wouldn't be possible to arrange for the test suite to supply the user with all this info via some API - say, a map of test names to test result structures, or some such - at which point you could arrange the output to look like whatever you want, all within the unit test suite.
You do have the ability to write custom log and report formatters.
On the other hand, for some unit tests it's hard to avoid having various things output to the console. For example, I've got some unit tests that test our error handling code, including the bit that dumps messages to stderr and quits.
I recommend using output_test_stream for testing output operations.
Unless I build into our error handling code some way to redirect these error messages (and I don't really want to do this), this stuff is going to end up in the unit test's output, and I don't see any way around that. Thus, my current thinking: the output from running the unit test suite is A) saved and compared with 'canonical' output,
Again, output_test_stream has the ability to match against a pattern file.
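A minimal sketch of the output_test_stream usage being suggested; the class and its is_equal()/match_pattern() checks are real Boost.Test facilities, while log_error() and the pattern file name are placeholders (newer releases ship the header as boost/test/tools/output_test_stream.hpp):

```cpp
// Sketch: stream the code under test into an output_test_stream and check the
// result against an inline string or a saved pattern file. log_error() and
// "error_messages.pattern" are placeholders; adjust headers and the
// BOOST_AUTO_UNIT_TEST spelling to your Boost.Test version.
#include <boost/test/auto_unit_test.hpp>
#include <boost/test/output_test_stream.hpp>
#include <ostream>

// Hypothetical stand-in for the real error-reporting code under test.
void log_error(std::ostream& os, int code)
{
    os << "error " << code << ": something went wrong\n";
}

BOOST_AUTO_UNIT_TEST(error_messages_match_pattern)
{
    using boost::test_tools::output_test_stream;

    // Inline comparison:
    output_test_stream inline_output;
    log_error(inline_output, 42);
    BOOST_CHECK(inline_output.is_equal("error 42: something went wrong\n"));

    // Comparison against a saved "canonical" pattern file:
    output_test_stream pattern_output("error_messages.pattern", true /* match, don't save */);
    log_error(pattern_output, 42);
    BOOST_CHECK(pattern_output.match_pattern());
}
```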
and B) parsed for the information I want.
I'm sure there's a way to get into the library code and get control of what it does on a lower level. If there isn't, Gennadiy can probably arrange for there to be. :-)
I think you already have almost everything you need. The rest (configuration file support, for example) is coming.

Gennadiy
Expected failures look like failed unit tests where the number of failed assertions is _less_ than expected.
Boost.Test "Expected failures" feature allows developer to specify that specific test case supposed to have this number of failures (IOW developer knows about the issue and doesn't want test case failure to be reported for now). Any other number of assertion failures (more or less) cause test case to fail. Be aware though: unless you are using TDD practice of one assertion per test case it could be quite dangerous to use this feature on permanent basis. Consider what will happened when after your change once assertion that supposed to fail is not failing anymore, while another one that shouldn't does? Most probably you will never notice that (since the test case will pass). So use it with caution ant preferably for temporary purposes. Gennadiy
Hello.

I have just been looking recently at the subgraph feature of the BGL. If I understand correctly, in order to use it you *must* first create a root subgraph; you can't use an existing Graph as the root and call create_subgraph on that adjacency_list, for example.

It also seems like when you create a subgraph, it is somehow stored on the root subgraph, since create_subgraph() has a return type of subgraph<Graph>&. As it returns a reference, I assume that the actual object is somehow stored on the root. But I need to create and delete lots of subgraphs on a given root graph. Creating a subgraph is no problem; however, how can I delete them? I could just leave them, of course, but that's an obvious waste of resources...

Is there an answer to my question? If possible, I'd also like to learn a little bit more about the subgraph implementation (all I can suppose from the documentation is that it is stored in some kind of tree data structure).

Last, could someone announce when bundled properties will be available for subgraph, either in the CVS or in a minor revision to the BGL? (I think someone suggested that idea in order to fix problematic bugs like the remove_edge failure...) I absolutely need the subgraph implementation of bundled properties soon, so as soon as it's in the CVS I'll have to fetch it.

Thank you,
Jean-Noël
On Apr 6, 2005, at 12:31 AM, Elvanör wrote:
Hello.
I have just been looking recently at the subgraph feature of the BGL. If I understand correctly, in order to use it you *must* first create a root subgraph; you can't use an existing Graph as the root and call create_subgraph on that adjacency_list, for example.
It also seems like when you create a subgraph, it is somehow stored on the root subgraph, since create_subgraph() has a return type of subgraph<Graph>&.
As it returns a reference, I assume that the actual object is somehow stored on the root. But I need to create and delete lots of subgraphs on a given root graph. Creating a subgraph is no problem; however, how can I delete them? I could just leave them, of course, but that's an obvious waste of resources...
The subgraph is stored on the root, so that when edges are added/removed anywhere in the graph the changes can ripple throughout the graph structure. However, there's no support for deleting subgraphs.
Is there an answer to my question? If possible, I'd also like to learn a little bit more about the subgraph implementation (all I can suppose from the documentation is that it is stored in some kind of tree data structure).
The only good answers are in the implementation itself :) Doug
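For context, a minimal sketch of the subgraph usage being discussed, based on the documented BGL interface; the graph typedef and vertex numbers are arbitrary:

```cpp
// Sketch based on the documented BGL interface: the root must itself be a
// subgraph<>, and children created with create_subgraph() are stored on it.
// The graph typedef and vertex numbers are arbitrary.
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/subgraph.hpp>

int main()
{
    using namespace boost;

    // subgraph<> requires an internal edge_index property on the underlying graph.
    typedef subgraph< adjacency_list<vecS, vecS, directedS,
                                     no_property,
                                     property<edge_index_t, int> > > Graph;

    Graph root(3);                        // root graph with global vertices 0, 1, 2
    Graph& sub = root.create_subgraph();  // child subgraph, stored on the root

    add_vertex(0, sub);                   // induce global vertices 0 and 1 into the child
    add_vertex(1, sub);

    add_edge(0, 1, sub);                  // added to the child and propagated to the root
    add_edge(1, 2, root);                 // an edge that exists only in the root

    return 0;
}
```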
Gennadiy Rozental writes:
Expected failures look like failed unit tests where the number of failed assertions is _less_ than expected.
Boost.Test "Expected failures" feature allows developer to specify that specific test case supposed to have this number of failures (IOW developer knows about the issue and doesn't want test case failure to be reported for now). Any other number of assertion failures (more or less) cause test case to fail. Be aware though: unless you are using TDD practice of one assertion per test case it could be quite dangerous to use this feature on permanent basis.
Yes, that's more-or-less what I had in mind.
Consider what happens when, after a change, one assertion that is supposed to fail no longer fails, while another one that shouldn't fail does. Most probably you will never notice (since the test case will still pass). So use it with caution, and preferably only for temporary purposes.
Precisely. I suppose maybe we could state on an assertion-by-assertion basis which ones are expected to fail, but I personally think that's overkill. :-)

I haven't really decided how to use the Boost test library yet, and am very open to suggestions. Also see my reply to Robert Mathews. And, of course, thanks for the library!
"Robert Mathews"
That'd be nice. I'm looking at using boost.test for some of my application-specific unit tests, but these kinds of practical issues get in the way, to wit: - passing test configuration
Will you be willing to comply with the Boost.Test format, or adapt your format to Boost.Test interfaces?
- listing the tests contained in the unit test (so that I can compare the tests run against the total number of tests available - hard to know how much coverage you're getting if you don't know what's in the .exe)
I am not sure how a list of tests helps you here: are you planning to run test cases by name and want to see what percentage of tests you ran?
- "expected failure" feature
Boost.Test does provide an "expected failures" feature for explicit test case registration.
- the ability to generate a better error message - i.e., include a bit of text about what the test was about.
Why doesn't BOOST_..._MESSAGE work for you? You could also use BOOST_MESSAGE for a standalone message in the test log.
Cheers, Rob.
Gennadiy
"Gennadiy Rozental"wrote in message news:d2v6jg$30b$1@sea.gmane.org... > > "Robert Mathews" wrote in message > news:d2v27d$m9p$1@sea.gmane.org... > > That'd be nice. I'm looking at using boost.test for some of my > > application-specific unit tests, but these kinds of practical issues get > > in > > the way, to wit: > > - passing test configuration > > Will you be willing to comply to Boost.Test format or adopt your format to > Boost.Test interfaces? Sure ... ? I mean, I didn't see the part of the documentation that specified what Boost.Test format was. Perhaps you could be so kind as to point it out. Thx. > > > - listing the tests contained in the unit test (so that I can compare the > > tests run against the total number of tests available - hard to know how > > much coverage you're getting if you don't know what's in the .exe) > > I am not sure how list of tests help you here: are you planning to run test > cases by name and want to see what persentage of tests you run? Two reasons: 1) I need to collect a report of which tests worked, which failed, and which succeeded. Percentage is only useful if 100% pass, otherwise there has to be further investigation. Typically our team collects the entire list of failures and divvies them up every day, so that the failure reports need to list the names of the test cases that failed. 2) We need to inventory the testcases so that we know how many there are and what they do, in order to assess coverage and compare that to the test plan. The gap between the test plan and the current list of testcases is one measure of how far we have left - sort of a measure of the known unknowns. 3) Being able to list which tests are available would support our current test infrastructure, which allows you to request a build and then to run a particular test. > > - "expected failure" feature > > Boost.Test does provide "expected failure" feature for explicit test case > registration > > > - the ability to generate a better error message - ie, include a bit of > > text > > about what the test was about. > > Why BOOST_..._MESSAGE doesn't work for you? You could also use BOOST_MESSAGE > for standalone mesage in test log Because I'm stupid? I guess I looked at the parameters of BOOST_TEST and thought that everything about the reporting the results of the unit test should be right there. Currently, it looks to me like a message for reporting an error is formatted to look like a compile error, which is convenient if you happen to be running the ad-hoc tests from emacs, but not particular convenient for generating an overall report of successes or failures. > > > Cheers, > > Rob. > > Gennadiy
>> Will you be willing to comply to Boost.Test format or adopt your format >> to >> Boost.Test interfaces? > > Sure ... ? I mean, I didn't see the part of the documentation that > specified > what Boost.Test format was. Perhaps you could be so kind as to point it > out. > Thx. It's not there yet. I just asking to make sure that what I am doing will meet your needs. >> I am not sure how list of tests help you here: are you planning to run > test >> cases by name and want to see what persentage of tests you run? > Two reasons: > 1) I need to collect a report of which tests worked, which failed, and > which > succeeded. Percentage is only useful if 100% pass, otherwise there has to > be > further investigation. Typically our team collects the entire list of > failures and divvies them up every day, so that the failure reports need > to > list the names of the test cases that failed. Run a test program (all test cases) - report will show all test cases that fails (with names) > 2) We need to inventory the testcases so that we know how many there are > and > what they do, in order to assess coverage and compare that to the test > plan. > The gap between the test plan and the current list of testcases is one > measure of how far we have left - sort of a measure of the known unknowns. Set a report level to "detailed". Report will show you every test case with it's status (passes/failed) Note that you could specify report format as XML and use some automation to analize the test report. Also you could write and register your own report formatter. > 3) Being able to list which tests are available would support our current > test infrastructure, which allows you to request a build and then to run a > particular test. That's the only valid point from what I see. >> > - the ability to generate a better error message - ie, include a bit of >> > text >> > about what the test was about. >> >> Why BOOST_..._MESSAGE doesn't work for you? You could also use > BOOST_MESSAGE >> for standalone mesage in test log > > Because I'm stupid? I guess I looked at the parameters of BOOST_TEST and > thought that everything about the reporting the results of the unit test > should be right there. Currently, it looks to me like a message for > reporting an error is formatted to look like a compile error, which is > convenient if you happen to be running the ad-hoc tests from emacs, but > not > particular convenient for generating an overall report of successes or > failures. 1. You could specify XML as an log format and use some automation to get any information you need 2. You could write and register your own log formatter. HTH, Gennadiy
participants (7)
- Daniel van der Zee
- Dave Steffen
- Doug Gregor
- Elvanör
- Gennadiy Rozental
- Rene Rivera
- Robert Mathews