
Steven Watanabe wrote:
AMDG
Robert Ramey wrote:
I feel like I'm missing something really dumb, as I can't figure out how other authors run all the tests in their particular libraries prior to checking in,
I personally use bjam directly.
The serialization library testing has been referred to as a "carpet bombing" approach. A full run on my local machine covers three compilers (gcc 4.3.2, msvc 7.1, msvc 9.0), two builds (debug and release), and two flavors (static and dynamic lib). There are about 60 tests, and about 40 of them are run against each kind of archive class (text_, xml_, binary_, text_w and xml_w), so the total is approximately 3 * 2 * 2 * (20 + 5 * 40) = 2640 test results.

So I just let it run - and the next morning I am rewarded with a really nice giant table of 3*2*2 columns and 20 + 5*40 rows. It's hard to describe the satisfaction that derives from scrolling all over it. I check the table and click on the red failures, which is much easier than examining the bjam logs and then finding the test results directory. When I rerun just some of the tests, the table is rebuilt. This process continues until my next "oeuvre" is ready to check in.
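For reference, a run like that can be kicked off with bjam directly from the library's test directory. This is only a sketch; the path and toolset names below are assumptions based on the versions Robert mentions, and have to match whatever is configured in user-config.jam:

    # build and run the serialization tests over the whole matrix:
    # 3 compilers x 2 variants x 2 link modes
    cd boost-root/libs/serialization/test
    bjam toolset=gcc,msvc-7.1,msvc-9.0 variant=debug,release link=static,shared

As far as I know, the red/green HTML table Robert describes is produced as a separate step (for example with the library_status program under tools/regression), not by the bjam run itself.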
or how users verify that any particular library works in their environment without some sort of tool such as this.
I would guess that most users don't run the tests.
lol - of course I knew that. They build their application, and when it doesn't work they query the Boost users' list. It would make helping users on the list easier if I knew that the library does in fact build and test as expected before they even start to ask the question.

Robert Ramey
Maybe they just do them by hand one by one? Or maybe they're just adding to their app without running the tests? Or? It's a mystery to me.
In Christ, Steven Watanabe