running the regression tests: best practice

Hi! I am somewhat stymied by the proper test procedures for the current boost release candidate (i.e., the RC_1_34_0 branch). The documentation at http://www.boost.org/tools/regression/xsl_reports/runner/instructions.html seems simple enough. However, it lies. ;)

First of all, the links to regression.py are dead. That's fine, because there are two in the tree:

    ./tools/build/v2/test/regression.py
    ./tools/regression/xsl_reports/runner/regression.py

Great. In addition, I'm more interested in checking my local build after I've built it, not the workflow envisioned by this script (as documented).

In Boost.Build v1 there was a script, tools/regression/run_regression.sh. This is what I had been using, but it apparently does not work with v2 and has not been updated (basics are wrong in that script, starting with the location of the bjam sources).

There is even a third way, i.e. "make check", if you use the ./configure; make; make check approach (which I would like to do!). However, that rule is wrong: there is no rule for "test", and it should pass --user-config=../user-config.jam. Omitting "test" as a rule and just running bjam in the status directory runs the tests, but then I have no summary.

So, before I start hacking in my own custom "make check" rules by cannibalizing the old run_regression script, I feel as if I must ask the obvious question: how are people running the regression tests for local builds so that they get results in an easy-to-comprehend format? There are pretty results on the boost web page: is it possible for the rest of us to generate these too?

best,
-benjamin
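To make the "make check" complaint above concrete, here is a minimal sketch of that third approach, assuming ./configure writes a user-config.jam into the boost root; the workaround shown is the one Benjamin suggests, not a confirmed fix:

    ./configure
    make
    # "make check" as shipped fails: there is no "test" rule.  Running bjam
    # by hand with the generated user-config.jam does run the tests...
    cd status
    bjam --user-config=../user-config.jam
    # ...but produces no summary, which is the problem discussed below.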

Benjamin Kosnik wrote:
I am somewhat stymied by the proper test procedures for the current boost release candidate (i.e., the RC_1_34_0 branch).
The documentation: http://www.boost.org/tools/regression/xsl_reports/runner/instructions.html
Seems simple enough. However, it lies. ;)
First of all, the links to regression.py are dead.
I ran into this too. I have already corrected the links in the CVS version of this document, but not on the website. (I am not sure about the policy for changing the current website.)
In addition, I'm more interested in checking my local build after I've built it, not the workflow envisioned by this script (as documented).
In this case you might be better off going to boost_root/status and issuing "bjam toolset=<my-toolset>" from there. It is probably also a good idea to capture the output to a file for later review. regression.py is good for sharing your test results with others on the web site.

Roland
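A minimal sketch of Roland's suggestion, assuming a gcc toolset and a $BOOST_ROOT variable pointing at the RC_1_34_0 checkout (both are placeholders, substitute your own):

    cd $BOOST_ROOT/status
    # run the whole suite and capture the (very large) output for later review
    bjam toolset=gcc > bjam.log 2>&1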

In addition, I'm more interested in checking my local build after I've built it, not the workflow envisioned by this script (as documented).
In this case you might be better off going to boost_root/status and issuing "bjam toolset=<my-toolset>" from there. It is probably also a good idea to capture the output to a file for later review.
Oops, sorry: this is what I'd meant by running bjam in the status directory. As you say, it's a good idea to capture the output. However, it's 13M: I need something more concise.

So, my original issue remains: how to get the summary? In boost v1, using run_tests.sh, I got something that could be eyeballed. So, from that, I'm assuming something like:

    cd status
    bjam $bjam_flags >& regress.log
    cat regress.log | process_jam_log

then some kind of invocation of compiler_status. However, I cannot figure out which directory compiler_status is to be run in: it seems to be particular, and wants a "Jamfile". But which one? The v1 build log shows directories that don't exist in v2. :(

-benjamin
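For what it's worth, compiler_status (like process_jam_log, it lives under tools/regression and has to be built before use) appears in this era to take the boost root plus an output HTML file as arguments, which would make the run directory less critical; the invocation below is an assumption, not taken from the thread:

    # hypothetical: assumes compiler_status has been built and is on PATH;
    # arguments are the boost root, then the HTML summary file to generate
    cd $BOOST_ROOT
    compiler_status . status/cs-results.html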

Benjamin Kosnik wrote:
So, my original issue remains: how to get the summary?
Benjamin, can you try the attached script? You will need to set the compiler and directory options as usual at the head of the script. I've added an extra option to run the tests for a specific library only:

    #
    # "test_dir" is the relative path to the directory to run the tests in;
    # defaults to "status" and runs all the tests, but could be a sub-directory,
    # for example "libs/regex/test" to run the regex tests alone.
    #
    test_dir="status"

The new script works OK for me with win32/msvc builds of everything, but I get endless filesystem errors in the final processing stage if I use cygwin. This may or may not be a cygwin-specific issue.

HTH, John.
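John's attachment is not preserved in the archive; the following is a hypothetical sketch of a wrapper along the lines he describes, with test_dir wired into the bjam invocation (every variable name here is an assumption, not his actual script):

    #!/bin/sh
    toolset=gcc                   # set the compiler option as usual
    boost_root=`pwd`              # assume we start in the boost root
    #
    # "test_dir" is the relative path to the directory to run the tests in;
    # "status" runs everything, "libs/regex/test" runs the regex tests alone.
    #
    test_dir="status"

    cd "$boost_root/$test_dir" || exit 1
    bjam toolset=$toolset > "$boost_root/regress.log" 2>&1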

John Maddock wrote:
Benjamin, can you try the attached script?
Excellent. Thanks John!
The new script works OK for me with win32/msvc builds of everything, but I get endless filesystem errors in the final processing stage if I use cygwin. This may or may not be a cygwin-specific issue.
I had to do some more tweaks to your script; the script I ended up using is as attached. It turns out that process_jam_log also needs the magic "--v2" flag. Results for my preliminary boost-1.34.0 runs are here:

http://people.redhat.com/bkoz/boost-1.34.0/status/results.html

-benjamin
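Pulling the thread together, the end-to-end pipeline that emerges looks roughly like this; only the --v2 flag to process_jam_log is confirmed above, so treat the other arguments (and the assumption that both binaries are built and on PATH) as guesses:

    cd $BOOST_ROOT/status
    bjam toolset=gcc > regress.log 2>&1
    # --v2 tells process_jam_log to expect Boost.Build v2 output
    cat regress.log | process_jam_log --v2 $BOOST_ROOT
    # then generate the HTML summary (invocation assumed, as above)
    cd $BOOST_ROOT
    compiler_status . status/results.html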
participants (3):
- Benjamin Kosnik
- John Maddock
- Roland Schwarz