
On Wed Sep 12 2007, Jason Sankey <jason-AT-zutubi.com> wrote:
> OK. I am underway, having added boost jam support and looking at the best way to gather and integrate the test results.
Fantastic!
> I was hoping that there would be a way to run a full build with one or more XML test reports generated (since I know that Boost.Test supports XML reporting).
We can generate XML test reports, but it's not done by Boost.Test; it's done by process_jam_log.py.
> Looking more closely I see that the current regression testing process uses the normal test report format,
What "normal test report format" are you referring to?
> which I can also integrate if generating XML proves difficult. I'm still examining the current build process to try and understand the best way to do it, but should be up and running soon.
> As an aside, the output from boost jam is somewhat hostile to post-processing. One feature of Pulse is the ability to pull interesting information like warnings and errors out of your build log so we can summarise and highlight them in the UI. With tools like make and GCC this is fairly easy, as they have a predictable and uniform error message output. Playing with boost jam I notice that the error messages are quite diverse and hard to predict. Although I added post-processing rules for the errors I found, it might be worth looking into making the output more machine-friendly - not just for my sake, but for any tools that might want to process the output.
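For illustration, the kind of log post-processing described above might look like the sketch below. The patterns are hypothetical examples only (GCC-style diagnostics plus a couple of shapes bjam itself emits, such as "...failed" action lines); they are not Pulse's actual rules, and the diversity of boost jam's error messages is exactly why a fixed pattern list like this is fragile:

```python
import re

# Assumed, illustrative patterns -- real bjam output is diverse, so this
# list covers only a few common shapes, not everything a build can emit.
PATTERNS = [
    ("warning", re.compile(r":\d+(:\d+)?:\s*warning:")),  # GCC-style warning
    ("error",   re.compile(r":\d+(:\d+)?:\s*error:")),    # GCC-style error
    ("error",   re.compile(r"^\.\.\.failed ")),           # bjam failed action line
]

def extract_features(log_text):
    """Return (severity, line) pairs for interesting lines in a build log."""
    features = []
    for line in log_text.splitlines():
        for severity, pattern in PATTERNS:
            if pattern.search(line):
                features.append((severity, line))
                break  # first matching rule wins for a given line
    return features

# A made-up fragment of build output for demonstration.
sample_log = """\
...found 1 target...
foo.cpp:12: warning: unused variable 'x'
foo.cpp:20: error: 'bar' was not declared in this scope
...failed gcc.compile.c++ bin/foo.o...
"""

for severity, line in extract_features(sample_log):
    print(severity, "|", line)
```

A more uniform, machine-friendly output format from boost jam would let a single anchored pattern replace this open-ended list.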
Rene, IIUC you already have the results capture facility implemented that would allow this?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com