
David Abrahams wrote:
This part of my analysis focuses on the tools available for getting feedback from the system about what's broken. Once again, because there's been substantial effort invested in Dart/CMake/CTest and interest expressed by Kitware in supporting our use of it, I'm including that along with our current mechanisms. I'll also discuss BuildBot a bit, although it is not strictly a reporting system, because Rene has been doing some research on it and it has some feedback features.
I think it is important to consider each of these tools in light of the purpose it was designed for. None of them will do the whole job, but with a good combination of them I believe very useful and robust things can be built.

Notably, I do believe that Buildbot is an invaluable tool to drive the build and test automation. It provides a good framework in which to formalize the process, and it scales very well. While it has some 'GUI' to visualize the state of the various builders, I wouldn't think of using it to display test reports.

Generating test reports shouldn't be that hard, once you have all the essential information that should figure in them. (That sounds banal, but until recently it wasn't even possible: there is still no way to figure out what revision (or source-tree timestamp) a given test run corresponds to!) Once all that information is available in a machine-parsable form, writing that last bit of code to generate a useful report (HTML or other) should be straightforward; a rough sketch follows below.

Regards,
Stefan

-- 
...I still have a suitcase in Berlin...
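P.S.: Here is a minimal sketch, in Python, of what that last bit of code might look like. The input schema (a testrun element carrying revision and timestamp attributes, with one test element per test) is purely hypothetical, made up for the example rather than taken from any existing Dart/CTest output, and the function name generate_report is likewise invented:

    #!/usr/bin/env python
    # Minimal report generator: machine-parsable results in, HTML out.
    # Hypothetical input schema, e.g.:
    #
    #   <testrun revision="12345" timestamp="2007-03-01T12:00:00Z">
    #     <test name="regex.basic" result="pass"/>
    #     <test name="python.exec" result="fail"/>
    #   </testrun>

    import sys
    import xml.etree.ElementTree as ET
    from html import escape

    def generate_report(results_path):
        run = ET.parse(results_path).getroot()
        # The crucial metadata: which source state was actually tested.
        revision = run.get("revision", "unknown")
        timestamp = run.get("timestamp", "unknown")

        rows, failed = [], 0
        for test in run.iter("test"):
            result = test.get("result", "?")
            if result != "pass":
                failed += 1
            rows.append("  <tr><td>%s</td><td>%s</td></tr>"
                        % (escape(test.get("name", "?")), escape(result)))

        head = ["<html><body>",
                "<h1>Test report for revision %s (%s)</h1>"
                % (escape(revision), escape(timestamp)),
                "<p>%d failure(s)</p>" % failed,
                "<table border='1'>",
                "  <tr><th>test</th><th>result</th></tr>"]
        return "\n".join(head + rows + ["</table>", "</body></html>"])

    if __name__ == "__main__":
        sys.stdout.write(generate_report(sys.argv[1]))

Given a results file of that shape, it writes a self-contained HTML page to stdout. The essential point is simply that the revision and timestamp travel with the results, so the report can say which source state it describes.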