
David Abrahams wrote:
Boost's Systems
---------------

The major problems with our current feedback systems, AFAICT, are fragility and poor user interface.
100% agreement, and it's no fault of metacomm: the current structure just can't cope with the volume of data generated these days.
Recommendations
---------------
Our web-based regression display system needs to be redesigned and rewritten. It evolved from a time when we had far fewer libraries, platforms, and testers, and it is burdened with UI ideas that only work in that smaller context. I suggest we start with as minimal a display as we think we can get away with: the front status-reporting page should be both useful and easily grasped.
IMO the logical approach is to do this rewrite as a Trac plugin, for several reasons: the obvious opportunities to integrate test reports with other Trac functions (e.g. linking error messages to the source browser, changeset views, etc.); the Trac database can be used to maintain the kind of history of test results that Dart manages; and Trac contains a nice built-in mechanism for generating and displaying reports of all kinds. In my conversations with the Kitware guys, when we've discussed how Dart could accommodate Boost's needs, I've repeatedly pushed them in the direction of rebuilding Dart as a Trac plugin, but I don't think they "get it" yet.
I have some experience writing Trac plugins and would be willing to contribute expertise and labor in this area. However, I know that we also need some serious web-UI design, and many other people are much more skilled in that area than I am. I don't want to waste my own time doing badly what others could do well and more quickly, so I'll need help.
Just thinking out loud here, but I've always thought that our test results should be collected in a database: each test would emit an XML result file describing the outcome, which then gets logged in the database. The display application would query the database, perhaps in real time for specific queries, and present the results.

Thinking somewhat outside the box here... could SVN be conscripted for this purpose? Yes, OK, I know it's an abuse of SVN, but basically our needs are quite simple:

* Log the output from each build step and store it somewhere. With incremental builds much of this information would rarely change; in fact, even if the test is rebuilt and rerun, the chances are that the logged data won't actually change.
* Log the status of each test: pass or fail.
* Log the date and time of the last test.

So what would happen if the build logs for each test were stored in an SVN tree set aside for the purpose, with pass/fail and date/time status stored as SVN properties? Could this be automated from within bjam or CMake or whatever?

Of course, we're in serious danger of getting into the tool-writing business again here....

John.
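To make the database idea above concrete, here is a minimal sketch of the "XML result file logged into a database" pipeline. Everything specific in it is invented for illustration: the XML schema (`<test-result>` with `library`/`test`/`toolset`/`status` attributes), the table layout, and the use of SQLite are all assumptions, not an existing Boost.Build or metacomm format.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical per-test XML result file; this schema is illustrative only,
# not an existing Boost testing format.
SAMPLE = """\
<test-result library="regex" test="basic_match" toolset="gcc-4.1"
             status="pass" timestamp="2006-01-15T12:00:00Z">
  <output>compile ok; run ok</output>
</test-result>
"""

def log_result(conn, xml_text):
    """Parse one XML result file and upsert it into the results table.

    Using (library, test, toolset) as the primary key means re-running an
    unchanged test simply overwrites the old row, matching the observation
    that incrementally rebuilt tests rarely produce new data.
    """
    root = ET.fromstring(xml_text)
    conn.execute(
        """INSERT OR REPLACE INTO results
           (library, test, toolset, status, timestamp, output)
           VALUES (?, ?, ?, ?, ?, ?)""",
        (root.get("library"), root.get("test"), root.get("toolset"),
         root.get("status"), root.get("timestamp"),
         root.findtext("output", "")),
    )

conn = sqlite3.connect(":memory:")  # a real system would use a file or server DB
conn.execute("""CREATE TABLE results (
    library TEXT, test TEXT, toolset TEXT,
    status TEXT, timestamp TEXT, output TEXT,
    PRIMARY KEY (library, test, toolset))""")

log_result(conn, SAMPLE)

# The display front end would then be ordinary queries, e.g. "all failures":
failures = conn.execute(
    "SELECT library, test, toolset FROM results"
    " WHERE status = 'fail'").fetchall()
print(failures)  # → []
```

The same upsert-by-test-identity shape would apply whether the store is a relational database or the SVN-properties scheme floated above; the database variant just makes ad-hoc queries for the display pages trivial.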