
Jeff Garland wrote:
> Jason Sankey wrote:
>> John Maddock wrote:
>>> Jason Sankey wrote:
>>>> Hi all,
>>>> You may have noticed the discussion going on re: developer feedback
>>>> systems. As part of this I have just started setting up a demo build
>>>> server at: <snip>
>>>> 1) If you think this is useful and worth continuing with.
>>> Definitely.
>> Thanks for the vote of confidence :).
> +1 -- this looks very cool.
Excellent!
>>>> 2) What important features you think are currently missing.
>>> I found navigation a tad tricky at first: it would be useful if I could
>>> get to "all the failures for library X" in the last build directly and
>>> easily. As it is I had to drill down into the detail report and then
>>> scroll through until I found what I was after. Or maybe I'm missing
>>> something obvious? :-)
>> You are not missing anything; the drilling is necessary at the moment.
>> This is largely due to the structure of Boost, where all libraries are
>> built in one hit. Hence they are all currently in one Pulse project, and
>> the result set is large. One possible solution is a view of tests
>> organised per suite (in Pulse terminology, each library is grouped into
>> one suite) rather than the current view, which is organised by stage
>> (another Pulse term for a single run on a specific agent). It is also
>> possible to capture HTML reports and serve them up via Pulse, so the
>> build process itself could generate different types of reports as
>> necessary. Pulse supports permalinks to such artifacts, so you could
>> always find the latest report at a known URL.
> Maybe there's a way the tests could be broken down library by library?
> In case you don't already know, you can test a single library by going to
> its subdirectory (eg: libs/date_time/test) and running bjam. I expect it
> might take some scripting, but you're obviously doing some to set all
> this up anyway. That way, each library could show up as a 'success/fail'
> line in the table, and we'd also get a measurement of how long it took to
> build/run the tests for each library. This would be huge...
It had occurred to me that this may be a better approach, although I have
not yet experimented with it. One thing I am uncertain about is the
dependencies between libraries and what they would mean for building each
library separately. Overall I think breaking it down this way would be
much better, provided the dependencies are manageable.
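For anyone who wants to try Jeff's per-library suggestion by hand, a
minimal sketch (assuming a Boost source checkout with bjam already built
and on the PATH; the date_time path is just the example from above):

```shell
# Build and run the tests for a single Boost library (date_time here),
# rather than the whole tree.
cd boost/libs/date_time/test   # any library's test subdirectory works
bjam                           # builds and runs just this library's tests
```

A per-library Pulse recipe could presumably be little more than this pair
of commands, one recipe per libs/*/test directory, dependencies permitting.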
>>> Can you explain more about the "My Builds" page and what it offers, as
>>> it looks interesting?
>> This page shows you the results of your "personal" builds. Personal
>> builds are a way for you to test outstanding changes before you commit
>> them to subversion. Basically, you install a Pulse command line client
>> ...snip..
>> This is a feature that really comes into its own when you need to test
>> on multiple environments. Developers usually don't have an easy way to
>> test all environments, so they would typically just commit and later
>> find they have broken some other platform. IMO it would be a great tool
>> for Boost developers, but there would be challenges:
>>
>> 1) Having enough machines available to cope with the load of both
>>    regular and personal builds.
>> 2) The long build time. This could be mitigated if just the library of
>>    interest were built, which is possible if we configure recipes in
>>    Pulse for each library.
> We've been in need of such a mechanism for a very long time. We're
> trying to move to a place where developers only check in 'release-ready'
> code -- even though most of us have been trying to do that, the reality
> is that with so many platforms it's really easy to break something.
> We're also getting more and more libs that provide a layer over OS
> services -- so we need more platforms to support the developers trying
> to port those libs. Bottom line: an 'on-demand' ability to run a test on
> a particular platform with developer code, before it gets checked in,
> would be a major breakthrough for boost. Ideally we'd be able to do this
> for a single library -- that might also help with the resources.
Right, this is exactly what I am talking about. In my experience, without
this facility the less popular platforms end up constantly broken, which
is very frustrating for the people working on those platforms.

Cheers,
Jason