
Jeff Garland wrote:
On Sun, 20 Mar 2005 13:53:48 -0600, Rene Rivera wrote
Jeff Garland wrote:
Do we even have a way of tracking the check-ins? That might be a good first step. I notice that SourceForge seems to be generating some sort of email when I check in, but I don't know of a way to subscribe to the changelist.
Yes we do. Dave and I, long ago, set up those emails SF sends so we could get Buildbot to work. So if we track CVS changes to the individual builds, we can tell who and what breaks a build. Even though I'm still working on the buildbot, here's a sneak peek: http://build.redshift-software.com:9990/
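For readers unfamiliar with the mechanism being described: Buildbot can consume those SourceForge commit emails directly by watching a maildir and parsing each message into a change event. The following is only an illustrative sketch using the modern Buildbot configuration API (which differs from the 2005-era version discussed in this thread); the maildir path and builder name are hypothetical.

```python
# Hypothetical master.cfg fragment: turn CVS commit emails into build triggers.
# Paths and builder names are made up for illustration.
from buildbot.plugins import changes, schedulers, util

c = BuildmasterConfig = {}

# Watch a local maildir where the CVS commit emails are delivered;
# CVSMaildirSource parses each mail into a Change (author, files, comments),
# which is how "who and what breaks a build" gets recorded.
c['change_source'] = [changes.CVSMaildirSource('/var/spool/buildbot-maildir')]

# Trigger the gcc-release builder shortly after a commit, batching rapid
# check-ins together with a short "tree stable" quiet period.
c['schedulers'] = [
    schedulers.SingleBranchScheduler(
        name='boost-cvs',
        change_filter=util.ChangeFilter(branch=None),
        treeStableTimer=5 * 60,
        builderNames=['linux-gcc-release'],
    ),
]
```

Because every build is tied to the change that triggered it, the web status page can show which check-in broke which builder.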
Very cool! No doubt buildbot will be a great asset -- when do you think it will be ready to 'go production'?
My goal is to have it doing Linux regressions (gcc-release) by next week. I'm taking it carefully, and hence slowly, as it's crucial to reduce the chances of breaking any test system. So I make some changes and let the thing run for a day to see if anything strange happens. After it's running on my limited setup we can talk about expanding to other brave testers out there :-)
Some testers run a single configuration; others run several. So depending on when you check in, it takes up to a couple of days to really see the results across all platforms/compilers. The only way I see us getting closer to the ideal is more machines dedicated purely to Boost testing...
Or going to an active system like Buildbot.
I don't think the existence of Buildbot solves all of our resource issues.
Nothing can solve every problem, unfortunately.
I would expect only a limited number of the current regression testers will be able to install and use Buildbot -- I'm certain there will be firewall and other issues for some that just stop this from happening.
Proxies can solve most firewall problems, so I wouldn't worry too much about that. As for the requirements of running Buildbot itself, they are equivalent to those of using the current regression.py script. But yes there will be issues just getting the setup working.
Plus if it takes 5 hours to run a Boost build you will still have a long delay before you find out if something is broken.
At minimum, with Buildbot you can see the build log live. So if the build you triggered only covers a small part of Boost, you'll see results almost immediately. Obviously I'm making two assumptions here: that we can resolve some of the incremental-testing problems, and that your changes don't trigger a Boost-wide rebuild, as changing something like type traits would.
Most developers would like to see a library-focused rebuild, which in most cases could finish in minutes. For example, since almost nothing in Boost depends on date-time, it's very hard for me to break all of Boost. So rerunning the whole Boost regression for a date-time check-in is mostly a waste of resources.
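The selection logic being argued for here is simple to state: rerun the changed library's tests plus the tests of everything that depends on it. A minimal sketch, with an invented reverse-dependency map (not Boost's real dependency data), might look like this:

```python
# Sketch only: pick which library test suites to rerun after a check-in.
# The dependency map is illustrative, not Boost's actual dependency graph.
DEPENDENTS = {
    # library -> libraries that depend on it
    'type_traits': {'date_time', 'regex', 'lambda'},
    'date_time': set(),  # almost nothing depends on date-time
}

def suites_to_rerun(changed_lib, dependents=DEPENDENTS):
    """Return the changed library plus everything that transitively
    depends on it, so a date-time check-in reruns only date-time."""
    todo, seen = [changed_lib], set()
    while todo:
        lib = todo.pop()
        if lib in seen:
            continue
        seen.add(lib)
        todo.extend(dependents.get(lib, ()))
    return seen
```

With this map, a date-time change reruns one suite, while a type-traits change fans out to everything that uses it, matching the "hard for me to break all of Boost" observation.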
Definitely. I made the suggestion earlier that we should break up the testing so that some testers can devote resources to testing only subsets of Boost: http://permalink.gmane.org/gmane.comp.lib.boost.testing/392 (I know it's a long post; the scalability section is what I'm referring to.)

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
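One way the subset-testing idea from the linked post could be realized: give each tester a stable, deterministic slice of the library list, so the union of all testers covers Boost without any single machine running everything. This is a sketch under assumed names; the hashing scheme is one arbitrary choice among many.

```python
# Sketch: deterministically assign each tester a subset of Boost libraries.
# Library and tester names here are illustrative.
import zlib

def subset_for(tester, libraries, testers):
    """Return the libraries this tester is responsible for.

    A stable hash of each library name, taken modulo the number of
    testers, partitions the libraries: every library lands with exactly
    one tester, and the assignment never changes between runs.
    """
    idx = testers.index(tester)
    return sorted(lib for lib in libraries
                  if zlib.crc32(lib.encode()) % len(testers) == idx)
```

The deterministic split matters: a library's results always come from the same machine, which keeps history comparable from run to run.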