
As an experiment, I've got a script that checks svn every 15 minutes and, if there is a change, runs the regression tests and uploads the results to a web site.

For current VC++ results, see http://mysite.verizon.net/beman/win32-trunk-results.html. On these runs the reporting is set to show a row only if the result is something other than a simple "pass" without a link.

For slightly outdated Linux results, see http://mysite.verizon.net/beman/ubuntu-trunk-results.html. Here the results show all tests, regardless of result.

The idea is to supplement the regular regression tests at http://beta.boost.org/development/tests/trunk/developer/serialization.html with rapid-turnaround tests on a few major platforms, aimed at giving developers a quick response as they try to fix bugs affecting platforms they don't have access to. At the current level of failures, the Win32 tests turn around in 11 minutes, while the Linux tests turn around in 4 minutes. Thus, worst case, developers can see test results in 26 minutes!

It seems to me this might be helpful to developers. Opinions?

Presumably in regular use the upload would go to a centralized site. Or maybe decentralization is useful for reliability; a boost web page could contain links. Opinions on that?

Which reporting do you prefer? My preference is just to show the tests with something to report.

--Beman
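For readers curious what such a poll-and-test loop looks like, here is a minimal sketch in POSIX shell. The repository URL, the helper names, and the upload step are assumptions for illustration, not Beman's actual script.

```shell
#!/bin/sh
# Sketch of a 15-minute svn polling loop: fetch the head revision, and run
# the regression tests only when it has moved since the last cycle.

head_revision() {
    # Ask the repository for its current head revision number.
    svn info "$1" 2>/dev/null | sed -n 's/^Revision: //p'
}

should_run() {
    # $1 = last revision tested, $2 = current head revision.
    # Prints "run" only when a new, non-empty revision is seen.
    last=$1
    current=$2
    if [ -n "$current" ] && [ "$current" != "$last" ]; then
        echo run
    else
        echo skip
    fi
}

# Main loop (commented out so the sketch stands alone; run_regression_tests
# and upload_results are hypothetical helpers):
# last=none
# while true; do
#     rev=$(head_revision https://svn.boost.org/svn/boost/trunk)
#     if [ "$(should_run "$last" "$rev")" = run ]; then
#         run_regression_tests && upload_results
#         last=$rev
#     fi
#     sleep 900    # 15 minutes
# done
```

Keeping the "has anything changed?" decision in its own function means the loop does no work at all on quiet cycles, which is what makes a 15-minute poll cheap to run continuously.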

Beman Dawes wrote:
As an experiment, I've got a script that is checking svn every 15 minutes, and if there is a change, running the regression tests and uploading results to a web site.
For current VC++ results, see http://mysite.verizon.net/beman/win32-trunk-results.html. On these runs the reporting is set to show a row only if the result is something other than a simple "pass" without a link.
For slightly outdated Linux results, see http://mysite.verizon.net/beman/ubuntu-trunk-results.html. Here the results show all tests, regardless of result.
The idea is to supplement the regular regression tests at http://beta.boost.org/development/tests/trunk/developer/serialization.html with rapid-turnaround tests on a few major platforms, aimed at giving developers a quick response as they try to fix bugs affecting platforms they don't have access to. At the current level of failures, the Win32 tests turn around in 11 minutes, while the Linux tests turn around in 4 minutes. Thus, worst case, developers can see test results in 26 minutes!
It seems to me this might be helpful to developers. Opinions?
I think this would be very valuable.
Presumably in regular use the upload would go to a centralized site. Or maybe decentralization is useful for reliability; a boost web page could contain links. Opinions on that?
Decentralization is fine with me, but there needs to be something on the boost site that points to all of them.
Which reporting do you prefer? My preference is just to show the tests with something to report.
I like to see all the tests -- just so that I know they were actually run. Jeff

Jeff Garland wrote:
Beman Dawes wrote:
Presumably in regular use the upload would go to a centralized site. Or maybe decentralization is useful for reliability; a boost web page could contain links. Opinions on that?
Decentralization is fine with me, but there needs to be something on the boost site that points to all of them.
http://beta.boost.org/development/testing.html ...Which reminds me I need to add pointers to the meta-comm result pages now that they are running again. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Jeff Garland wrote:
Beman Dawes wrote:
As an experiment, I've got a script that is checking svn every 15 minutes, and if there is a change, running the regression tests and uploading results to a web site.
For current VC++ results, see http://mysite.verizon.net/beman/win32-trunk-results.html. On these runs the reporting is set to show a row only if the result is something other than a simple "pass" without a link.
For slightly outdated Linux results, see http://mysite.verizon.net/beman/ubuntu-trunk-results.html. Here the results show all tests, regardless of result.
The idea is to supplement the regular regression tests at http://beta.boost.org/development/tests/trunk/developer/serialization.html with rapid-turnaround tests on a few major platforms, aimed at giving developers a quick response as they try to fix bugs affecting platforms they don't have access to. At the current level of failures, the Win32 tests turn around in 11 minutes, while the Linux tests turn around in 4 minutes. Thus, worst case, developers can see test results in 26 minutes!
It seems to me this might be helpful to developers. Opinions?
I think this would be very valuable.
Presumably in regular use the upload would go to a centralized site. Or maybe decentralization is useful for reliability; a boost web page could contain links. Opinions on that?
Decentralization is fine with me, but there needs to be something on the boost site that points to all of them.
Yep.
Which reporting do you prefer? My preference is just to show the tests with something to report.
I like to see all the tests -- just so that I know they were actually run.
Yes, I worry about that too. Maybe I'll start out showing all results, and then reassess later depending on experience. I've stopped running the tests for now because I'm in the middle of switching to a new machine. If all goes well, they will start running regularly on Tuesday or Wednesday. --Beman

Beman Dawes <bdawes <at> acm.org> writes:
As an experiment, I've got a script that is checking svn every 15 minutes, and if there is a change, running the regression tests and uploading results to a web site.
Does it make sense to wait until there have been no changes in the last 15 minutes or so, to avoid getting in the middle of someone committing a series of independent commits?
It seems to me this might be helpful to developers. Opinions?
It is helpful. Gennadiy
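Gennadiy's quiet-period idea could be sketched as a simple gate in the polling loop: only start a test cycle once the newest commit is old enough. The age check below is testable on its own; the svn commands in the comment are assumptions about how one might feed it, not an actual implementation.

```shell
#!/bin/sh
# "Quiet period" gate: proceed only when the newest commit is at least
# $3 seconds old, so a developer mid-way through a series of commits
# isn't caught half-done.

quiet_enough() {
    # $1 = epoch seconds of the newest commit
    # $2 = current epoch seconds
    # $3 = required quiet period in seconds
    [ $(( $2 - $1 )) -ge "$3" ]
}

# Inside a polling loop one might gate the test run like this (assumed
# commands; svn log --xml emits an ISO-8601 <date> element, and GNU
# date -d converts it to epoch seconds):
# newest=$(svn log -l 1 --xml "$REPO" | sed -n 's:.*<date>\(.*\)</date>.*:\1:p')
# quiet_enough "$(date -d "$newest" +%s)" "$(date +%s)" 900 || continue
```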

Gennadiy Rozental wrote:
Beman Dawes <bdawes <at> acm.org> writes:
As an experiment, I've got a script that is checking svn every 15 minutes, and if there is a change, running the regression tests and uploading results to a web site.
Does it make sense to wait until there have been no changes in the last 15 minutes or so, to avoid getting in the middle of someone committing a series of independent commits?
IIUC, one of the advantages of Subversion is that it does a commit of multiple files as a single unit. Thus I'm not sure we need to worry about partial commits. Developers should only do independent commits of changes that are truly independent. --Beman

Sebastian Redl wrote:
Beman Dawes wrote:
Developers should only do independent commits of changes that are truly independent.
Unless you've got a great idea about how to enforce this policy, I think we do have to worry about developers making multiple commits for a single changeset.
The regular regression tests suffer from the same problem. I really think developer education is the solution. --Beman

on Mon Oct 08 2007, Sebastian Redl <sebastian.redl-AT-getdesigned.at> wrote:
Beman Dawes wrote:
Developers should only do independent commits of changes that are truly independent.
Unless you've got a great idea about how to enforce this policy, I think we do have to worry about developers making multiple commits for a single changeset.
I'll worry about it by telling people not to do that. Otherwise they will get reports that they have broken the build. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

on Thu Oct 04 2007, Beman Dawes <bdawes-AT-acm.org> wrote:
As an experiment, I've got a script that is checking svn every 15 minutes, and if there is a change, running the regression tests and uploading results to a web site.
For current VC++ results, see http://mysite.verizon.net/beman/win32-trunk-results.html. On these runs the reporting is set to show a row only if the result is something other than a simple "pass" without a link.
That part doesn't seem to be working.
For slightly outdated Linux results, see http://mysite.verizon.net/beman/ubuntu-trunk-results.html. Here the results show all tests, regardless of result.
The idea is to supplement the regular regression tests at http://beta.boost.org/development/tests/trunk/developer/serialization.html with rapid-turnaround tests on a few major platforms, aimed at giving developers a quick response as they try to fix bugs affecting platforms they don't have access to. At the current level of failures, the Win32 tests turn around in 11 minutes, while the Linux tests turn around in 4 minutes. Thus, worst case, developers can see test results in 26 minutes!
It seems to me this might be helpful to developers. Opinions?
I don't know. This seems to use some older regression reporting scripts; do they take into account explicit failure markup? -- Dave Abrahams Boost Consulting http://www.boost-consulting.com
participants (6)
- Beman Dawes
- David Abrahams
- Gennadiy Rozental
- Jeff Garland
- Rene Rivera
- Sebastian Redl