
Rene Rivera wrote:
Beman Dawes wrote:
The Development and Release Practices trac wiki page has been updated. See http://svn.boost.org/trac/boost/wiki/ImprovingPractices
Suggestion for automated release testing criteria:
* Release is tested continuously until there are no regressions for each release platform.
* The continuous release testing is restarted when new changes are present since the last regression-free test point.
I think those two simple rules cover all the testing use cases, and it is likely we can implement them either manually (through email exchanges) or automated (for example with Buildbot). One particular case of importance is that in the past some tests, irrespective of the quality of the test platform, have failed intermittently. Testing continuously covers those situations.
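The two rules above could be sketched as a simple loop: keep testing until a run comes back regression-free, and restart whenever new changes landed while that run was in progress. This is only an illustration; the `latest_revision` and `run_tests` callables are hypothetical stand-ins for whatever the manual or Buildbot-based process actually uses.

```python
def continuous_release_test(latest_revision, run_tests):
    """Test a branch until a run is regression-free (rule 1) and no
    newer changes appeared since that run started (rule 2).

    latest_revision() -> current revision of the branch under test
    run_tests(rev)    -> list of regressions found testing that revision
    Both are hypothetical hooks, not part of any existing Boost tool.
    """
    while True:
        tested_rev = latest_revision()
        regressions = run_tests(tested_rev)
        if regressions:
            continue                      # rule 1: regressions, keep testing
        if latest_revision() != tested_rev:
            continue                      # rule 2: new changes, restart
        return tested_rev                 # clean at the newest revision
```

In a simulated run where revision 100 fails once and revision 101 then tests clean, the loop exercises both rules before declaring the branch regression-free.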
You could provide an option in the regression.py script to check a web page to see which boost roots currently need testing. Some testers would prefer to always test against svn/boost/trunk, but others would be willing to activate the option and test against the roots where we currently need testing. Sometimes that will just be the main trunk, but at other times the release trunk will also need testing, and if a breaking change is pending we may want to test some branch, too.
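Such an option might look like the sketch below. The status URL and its one-branch-per-line plain-text format are assumptions for illustration; no such page exists, and the falling-back-to-trunk behavior is one possible design, not part of the actual regression.py.

```python
from urllib.request import urlopen

# Hypothetical status page listing one branch path per line; this URL
# and format are assumptions, not an existing boost.org service.
STATUS_URL = "http://svn.boost.org/testing-needed.txt"
DEFAULT_ROOTS = ["svn/boost/trunk"]

def parse_roots(text):
    """Parse the status page: one branch path per line, with blank
    lines and '#' comments ignored; fall back to trunk if empty."""
    roots = [line.strip() for line in text.splitlines()]
    return [r for r in roots if r and not r.startswith("#")] or DEFAULT_ROOTS

def roots_to_test(use_status_page):
    """Return the list of roots a tester should run against."""
    if not use_status_page:
        return DEFAULT_ROOTS            # tester opted to stay on trunk
    try:
        with urlopen(STATUS_URL) as page:
            return parse_roots(page.read().decode("utf-8"))
    except OSError:
        return DEFAULT_ROOTS            # page unreachable: fall back
```

Keeping the parsing separate from the fetch means the release manager can change which roots need testing by editing one web page, without testers updating their scripts.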
One issue that this brings up is how we choose the release platforms. In the past it was an incremental choice: when the testing was clear for a platform and the test machine was reliable, the release manager would deem it a release platform. Is this approach still sufficient?
I'm going to suggest that we release on a quarterly basis. It may be easiest if the release manager, after consultation with the list, sets the release criteria compilers at the start of each release cycle and doesn't normally change them during the cycle.
And regardless, we should document this choice.
Yes, definitely. --Beman