[Important] Boost Subversion repository is now online

Hello all,

The Boost Subversion repository is now back online. All of the files in CVS (including their histories) have been imported into the Subversion repository. CVS is still available for anonymous, read-only access for now, but will not be updated.

The main Boost development branch is available via anonymous, read-only checkout at:

  http://svn.boost.org/svn/boost/trunk/

Or for developer read/write access at:

  https://svn.boost.org/svn/boost/trunk/

Information about accessing the Boost Subversion repository is available at:

  http://svn.boost.org/trac/boost/wiki/BoostSubversion

Please report any problems to me or to the main Boost list, and we will try to resolve them as quickly as possible.

We still need help porting the regression-testing script over to use Subversion. See Trac ticket #1122:

  http://svn.boost.org/trac/boost/ticket/1122

Also, there is more documentation that will need to be updated within the Boost tree.

- Doug

Subversion tip: when you commit a change to Subversion that fixes ticket number NNN, include the text "Fixes #NNN" in your commit log. Trac will automatically close the ticket and cross-reference the commit with the ticket. See, for example, http://svn.boost.org/trac/boost/changeset/38330
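For example, a commit that closes a hypothetical ticket #1234 might look like this (both the message and the ticket number are made up for illustration):

  svn commit -m "Fix broken links in the Getting Started guide. Fixes #1234"

Trac then closes #1234 and links the changeset from the ticket page automatically.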

Douglas Gregor wrote: ...
The main Boost development branch is available via anonymous, read-only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
https://svn.boost.org/svn/boost/trunk/
Information about accessing the Boost Subversion repository is available at:
So I've made the mistake of checking out https://svn.boost.org/svn/boost/

This is somewhat excessive. :-) How do people manage to keep their working copy reasonably small while still keeping an eye on non-trunk parts and a branch or two as needed?

On Jul 31, 2007, at 7:31 PM, Peter Dimov wrote:
So I've made the mistake of checking out
https://svn.boost.org/svn/boost/
This is somewhat excessive. :-)
That's a big checkout. Thanks for stress-testing our server <G>
How do people manage to keep their working copy reasonably small while still keeping an eye on non-trunk parts and a branch or two as needed?
I just check out several branches separately, although I imagine there are other solutions. - Doug

Peter Dimov wrote:
Douglas Gregor wrote: ...
The main Boost development branch is available via anonymous, read-only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
https://svn.boost.org/svn/boost/trunk/
Information about accessing the Boost Subversion repository is available at:
So I've made the mistake of checking out
https://svn.boost.org/svn/boost/
This is somewhat excessive. :-) How do people manage to keep their working copy reasonably small while still keeping an eye on non-trunk parts and a branch or two as needed?
I believe you can check out https://svn.boost.org/svn/boost/ with the -N flag. Then, change to the checked-out directory and do:

  svn up trunk
  svn up super_interesting_branch

Then, "svn up" in the top-level dir will update only the branches you care about. - Volodya
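A minimal sketch of that sequence (the branch name is hypothetical, and the extra -N update of the branches directory assumes the pre-1.5 behavior of non-recursive checkouts, which would otherwise pull in every branch):

  svn checkout -N https://svn.boost.org/svn/boost/ boost
  cd boost
  svn up trunk                          # full trunk
  svn up -N branches                    # the branches directory itself, empty
  svn up branches/super_interesting_branch

Afterwards, a plain "svn up" at the top level updates only what you pulled in.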

-----Original Message-----
From: boost-bounces@lists.boost.org on behalf of Peter Dimov

So I've made the mistake of checking out https://svn.boost.org/svn/boost/ This is somewhat excessive. :-) How do people manage to keep their working copy reasonably small while still keeping an eye on non-trunk parts and a branch or two as needed?

-----End Original Message-----

Peter,

If you don't need to work on different branches too often, then svn switch can help in multiple ways. One way is that you only have one working copy. In terms of speed, on a large repository (30k+ files), it takes about 2-3 minutes to switch. Also, as svn switch will only touch files that are different between the branches, you will only rebuild what is different, so you more than make up for the 2-3 minutes of switching time in time spent *not* building**.

HTH,
Sohail

** Unless one of the common diffs is a common header file.
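A minimal sketch of the workflow Sohail describes, run from the top of an existing trunk working copy (the branch name is hypothetical):

  svn switch https://svn.boost.org/svn/boost/branches/some_feature
  # build, test, and commit on the branch, then return:
  svn switch https://svn.boost.org/svn/boost/trunk

Since switch only rewrites the files that differ between the two locations, the rebuild after each switch is correspondingly small.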

Douglas Gregor wrote:
Please report any problems to me or to the main Boost list, and we will try to resolve them as quickly as possible.
I just checked out the svn repository, and I noticed that every file I ever deleted from the cvs repository is back. For instance, the entire directory "boost/xpressive/detail/static/productions" is old dead code I deleted eons ago, and now it lives again. This is a major problem, if it's happening to other people, too.

--
Eric Niebler
Boost Consulting
www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com

Eric Niebler wrote:
Douglas Gregor wrote:
Please report any problems to me or to the main Boost list, and we will try to resolve them as quickly as possible.
I just checked out the svn repository, and I noticed that every file I ever deleted from the cvs repository is back. For instance, the entire directory "boost/xpressive/detail/static/productions" is old dead code I deleted eons ago, and now it lives again. This is a major problem, if it's happening to other people, too.
Hm, one instance of removed files I know of is the old Boost.Build version at tools/build/v1. I don't see it in the checkout I just finished doing, nor any of the older BBv1 files that lived in tools/build. But I see the same extra files you mention.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

On Jul 31, 2007, at 7:42 PM, Eric Niebler wrote:
Douglas Gregor wrote:
Please report any problems to me or to the main Boost list, and we will try to resolve them as quickly as possible.
I just checked out the svn repository, and I noticed that every file I ever deleted from the cvs repository is back. For instance, the entire directory "boost/xpressive/detail/static/productions" is old dead code I deleted eons ago, and now it lives again. This is a major problem, if it's happening to other people, too.
I'm guessing this is related to http://cvs2svn.tigris.org/faq.html#atticprob and is caused by corruption in the SourceForge CVS repository that had some files in both a CVS directory and in the Attic. I don't know how this corruption occurred, but the safest route (from the perspective of preserving history) is to keep both copies of the files. That may be what happened here. I wonder what other files are affected... I'll be happy to clean this up if we can figure out how to identify these files. - Doug

Douglas Gregor wrote:
On Jul 31, 2007, at 7:42 PM, Eric Niebler wrote:
Douglas Gregor wrote:
Please report any problems to me or to the main Boost list, and we will try to resolve them as quickly as possible.
I just checked out the svn repository, and I noticed that every file I ever deleted from the cvs repository is back. For instance, the entire directory "boost/xpressive/detail/static/productions" is old dead code I deleted eons ago, and now it lives again. This is a major problem, if it's happening to other people, too.
I'm guessing this is related to http://cvs2svn.tigris.org/faq.html#atticprob and is caused by corruption in the SourceForge CVS repository that had some files in both a CVS directory and in the Attic. I don't know how this corruption occurred, but the safest route (from the perspective of preserving history) is to keep both copies of the files. That may be what happened here.
I wonder what other files are affected... I'll be happy to clean this up if we can figure out how to identify these files.
Diff the trees?

I just tried to clean up xpressive's files, and found that I can't write to the subversion repository:

$ svn commit -m "delete old data resurrected in the switch to svn" detail/static/productions/ detail/static/transforms/fold_to_xxx.hpp detail/static/transforms/transform.hpp proto/compile.hpp proto/compiler/ proto/transform/conditional.hpp proto/transform/fold_to_list.hpp
Deleting       detail/static/productions
svn: Commit failed (details follow):
svn: CHECKOUT of '/svn/boost/!svn/ver/38329/trunk/boost/xpressive/detail/static': 403 Forbidden (https://svn.boost.org)

:-(

--
Eric Niebler
Boost Consulting
www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com

On Aug 1, 2007, at 12:47 AM, Eric Niebler wrote:
I wonder what other files are affected... I'll be happy to clean this up if we can figure out how to identify these files.
Diff the trees?
Will do.
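A minimal sketch of such a diff, assuming an up-to-date CVS working copy in ./boost-cvs and a fresh export of the new trunk in ./boost-svn:

  svn export http://svn.boost.org/svn/boost/trunk boost-svn
  diff -rq --exclude=CVS boost-cvs boost-svn

Files reported as "Only in boost-svn" would be the candidates for resurrected Attic files.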
I just tried to clean up xpressive's files, and found that I can't write to the subversion repository:
$ svn commit -m "delete old data resurrected in the switch to svn" detail/static/productions/ detail/static/transforms/fold_to_xxx.hpp detail/static/transforms/transform.hpp proto/compile.hpp proto/compiler/ proto/transform/conditional.hpp proto/transform/fold_to_list.hpp
Deleting       detail/static/productions
svn: Commit failed (details follow):
svn: CHECKOUT of '/svn/boost/!svn/ver/38329/trunk/boost/xpressive/detail/static': 403 Forbidden (https://svn.boost.org)
*Smacks forehead* It's fixed, now. - Doug

Douglas Gregor wrote:
Hello all,
The Boost Subversion repository is now back online. All of the files in CVS (including their histories) have been imported into the Subversion repository. CVS is still available for anonymous, read-only access for now, but will not be updated.
The main Boost development branch is available via anonymous, read-only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
Awesome. Thanks for working on this !

What are the next steps ? If I understand correctly, the 1_34_0 branch should now be copied to, say, 'stable', such that at regular intervals things can be merged in from the trunk. Am I reading the suggested procedure correctly ? (And then, at some point, 'stable' can be branched to '1_35', etc....)

Also, what branches are the tests being run on, and triggered by what event ? I'd expect some testing on trunk, and some on stable (though at least the latter not nightly, since check-ins should occur less frequently). Correct ?

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

On Aug 1, 2007, at 12:02 AM, Stefan Seefeld wrote:
What are the next steps ? If I understand correctly, the 1_34_0 branch should now be copied to, say, 'stable', such that at regular intervals things can be merged in from the trunk. Am I reading the suggested procedure correctly ? (And then, at some point, 'stable' can be branched to '1_35', etc....)
That is my understanding, although IIRC, the last discussion ended up with, "We can finalize the new procedure later, once we have moved to Subversion." Personally, I'd like to see us find a good way to turn "stable" into an actual release branch of "trunk", with the appropriate svnmerge.py tags to make it easy to keep it up-to-date. The trunk/stable divergence is really bad for future development.
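A sketch of what that could look like with svnmerge.py, run inside a working copy of the release branch (the URL and revision number are hypothetical; the commands are the tool's standard init/avail/merge cycle):

  svnmerge.py init https://svn.boost.org/svn/boost/trunk
  svn commit -F svnmerge-commit-message.txt
  # then, whenever merging from trunk:
  svnmerge.py avail           # list trunk revisions not yet merged
  svnmerge.py merge -r 38400  # merge a specific revision
  svn commit -F svnmerge-commit-message.txt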
Also, what branches are the tests being run on, and triggered by what event ? I'd expect some testing on trunk, and some on stable (though at least the latter not nightly, since check-ins should occur less frequently). Correct ?
Until someone gets regression.py updated to work with Subversion, no tests are run. My understanding of the new testing scheme is that most of the testing will go on the trunk, but that we'll also periodically test the stable branch (less frequently). No triggers; I expect perhaps 2 days of the week will test stable, the rest testing trunk. Or, for those with the resources, test both nightly. - Doug

Douglas Gregor wrote:
On Aug 1, 2007, at 12:02 AM, Stefan Seefeld wrote:
What are the next steps ? If I understand correctly, the 1_34_0 branch should now be copied to, say, 'stable', such that at regular intervals things can be merged in from the trunk. Am I reading the suggested procedure correctly ? (And then, at some point, 'stable' can be branched to '1_35', etc....)
That is my understanding, although IIRC, the last discussion ended up with, "We can finalize the new procedure later, once we have moved to Subversion." Personally, I'd like to see us find a good way to turn "stable" into an actual release branch of "trunk", with the appropriate svnmerge.py tags to make it easy to keep it up-to-date. The trunk/stable divergence is really bad for future development.
Yes. I think it would be good to a) identify a 'stable' branch (from which the next release branch will fork) and b) establish a policy concerning checkins (for example, merges of stable change sets from trunk). Right after 1.34 came out, people suggested that 1.35 should follow shortly, with the most important changes being new library additions that were accepted before 1.34 but didn't make it into the 1.34 release branch. Thus, now would be a good time to allow project owners of such libraries to work on that.
Also, what branches are the tests being run on, and triggered by what event ? I'd expect some testing on trunk, and some on stable (though at least the latter not nightly, since check-ins should occur less frequently). Correct ?
Until someone gets regression.py updated to work with Subversion, no tests are run. My understanding of the new testing scheme is that most of the testing will go on the trunk, but that we'll also periodically test the stable branch (less frequently). No triggers; I expect perhaps 2 days of the week will test stable, the rest testing trunk. Or, for those with the resources, test both nightly.
I'm looking forward to a buildbot setup that makes such things easier. (Rene, if you need help, I'd like to contribute...) In particular, writing schedulers that trigger builds / test runs either by time or by checkins ("triggered by a checkin but no earlier than x minutes after a checkin", "no more than twice a week", etc.). Having regular test runs on 'stable' should be a requirement for allowing merges from trunk, to avoid the branch becoming unstable.

On a related note: the boost tracker has milestones for the 1.34.1 release (still not closed) and 1.35.0. It would be good, in addition to the existing issues assigned to 1.35, to define goals, such as what new libraries are expected to be merged / integrated. This has the advantage that users know what to expect from the next release, and developers know what work remains to be done.

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

Douglas Gregor wrote:
Until someone gets regression.py updated to work with Subversion, no tests are run. My understanding of the new testing scheme is that most of the testing will go on the trunk, but that we'll also periodically test the stable branch (less frequently). No triggers; I expect perhaps 2 days of the week will test stable, the rest testing trunk. Or, for those with the resources, test both nightly.
To me, this doesn't sound anything like Beman's proposal, which seemed to have gained a consensus.

Robert Ramey

On Aug 1, 2007, at 11:25 AM, Robert Ramey wrote:
Douglas Gregor wrote:
Until someone gets regression.py updated to work with Subversion, no tests are run. My understanding of the new testing scheme is that most of the testing will go on the trunk, but that we'll also periodically test the stable branch (less frequently). No triggers; I expect perhaps 2 days of the week will test stable, the rest testing trunk. Or, for those with the resources, test both nightly.
To me, this doesn't sound anything like Beman's proposal which seemed to have gained a consensus.
Frankly, I lost track of that discussion, so my impression of its result may be wrong. Since this is not related to Subversion, it belongs in a different thread. - Doug

Doug Gregor wrote:
On Aug 1, 2007, at 11:25 AM, Robert Ramey wrote:
Douglas Gregor wrote:
Until someone gets regression.py updated to work with Subversion, no tests are run. My understanding of the new testing scheme is that most of the testing will go on the trunk, but that we'll also periodically test the stable branch (less frequently). No triggers; I expect perhaps 2 days of the week will test stable, the rest testing trunk. Or, for those with the resources, test both nightly.
To me, this doesn't sound anything like Beman's proposal which seemed to have gained a consensus.
Frankly, I lost track of that discussion, so my impression of its result may be wrong.
Since this is not related to Subversion, it belongs in a different thread.
Hmmm - well, I think you're wrong about that - but it's easy to make a new thread. The thrust of Beman's proposal is actually quite simple. It consists of:

a) designate a branch/trunk as the "Current Release".

b) ALL development occurs on branches.

c) Testing is applied to branches as requested.

d) At the discretion of the release manager, Development branches are merged into the "Current Release" and the whole system is tested.

e) Each time the "Current Release" passes more tests than the previous one, a tag is added by the release manager and a new download package is automatically created. I would anticipate this happening about once/month.

The only thing we're missing right now is c) - which I believe will be doable in the near future - and a set of "best practices" for developers and the release manager. This is just a question of agreeing on how to use SVN as regards branches. If you had nothing else to do, you could make the "Current Release" /main/trunk etc. ONLY updateable by the release manager, who would do this by merging in branches which have passed their tests. Then we'd be in business.

Robert Ramey
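For e), a tag in Subversion is just a cheap server-side copy; a sketch, with hypothetical branch and tag names:

  svn copy https://svn.boost.org/svn/boost/branches/release \
           https://svn.boost.org/svn/boost/tags/1_35_0 \
           -m "Tag the 1.35.0 release"

The download package could then be built from the tag URL by a script.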

Robert Ramey wrote:
a) designate a branch/trunk as the "Current Release".
That's what I'm referring to as 'stable'. The question is what that gets created from, since it doesn't exist, yet.
b) ALL development occurs on branches.
I'm not sure what that means, given how subversion handles branches. The difference between 'trunk' and 'branches/something' is only in the naming.
c) Testing is applied to branches as requested.
I believe how test runs are triggered most efficiently depends on the usage patterns. Ideally (i.e. with infinite resources), test runs would be triggered on each change. If that isn't possible, alternative approaches can be chosen, such as 'no earlier than x minutes after a checkin', to allow developers to make multiple connected checkins in a row (though with subversion there shouldn't be any need for that, in contrast to cvs). Or, "triggered by checkins but no more frequent than once per day". Etc. (See http://buildbot.net/repos/release/docs/buildbot.html#Schedulers)
d) At the discretion of the release manager, Development branches are merged into the "Current Release" and the whole system is tested.
Does this imply that each individual feature (as defined by something that is meant to be merged into 'stable' as a whole) will be developed in isolation, on its own branch ? I'm not sure how practical that would be. In any case, I agree to the point that there should be relatively few, but coarse grained, checkins on the 'stable' branch, which can be backed out as a whole if any regressions occur.
e) Each time the "Current Release" passes more tests than the previous one, a tag is added by the release manager and a new download package is automatically created. I would anticipate this happening about once/month.
As above, I'm not sure what the tag is good for, with a repository that has atomic / global revisions. Just remembering the revision number that contains a new feature the first time should be sufficient. Packaging automatically (after a successful test run) can and should be automated with buildbot, too. So, in this light, the release manager's job would be to decide which patches / features to merge from development branches to stable, based on the current release's life cycle.
If you had nothing else to do, you could make the "Current Release" /main/trunk etc ONLY updateable by the release manager. Who would do this by merging in branches which have passed their tests. Then we'd be in business
Actually I don't think it is practical to have a single person do all this. That would create a huge bottleneck. The most important thing to do is formalize the development process as far as version management is concerned, to be able to easily and quickly rollback anything that risks to destabilize the stable / release branch. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Robert Ramey wrote:
a) designate a branch/trunk as the "Current Release".
That's what I'm referring to as 'stable'. The question is what that gets created from, since it doesn't exist, yet.
Hmmm - actually it does - it's the RC_1_34_1 branch. We could just designate that as the "Current Release" (or whatever you want to call it). Effort required - 0
b) ALL development occurs on branches.
I'm not sure what that means, given how subversion handles branches. The difference between 'trunk' and 'branches/something' is only in the naming.
So, given how subversion handles branches, the cost/effort of creating a branch for each developer's changes is 0
c) Testing is applied to branches as requested.
I believe how test runs are triggered most efficiently depends on the usage patterns. Ideally (i.e. with infinite resources), test runs would be triggered on each change. If that isn't possible, alternative approaches can be chosen, such as 'no earlier than x minutes after a checkin', to allow developers to make multiple connected checkins in a row (though with subversion there shouldn't be any need for that, in contrast to cvs). Or, "triggered by checkins but no more frequent than once per day". Etc. (See http://buildbot.net/repos/release/docs/buildbot.html#Schedulers)
This is the missing piece. I believe it will be available in a relatively short time. The mechanism will be: tests of library x will be run on branch y by any tester interested in doing this. Tests can be run whenever a tester wants to - but it will really only be necessary when a developer requests it. The current testing on the "Current Release" can remain unchanged - though it's usually interesting only to the release manager.
d) At the discretion of the release manager, Development branches are merged into the "Current Release" and the whole system is tested.
Does this imply that each individual feature (as defined by something that is meant to be merged into 'stable' as a whole) will be developed in isolation, on its own branch ? I'm not sure how practical that would be.
LOL - one hell of a lot more practical than the current system whereby everything changes at once. I believe that this is the most widely accepted manner of using a system such as SVN. Failure to employ this practice has made life much more difficult than it has to be.
In any case, I agree to the point that there should be relatively few, but coarse grained, checkins on the 'stable' branch,
Good
which can be backed out as a whole if any regressions occur.
I would not expect regressions of such a drastic nature that the above would be necessary.
e) Each time the "Current Release" passes more tests than the previous one, a tag is added by the release manager and a new download package is automatically created. I would anticipate this happening about once/month.
As above, I'm not sure what the tag is good for, with a repository that has atomic / global revisions. Just remembering the revision number that contains a new feature the first time should be sufficient.
For you perhaps. But my memory is fading as I get older. It's much easier for me to remember boost 1.36 than 3.1415917872348376485. But basically you're correct - a tag is a convenient naming device to bridge the gap between our feeble brains and the computer systems we use.
Packaging automatically (after a successful test run) can and should be automated with buildbot, too.
I don't know what buildbot is - but it sounds like you're agreeing that packaging the release should be an automatic process which is run after the global test on the current release. If that's what you're suggesting, I'm sure everyone will agree.
So, in this light, the release manager's job would be to decide which patches / features to merge from development branches to stable, based on the current release's life cycle.
Correct - upon request from a library developer. The release manager would review the requests from developers, select the order he wants to merge them in, and for each one, merge in the developer branch, request the global testing, and invoke the release build script if he's satisfied.
If you had nothing else to do, you could make the "Current Release" /main/trunk etc ONLY updateable by the release manager. Who would do this by merging in branches which have passed their tests. Then we'd be in business
Actually I don't think it is practical to have a single person do all this.
What's impractical is what is being done now. Thomas's feat of getting all of boost and a new build system through the eye of a needle - all at the same time - was a heroic accomplishment. It is the last time that it can ever be done.
That would create a huge bottleneck.
It's the current system which is a huge bottleneck.
The most important thing to do is formalize the development process as far as version management is concerned,
That's what I'm trying to accomplish.
to be able to easily and quickly rollback anything that risks to destabilize the stable / release branch.
Totally, totally wrong here. The only way to make things work is to integrate pieces one at a time, in digestible chunks. Note that all this has been discussed in quite a bit of detail. Beman made a pretty clear proposal along these lines, and after quite a bit of discussion it seemed to have reached a consensus. All the points made above have been made in previous posts. I'm sure you can find them if you're interested.

Robert Ramey

Robert Ramey wrote:
Stefan Seefeld wrote:
Robert Ramey wrote:
a) designate a branch/trunk as the "Current Release".
That's what I'm referring to as 'stable'. The question is what that gets created from, since it doesn't exist, yet.
Hmmm - actually it does - it's the RC_1_34_1 branch. We could just designate that as the "Current Release" (or whatever you want to call it).
I've also been using the name "stable" for the "release ready" branch. And, yes, the starting point for "stable" is the current "RC_1_34_1" tag.
Effort required - 0
b) ALL development occurs on branches.
I'm not sure what that means, given how subversion handles branches. The difference between 'trunk' and 'branches/something' is only in the naming.
So, given how subversion handles branches, the cost/effort of creating a branch for each developer's changes is 0
c) Testing is applied to branches as requested.
I believe how test runs are triggered most efficiently depends on the usage patterns. Ideally (i.e. with infinite resources), test runs would be triggered on each change. If that isn't possible, alternative approaches can be chosen, such as 'no earlier than x minutes after a checkin', to allow developers to make multiple connected checkins in a row (though with subversion there shouldn't be any need for that, in contrast to cvs). Or, "triggered by checkins but no more frequent than once per day". Etc. (See http://buildbot.net/repos/release/docs/buildbot.html#Schedulers)
This is the missing piece. I believe it will be available in a relatively short time. The mechanism will be: tests of library x will be run on branch y by any tester interested in doing this. Tests can be run whenever a tester wants to - but it will really only be necessary when a developer requests it.
Right, although as a practical matter most developers will want to test against "stable".

I've been trying the following procedure for the last six or eight weeks: I've got a working copy "stable" which was checked out from the RC_1_34_1 tag. I'm doing some Boost.System development on a branch "c++0x". So I've switched the components involved (.../boost/system, .../boost/cerrno.hpp, .../libs/system) to the c++0x branch. I go about testing and development as usual on my Windows machine, committing c++0x changes every time I want Chris Kohlhoff and Peter Dimov, who are helping, to be able to access work-in-progress.

I've also got a Mac mini and a Linux box set up with "stable" working copies checked out from RC_1_34_1. I'm running a web server on those machines (to simulate remote machines out on the Internet). It has a "test-on-demand" web page, which fires off a CGI script written in Python that will switch a particular library to a specified branch, run bjam, upload four result files to the server, and switch the library back to the stable branch. So whenever I want to see if the code is working on the non-Windows platforms, I sign onto the web sites, request tests be run, and have the results in a couple of minutes.

Although the process needs a lot of polishing, it already works well enough to demonstrate the value of the approach. The tools involved are mainly just Subversion and bjam. The same approach would work with other testing frameworks.

The bottom line is that I know that code works *before* it gets merged into the stable branch. That's the critical point; the exact way the testing is done is important operationally, but those details don't matter as far as the big picture goes.
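The heart of such a test-on-demand script might look like the following (the branch and tag URLs are guesses at the repository layout, using Boost.System as the example library):

  # switch just one library to the branch under test
  svn switch https://svn.boost.org/svn/boost/branches/some_branch/boost/system boost/system
  svn switch https://svn.boost.org/svn/boost/branches/some_branch/libs/system libs/system
  # run that library's tests and capture the results
  (cd libs/system/test && bjam) > results.txt 2>&1
  # ...upload the result files, then switch back to stable
  svn switch https://svn.boost.org/svn/boost/tags/RC_1_34_1/boost/system boost/system
  svn switch https://svn.boost.org/svn/boost/tags/RC_1_34_1/libs/system libs/system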
The current testing on the "Current Release" can remain unchanged - though it's usually interesting only to the release manager.
Right.
In any case, I agree to the point that there should be relatively few, but coarse grained, checkins on the 'stable' branch,
Good
Yes, agreed.
which can be backed out as a whole if any regressions occur.
I would not expect regressions of such a drastic nature that the above would be necessary.
The point of testing before actually changing the stable branch is that regressions should become very rare. --Beman

Beman Dawes wrote:
Robert Ramey wrote:
Stefan Seefeld wrote:
c) Testing is applied to branches as requested.
I believe how test runs are triggered most efficiently depends on the usage patterns. Ideally (i.e. with infinite resources), test runs would be triggered on each change. If that isn't possible, alternative approaches can be chosen, such as 'no earlier than x minutes after a checkin', to allow developers to make multiple connected checkins in a row (though with subversion there shouldn't be any need for that, in contrast to cvs). Or, "triggered by checkins but no more frequent than once per day". Etc. (See http://buildbot.net/repos/release/docs/buildbot.html#Schedulers)
This is the missing piece. I believe it will be available in a relatively short time. The mechanism will be: tests of library x will be run on branch y by any tester interested in doing this. Tests can be run whenever a tester wants to - but it will really only be necessary when a developer requests it.
Right, although as a practical matter most developers will want to test against "stable".
What are they testing ? And what (and, more importantly, where) are they developing ?
I've been trying the following procedure for the six or eight weeks:
[...]
So whenever I want to see if the code is working on the non-Windows, I sign onto the web sites, request tests be run, and have the results in a couple of minutes.
For avoidance of doubt: the tests are run on your 'c++0x' branch, right ? How many such branches do you expect to coexist ? How many people do you expect to collaborate on such branches ? At what frequencies do you expect branch-specific testing requests to be issued ? Does the procedure scale ? Also, of course, such requests can only be issued for machines (platforms) that are readily available, right ?

I think this is where buildbot enters the picture. It allows one to set up a set of schedulers that control the actual testing, e.g. imposing constraints on how often tests may be run. That will help to manage the available (and probably rather scarce) resources: build slaves for the various platforms.
Although the process needs a lot of polishing, it already works well enough to demonstrate the value of the approach. The tools involved are mainly just Subversion and bjam. The same approach would work with other testing frameworks.
The bottom line is that I know that code works *before* it gets merged into the stable branch. That's the critical point; the exact way the testing is done is important operationally, but those details don't matter as far as the big picture goes.
Right. Again, for avoidance of doubt: do you expect the development branch to be created from the stable branch, to make sure a passing test on the development branch translates to a passing test on stable after a merge. Correct ?

I'm asking because this essentially means that stable becomes the only reference, throughout boost development. In fact, not only a reference, but a synchronization point. It becomes the developer's duty to backport all changes that go into stable from other development efforts, making sure the tests still pass, before forward-porting the local changes to stable. While I agree this sounds good, it also implies quite a bit of additional work for every developer.

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Right, although as a practical matter most developers will want to test against "stable".
What are they testing ? And what (and, more importantly, where) are they developing ?
They are testing changes to the libraries they are developing. They are depending upon only the last/next released version of boost.
So whenever I want to see if the code is working on the non-Windows, I sign onto the web sites, request tests be run, and have the results in a couple of minutes.
I presume that this "secret sauce" will sometime become available to other developers? I don't see it anywhere now.
For avoidance of doubt: the tests are run on your 'c++0x' branch, right ? How many such branches do you expect to coexist ?
approximately one per developer.
How many people do you expect to collaborate on such branches ?
one or two.
At what frequencies do you expect branch-specific testing requests to be issued ?
as needed - depending on how hard one is working it could be as often as once/day but I would expect 5-10 times for each major library revision.
Does the procedure scale ?
Very much so. Instead of testing the whole of boost - whether anyone needs it or not - only one library is tested at a time. Currently, the time to test grows quadratically: number of libraries x time to run a test, and the time to run a test grows with the number of libraries. Under the new system, testing will only grow linearly with the number of libraries, as only the library branch is tested on request. This is a fundamental motivation for this system.
Also, of course, such requests can only be issued for machines (platforms) that are readily available, right ?
LOL - this is "secret sauce" which is still secret - at least from me. I presume it will be revealed to the "rest of us" when it's "ready".
I think this is where buildbot enters the picture. It allows to set up a set of schedulers that control the actual testing, e.g. imposing constraints on how often tests may be run. That will help to manage the available (and probably rather scarce) resources: build slaves for the various platforms.
Something that does this function will be needed but I doubt it will be as elaborate as you suggest. But who knows - it seems it's still being experimented with.
Right. Again, for avoidance of doubt: do you expect the development branch to be created from the stable branch, to make sure a passing test on the development branch translates to a passing test on stable after a merge. Correct ?
Now you've hit upon the motivation for my original post. I was under the impression that the "trunk" would be the last released version. It turns out that it's not so. But no matter. With SVN there is no special status accorded "trunk"; we can just branch off the last release. The only thing we need is a set of "Best Practices" (or whatever one wants to call it) so we're all in sync.
I'm asking because this essentially means that stable becomes the only reference, throughout boost development. In fact, not only a reference, but a synchronization point. It becomes the developer's duty to backport all changes that go into stable from other development effords, making sure the tests still pass, before forward-porting the local changes to stable.
Hallelujah - you've got it !!!
While I agree this sounds good, it also implies quite a bit of additional work for every developer.
It's A LOT LESS work for the developer. Under the current (old) system, every time a test failed I would have to investigate whether it was due to a new error in my library or some change/error in something that the library depended upon. It consumed waaaaay too much time. I gave up committing changes except on a very infrequent basis. Turns out that the failures still occurred, but I knew they weren't mine so I could ignore them. Bottom line - testing was a huge waste of time, providing no value to a library developer.

Don't even start on the effort trying to get a release out when everything is changing at once.

And scaling: each boost release with one library added will basically be a pain in the neck for one developer and maybe the release manager (though not necessarily). Now it's a pain in the neck for ALL developers at once.

Robert Ramey

Robert Ramey wrote:
Stefan Seefeld wrote:
Right, although as a practical matter most developers will want to test against "stable". What are they testing ? And what (and, more importantly, where) are they developing ?
They are testing changes to the libraries they are developing. They are depending upon only the last/next released version of boost.
But no development is taking place on 'stable'. Why test against it (for other purposes than preparing a release) ?
For avoidance of doubt: the tests are run on your 'c++0x' branch, right ? How many such branches do you expect to coexist ?
approximately one per developer.
That doesn't answer my question, though: I'm wondering how many build / test requests need to be dealt with. Where do the testing resources come from ?
How many people do you expect to collaborate on such branches ?
one or two.
At what frequencies do you expect branch-specific testing requests to be issued ?
as needed - depending on how hard one is working it could be as often as once/day but I would expect 5-10 times for each major library revision.
I must be misunderstanding something fundamental. What is being tested ? The code under development ? Running tests on stable won't tell anything about my development branch.
Does the procedure scale ?
Very much so. Instead of testing the whole of boost - whether anyone needs it or not - only one library is tested at a time.
Indeed. That will remove redundancy.
Currently, the time to test grows quadratically: number of libraries x time to run a test, and the time to run a test grows with the number of libraries.
Huh ? That's only true if each test is run stand-alone (as opposed to incrementally, with an update instead of a fresh checkout). And even then, if I only build a test, only its prerequisites should be built. Since that shouldn't depend on the overall number of libraries, I don't see how the time to test is quadratic.

Under the new system, testing will only grow linearly with the number of libraries, as only the library branch is tested on request. This is a fundamental motivation for this system.
Yes.
Also, of course, such requests can only be issued for machines (platforms) that are readily available, right ?
LOL - this is "secret sauce" which is still secret - at least from me. I presume it will be revealed to the "rest of us" when it's "ready".
You make me curious. Is someone setting up a build farm ? All the more reason to set up a buildbot harness. :-) I think my main concern is that "on request" part. I believe there needs to be some scheduling to manage the resources, no matter how many there are, where they are located, and how they interact.
I think this is where buildbot enters the picture. It allows to set up a set of schedulers that control the actual testing, e.g. imposing constraints on how often tests may be run. That will help to manage the available (and probably rather scarce) resources: build slaves for the various platforms.
Something that does this function will be needed but I doubt it will be as elaborate as you suggest. But who knows - it seems it's still being experimented with.
Why so secretly ? Rene and I have been talking about a buildbot harness for many months now. I would very much appreciate it if things were handled a little more transparently, to avoid wasting effort.
Right. Again, for avoidance of doubt: do you expect the development branch to be created from the stable branch, to make sure a passing test on the development branch translates to a passing test on stable after a merge. Correct ?
Now you've hit upon the motivation for my original post. I was under the impression that the "trunk" would be the last released version. It turns out that it's not so. But no matter. With SVN there is no special status accorded "trunk"; we can just branch off the last release. The only thing we need is a set of "Best Practices" (or whatever one wants to call it) so we're all in sync.
We totally agree. That's what I was referring to as checkin policies for all available branches, trunk and stable included.
I'm asking because this essentially means that stable becomes the only reference, throughout boost development. In fact, not only a reference, but a synchronization point. It becomes the developer's duty to backport all changes that go into stable from other development effords, making sure the tests still pass, before forward-porting the local changes to stable.
Hallelujah - you've got it !!!
While I agree this sounds good, it also implies quite a bit of additional work for every developer.
It's A LOT LESS work for the developer. Under the current (old) system, every time a test failed I would have to investigate whether it was due to a new error in my library or some change/error in something that the library depended upon. It consumed waaaaay too much time. I gave up committing changes except on a very infrequent basis. Turns out that the failures still occurred, but I knew they weren't mine so I could ignore them. Bottom line - testing was a huge waste of time, providing no value to a library developer.
Don't even start on the effort trying to get a release out when everything is changing at once.
Don't worry. On that we very much agree, too. :-)

Regards,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

on Wed Aug 01 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
Robert Ramey wrote:
Stefan Seefeld wrote:
Right, although as a practical matter most developers will want to test against "stable". What are they testing ? And what (and, more importantly, where) are they developing ?
They are testing changes to the libraries they are developing. They are depending upon only the last/next released version of boost.
But no development is taking place on 'stable'. Why test against it (for other purposes than preparing a release) ?
I don't know if anyone else is having trouble with this, but I can't keep track of the definitions of terms such as "stable" in this discussion, and I suspect several of the participants may be using the same terms to mean different things. Is there a glossary somewhere? If not, would someone (preferably Beman, who has been at the center of this effort from the beginning) put one on the wiki? And can I count on everyone else to use the same terms and make up a new one when they mean something different?

Or am I the only one who's confused?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
Or am I the only one who's confused?
No, you're not. "stable" seems to be used with at least two different meanings during this discussion. (That's why I tried to avoid that term in my testing-oriented posts and used "tested branch" instead.) We need more discipline regarding commits and also regarding use of terms. Regards, m

David Abrahams wrote:
I don't know if anyone else is having trouble with this, but I can't keep track of the definitions of terms such as "stable" in this discussion, and I suspect several of the participants may be using the same terms to mean different things. Is there a glossary somewhere? If not, would someone (preferably Beman, who has been at the center of this effort from the beginning) put one on the wiki? And can I count on everyone else to use the same terms and make up a new one when they mean something different?
Or am I the only one who's confused?
Oh no - this has been an on-going problem in this discussion. Personally, I tried to stay away from "stable" because it doesn't capture any meaning for me. I prefer:

Release Branch (or Release Trunk) - branch into which libraries are merged after individual testing. Ideally, this would be read-only except for a release manager who merges in changes from a library which has passed library tests.

Current Release - release used as a basis for generating tarballs and such. This is a tag placed on the Release Branch.

Previous Release - tag on the Release Branch for the previous release.

Development Branch - development code for an individual library.

Library Tests - tests for a library, usually done on a Development Branch.

Comprehensive Tests - tests on ALL of boost - perhaps with a proposed library included.

But these terms don't really map to the current system very well, so I think there is still opportunity for confusion when trying to compare and contrast the two approaches.

Robert Ramey

David Abrahams wrote:
on Wed Aug 01 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
Robert Ramey wrote:
Stefan Seefeld wrote:
Right, although as a practical matter most developers will want to test against "stable". What are they testing ? And what (and, more importantly, where) are they developing ? They are testing changes to the libraries they are developing. They are depending upon only the last/next released version of boost. But no development is taking place on 'stable'. Why test against it (for other purposes than preparing a release) ?
I don't know if anyone else is having trouble with this, but I can't keep track of the definitions of terms such as "stable" in this discussion, and I suspect several of the participants may be using the same terms to mean different things. Is there a glossary somewhere? If not, would someone (preferably Beman, who has been at the center of this effort from the beginning) put one on the wiki?
Will do. --Beman

Robert Ramey wrote:
Its A LOT LESS work for the developer. Under the current (old) system every time a test failed I would have to investigate whether it was due to an error new error in my library or some change/error in something that the library depended up. It consumed waaaaay too much time. I gave up commiting changes except on a very infrequent basis. Turns out that the failures still occurred but I knew they weren't mine so I could ignore them. Bottom line - testing was a huge waste of time providing no value to a library developer.
A situation possible under the proposed system is:

- You develop things on your branch. When your feature is ready, you merge from trunk. Suddenly half of the tests in your library fail. The merge brought changes in about 100 different files, and you have to figure out what's up.

With the current system, you'd get a failure whenever the problematic change is checked in. So, you'll know that some commit between 1000 and 1010 broke your library, and it's easy to find out the offending commit from that.

In other words, in the current system, if some other library breaks yours, you find out about it immediately, and can take action. In the new system, you'll find out about it only when your feature is done -- which is more inconvenient. You can try to work around this by frequently merging from trunk, but it won't quite work. Trunk receives bulk updates. So, if the other developer did 100 changes on his branch and merged, you'll only have the chance to test all those 100 changes when they are merged to trunk.

- Volodya

On Aug 2, 2007, at 12:51 PM, Vladimir Prus wrote:
In other words, in the current system, if some other library breaks yours, you find out about it immediately, and can take action. In the new system, you'll find out about it only when your feature is done -- which is more inconvenient. You can try to work around this by frequently merging from trunk, but it won't quite work. Trunk receives bulk updates. So, if the other developer did 100 changes on his branch and merged, you'll only have the chance to test all those 100 changes when they are merged to trunk.
Volodya is absolutely correct. Delaying integration of new work by using more branches will not fix problems; it just delays them.

Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*. Lots of other projects, many larger than Boost, work perfectly well with the same or similar processes because their tools work better.

What doesn't work? Regression testing. Thomas Witt has pointed out the myriad problems with our testing setup that affected the Boost 1.34.0 release. He should know: he managed the 1.34.x release series. I hit exactly the same problems when I managed the 1.33.x release series. Report generation stalls every few days, cycle times are horrible, it's impossible to isolate which checkins caused failures, and we only really have our testers testing one thing at a time. So either we aren't stabilizing a release (because we're testing the trunk) or the trunk has turned into an untested wild west because we *are* stabilizing a release. That wild west went on for a *year* while we were stabilizing the 1.34.0 release, so our trunk is, of course, a mess.

At one point, I thought we could fix this problem with a stable branch based on 1.34.1, from which future releases would occur. Now, I'm convinced that is the absolutely wrong approach. It means that "trunk" and "stable" would be forever divergent, and would rely on manual merges to get features into stable. That's a recipe for unwanted surprises, because library authors---who typically work from the trunk---are going to forget to merge features and bug-fixes (including the test cases for those things) to the stable branch, and BOOM! No progress. It's more work in the long run to require so many small merges, and it really is just a way to avoid doing what we really must do: fix the trunk. If our trunk were well-tested, release branches would be short-lived and the risk of divergence (features/fixes not making it between branch and trunk) would be minimized. Plus, developers wouldn't need to manually merge anything *except* the few things that are needed for those short-lived release branches. Since we now have Subversion, svnmerge.py can make it relatively easy to deal with those merges.

- Doug

P.S. Here are some of the many things I could have done that would have been more productive than writing the message above:
1) Made regression.py work with Subversion, so that we would be performing regression testing on the trunk.
2) Looked at the changes made on the RC_1_34_0 branch to determine which ones can be merged back to the trunk.
3) Fixed some of the current failures on the trunk.
4) Set up a new nightly regression tester.
5) Studied Dart2 to see how we can make it work for Boost.
6) Investigated the problems with incremental testing.
7) Improved the existing test reporting system to track Subversion revisions associated with test runs, and link to those revisions in the Trac.
8) Improved the existing test reporting system to track changes from day to day.

Before I reply to any messages in this thread, I'll be thinking about that list. Will you?

P.P.S. I know I sound grumpy, because I am. The amount of time we have collectively spent discussing policies would have been far more wisely used improving the tools we have.

Doug Gregor wrote:
On Aug 2, 2007, at 12:51 PM, Vladimir Prus wrote:
In other words, in the current system, if some other library breaks yours, you find about that immediately, and can take action. In the new system, you'll find about that only when your feature is done -- which is more inconvenient. You can try to workaround this by frequently merging from trunk, but it won't quite work. Trunk receives bulk updates. So, if the other developer did 100 changes on this branch and merged, you'll only have the chance to test all those 100 changes when they are merged to trunk.
Volodya is absolutely correct. Delaying integration of new work by using more branches will not fix problems; it just delays them.
Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*. Lots of other projects, many larger than Boost, work perfectly well with the same or similar processes because their tools work better.
I'd disagree -- there's one bit where our process is not broken, it's nonexistent. The important aspect of Boost is that we have lots of automated tests, on lots of different configurations, and there's the goal of no regressions. This is a very strict goal. At the same time we don't have any equally strict, or even written down, bug-triage-and-developer-pinging process. A process that makes sure that:

(1) Every issue found by a regression tester or reported by a user is assigned to the right person and to the right release.
(2) By the time the right release should be made, the issue is either fixed, or it is made clear it cannot be fixed.
(3) The chances that a critical bug is fixed are higher than for a minor nuisance.

The current process basically expects library authors to do all that. But:

1. We have issues with "None" as component and as owner.
2. Not all authors start the day by looking at issues in Trac, so they might miss important issues.
3. An author might just forget about an important issue, or just disappear.

As a result, we used to have some regressions present for months, without any apparent work being done on them.

So, where is my proposal for a good process? There is none. Many projects have such a bug-triage-and-developer-pinging process, so we don't have to invent anything; it only takes a volunteer who will manage such a process.

Ah, and BTW -- if the branch-based proposal is adopted, somebody should volunteer to integrate changes to stable, and be ready to integrate several patches per day.
P.S. Here are some of the many things I could have done that would have been more productive than writing the message above: 1) Made regression.py work with Subversion, so that we would be performing regression testing on the trunk. 2) Looked at the changes made on the RC_1_34_0 branch to determine which ones can be merged back to the trunk. 3) Fixed some of the current failures on the trunk. 4) Setup a new nightly regression tester. 5) Studied Dart2 to see how we can make it work for Boost 6) Investigated the problems with incremental testing.
That's something long overdue, and I can probably fix it. The question is -- will process_jam_logs remain? If not, I'd rather not spend time making changes that will have to be redone.
7) Improved the existing test reporting system to track Subversion revisions associated with test runs, and link to those revisions in the Trac. 8) Improved the existing test reporting system to track changes from day to day
I'd add
8.1) Implemented a mechanism to record the revision in which a failure first occurred, and the previous revision where the test passed.
Before I reply to any messages in this thread, I'll be thinking about that list. Will you?
I think this is a good list -- it's likely to have more direct effect than any process discussion. - Volodya

Vladimir Prus wrote:
Doug Gregor wrote:
On Aug 2, 2007, at 12:51 PM, Vladimir Prus wrote:
P.S. Here are some of the many things I could have done that would have been more productive than writing the message above:
Oh, how I have avoided responding in these recent threads, exactly for that reason...
1) Made regression.py work with Subversion, so that we would be performing regression testing on the trunk.
I'll get to it <http://svn.boost.org/trac/boost/ticket/1122> this weekend, unless someone wants to grab that task from me now.
2) Looked at the changes made on the RC_1_34_0 branch to determine which ones can be merged back to the trunk.
Volodya and I are going to do some of that for Boost.Build next week.
3) Fixed some of the current failures on the trunk.
4) Set up a new nightly regression tester.
5) Studied Dart2 to see how we can make it work for Boost.
Noel and I are still trying to figure out how to make Boost.Build output the needed XML files directly. I think we need some help from Volodya at this point since the core support is done in bjam and we are both a bit lost in how to approach the problem in Boost.Build.
6) Investigated the problems with incremental testing.
That's something long overdue, and I can probably fix it.
I've been looking at this as a prelude to #5 above.
The question is -- will process_jam_logs remain?
No. My goal is to make Boost.Build itself generate the XML result fragment files directly. This should now be possible, as bjam supports capturing the output of all actions it runs and calling a rule with all the information about the action, like target, timing, output, etc. I've been hacking at testing.jam to see how to make it generate the XML files. But this is not something that's in my top-5 to-do list :-\
If not, I'd rather not spend time doing changes that will have to be redone.
7) Improved the existing test reporting system to track Subversion revisions associated with test runs, and link to those revisions in the Trac.
8) Improved the existing test reporting system to track changes from day to day.
Those are precisely what BuildBot was designed for. I've been talking with Stefan about what needs to get set up to start using it ASAP. This is another task I have for myself this weekend. But BuildBot is not a complete solution. It is not a reporting system, as I mentioned at BoostCon. But in combination with Dart2 we might be able to get close.
I'd add
8.1) Implemented a mechanism to record the revision in which a failure first occurred, and the previous revision where the test passed.
Which is easy if you record *all* past test results and the revision they tested. And Dart2 does some of this, although not as nicely as I'd like.
Before I reply to any messages in this thread, I'll be thinking about that list. Will you?
I think this is a good list -- it's likely to have more direct effect than any process discussion.
:-) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Rene Rivera wrote:
Vladimir Prus wrote:
I'd add
8.1) Implemented a mechanism to record the revision in which a failure first occurred, and the previous revision where the test passed.
PS. 9) Clean up the tags and branches in the new svn code import. I did some of that for Boost.Build, Boost.Jam, and Quickbook last night. For example: http://svn.boost.org/trac/boost/browser/branches/build http://svn.boost.org/trac/boost/browser/branches/jam http://svn.boost.org/trac/boost/browser/branches/quickbook http://svn.boost.org/trac/boost/browser/tags/jam -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
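This kind of cleanup can be done entirely server-side with URL-to-URL moves, which are cheap in Subversion and preserve history. Something like the following (the stray branch name here is made up for illustration):

svn mv -m "Group stray jam branch under branches/jam" \
  https://svn.boost.org/svn/boost/branches/jam_old_experiment \
  https://svn.boost.org/svn/boost/branches/jam/old_experiment

The move is a single atomic commit, and the old location stays reachable in history via peg revisions (URL@REV).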

on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
there's one bit where our process is not broken -- it's nonexistent. The important aspect of Boost is that we have lots of automated tests, on lots of different configurations, and there's the goal of no regressions. This is a very strict goal.
At the same time we don't have any equally strict, or even written down bug-triage-and-developer-pinging process.
I agree that we could improve in that area, but it doesn't have much to do with our long release cycle. The bugs that held up our last release showed up in our regression tests, not in our bug tracker. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
there's one bit where our process is not broken -- it's nonexistent. The important aspect of Boost is that we have lots of automated tests, on lots of different configurations, and there's the goal of no regressions. This is a very strict goal.
At the same time we don't have any equally strict, or even written down bug-triage-and-developer-pinging process.
I agree that we could improve in that area, but it doesn't have much to do with our long release cycle.
I think it's one of the primary problems causing the long release cycle.
The bugs that held up our last release showed up in our regression tests, not in our bug tracker.
This difference is not important -- regressions are trivially convertible into bugs in a bug tracker; and clearly regressions must be tracked somehow. And the long release cycle is a direct result of:
1. Wanting zero regressions.
2. Library authors sometimes being not available, and there being no pinging process.
3. Having no time window for fixing.
So we end up with a regression, and all we know is that there's a regression. We do not know if this issue is being worked on, or if the library author will have time only in N days, or if the library author needs help from a platform expert (who will have time only in N days). The library author, in turn, might have little motivation to fix a single regression on an obscure platform if he feels that there are 100 other regressions that are not being worked on. We actually had examples of such proactive release management in the past, and it worked well, but it's clearly time consuming. So one possible solution is to:
1. Document the process.
2. Distribute it in time -- for example, if we have a single 'development' branch, we can record all regressions that appear on the branch and demand that they are fixed in a month, or the offending commit reverted.
3. Distribute it over people -- instead of having one release manager doing all the work, we can have "bug masters" that will focus on regressions in a subset of platforms, or a subset of libraries. - Volodya

----- Original Message ----- From: Vladimir Prus <ghost@cs.msu.su> Date: Friday, August 3, 2007 6:06 pm Subject: Re: [boost] [SVN]Best Practices for developers using SVN To: boost@lists.boost.org [...]
We actually had examples of such proactive release management in the past, and it worked well, but it's clearly time consuming. So one possible solution is to
1. Document the process.
2. Distribute it in time -- for example, if we have a single 'development' branch, we can record all regressions that appear on the branch and demand that they are fixed in a month, or the offending commit reverted.
3. Distribute it over people -- instead of having one release manager doing all the work, we can have "bug masters" that will focus on regressions in a subset of platforms, or a subset of libraries.
This last point -- having platform gurus that roam the regression landscape in search of bugs to fix in their area of platform expertise -- is something I really think we should encourage in an explicit manner. Especially in abandoned libs, which are not actively evolved, new regressions are typically a side effect of distant changes in other libs, and these kinds of problems are usually not that hard to fix even for people unacquainted with the code. I've done some routine fixing for libs other than mine for MSVC++ 6.0, and, hey, it's even moderately fun. So maybe we should issue a call for platform gurus or somehow make this role official within the community. Joaquín M López Muñoz Telefónica, Investigación y Desarrollo

Volodya, Vladimir Prus wrote:
David Abrahams wrote:
on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
We actually had examples of such proactive release management in the past, and it worked well, but it's clearly time consuming. So one possible solution is to
For one thing it does not scale. The more important part that you are missing is: We have zero leverage over library developers. Let me repeat this: We have ZERO LEVERAGE over library developers. Any approach that relies on people doing things when asked is doomed.
1. Document the process.
2. Distribute it in time -- for example, if we have a single 'development' branch, we can record all regressions that appear on the branch and demand that they are fixed in a month, or the offending commit reverted.
3. Distribute it over people -- instead of having one release manager doing all the work, we can have "bug masters" that will focus on regressions in a subset of platforms, or a subset of libraries.
There were many documented and distributed processes in the past. Nobody reads the FM. And to be honest I can't even blame people. Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Volodya,
Vladimir Prus wrote:
David Abrahams wrote:
on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
We actually had examples of such proactive release management in the past, and it worked well, but it's clearly time consuming. So one possible solution is to
For one thing it does not scale. The more important part that you are missing is:
We have zero leverage over library developers.
Let me repeat this:
We have ZERO LEVERAGE over library developers.
You don't need to repeat this twice, and I'm not missing it. This is common in open source. For example, the gcc release manager has no leverage over the large majority of gcc developers. Yet the gcc release process, where periodic status updates are posted and specific persons are pinged to fix specific critical issues, works. And much more efficiently than waiting for bugs to disappear.
Any approach that relies on people doing things when asked is doomed.
We actually had some experience ourselves. In particular, I think the release managed by Aleksey Gurtovoy (1.32.*) had such a proactive approach, and if I remember correctly, it worked well. I think 1.33.*, managed by Doug Gregor, was also more proactive. - Volodya

Thomas Witt wrote:
Volodya,
Vladimir Prus wrote:
David Abrahams wrote:
on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
We actually had examples of such proactive release management in the past, and it worked well, but it's clearly time consuming. So one possible solution is to
For one thing it does not scale. The more important part that you are missing is:
We have zero leverage over library developers.
Let me repeat this:
We have ZERO LEVERAGE over library developers.
you have no leverage because of the way the task of Release Manager has been defined. The current task of the Release Manager is to get out a boost quality release. In order to accomplish this task he has to do a lot of stuff. Under Beman's proposal, the task of Release Manager will change in a subtle but important way. One starts with a boost quality release - 1.34.0. The job of the release manager is to only allow merges into the next release which have been demonstrated to yield an improvement. It's up to the developer to provide the tests and test runs which prove that his next "oeuvre" is up to snuff. If the developer makes his case, the "Release Manager" lets the developer merge in his changes and run the final tests. If he approves, he pushes the button which makes the tarballs, etc. BTW - I think this change is a done deal. First, I don't think anyone will volunteer to manage the next release given the amount of effort you had to invest. Second, even if someone does, given the increase in libraries and the current state of the trunk, I think the job will take too much time to finish, and even if it gets finished, will be so late as to be irrelevant. Of course that's just my uninformed opinion. As I have never managed the release of a large open source project myself, I'm perhaps overly intimidated by such a prospect. Robert Ramey

Thomas Witt wrote:
We have zero leverage over library developers.
Let me repeat this:
We have ZERO LEVERAGE over library developers.
Any approach that relies on people doing things when asked is doomed.
So what do you recommend? There are two options: addressing the zero-leverage problem by using the threat of not including a library in a release, or somehow doing a release without depending on the library developers. Which one would you pick?

I haven't read the entire chain of emails, so I might be way off here. I apologize if that's the case. But think about it. Why do people write/maintain these libraries? Usually because it's something they believe in. If you convince them that the proposed changes will increase adoption/usage, I think that would be sufficient motivation. Threatening just seems like a bad idea and maybe a sign of insufficient communication. Jigish Sent from my iPhone On Aug 3, 2007, at 3:04 PM, "Peter Dimov" <pdimov@pdimov.com> wrote:
Thomas Witt wrote:
We have zero leverage over library developers.
Let me repeat this:
We have ZERO LEVERAGE over library developers.
Any approach that relies on people doing things when asked is doomed.
So what do you recommend? There are two options: addressing the zero-leverage problem by using the threat of not including a library in a release, or somehow doing a release without depending on the library developers. Which one would you pick?

on Fri Aug 03 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
David Abrahams wrote:
on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
there's one bit where our process is not broken -- it's nonexistent. The important aspect of Boost is that we have lots of automated tests, on lots of different configurations, and there's the goal of no regressions. This is a very strict goal.
At the same time we don't have any equally strict, or even written down bug-triage-and-developer-pinging process.
I agree that we could improve in that area, but it doesn't have much to do with our long release cycle.
I think it's one of the primary problems causing the long release cycle.
Interesting.
The bugs that held up our last release showed up in our regression tests, not in our bug tracker.
This difference is not important --
Okay... well, developer pinging can and should be automated. We already have a "strict" mechanism for it in place. Maybe it could be better; I don't know. What kind of bug triage process do you think we should have?
regressions are trivially convertible into bugs in a bug tracker; and clearly regressions must be tracked somehow.
I guess the problem is that such conversions are hard to effectively automate. One mistake in a library could turn into 50 test failures; if they all look the same, you'd probably only want one ticket.
And the long release cycle is a direct result of:
1. Wanting zero regressions. 2. Library authors sometimes being not available, and there being no pinging process.
There certainly is a pinging process. Don't you get the "there are bugs in one or more libraries you maintain" messages?
3. Having no time window for fixing.
Do you mean http://article.gmane.org/gmane.comp.lib.boost.devel/158259 ? If not, what is a "time window" and how would a time window help?
So we end up with a regression, and all we know is that there's a regression. We do not know if this issue is being worked on, or if the library author will have time only in N days, or if the library author needs help from a platform expert (who will have time only in N days). The library author, in turn, might have little motivation to fix a single regression on an obscure platform if he feels that there are 100 other regressions that are not being worked on.
It's a plausible scenario that probably happens sometimes, but do you have any evidence at all that it's at the heart of the long release cycle?
We actually had examples of such proactive release management in the past, and it worked well, but it's clearly time consuming. So one possible solution is to
1. Document the process.
I guess that would be helpful, since you seem to have some ideas that differ from what we've been doing, but haven't been specific about them. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

Doug Gregor wrote:
Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*.
That's the issue. It's the process that's broken. No amount of improvement in the tools will fix it.
What doesn't work? Regression testing.
This is true. And it can never be made to work and be useful under the current procedures.
- Doug
P.S. Here are some of the many things I could have done that would have been more productive than writing the message above: 1) Made regression.py work with Subversion, so that we would be performing regression testing on the trunk.
wouldn't be necessary under the new proposal.
2) Looked at the changes made on the RC_1_34_0 branch to determine which ones can be merged back to the trunk.
wouldn't be necessary under the new proposal.
3) Fixed some of the current failures on the trunk.
wouldn't be necessary under the new proposal.
4) Set up a new nightly regression tester.
wouldn't be necessary under the new proposal.
5) Studied Dart2 to see how we can make it work for Boost
Hmm I don't know what this is. But another tool won't make a difference.
6) Investigated the problems with incremental testing.
Of course any improvement is welcome. But the current tools work well enough. They are not the bottleneck.
7) Improved the existing test reporting system to track Subversion revisions associated with test runs, and link to those revisions in the Trac.
wouldn't be necessary under the new proposal.
8) Improved the existing test reporting system to track changes from day to day
wouldn't be necessary under the new proposal.
Before I reply to any messages in this thread, I'll be thinking about that list. Will you? P.P.S. I know I sound grumpy, because I am. The amount of time we have collectively spent discussing policies would have been far more wisely used to improve the tools we have.
You guys have been making a heroic effort and in no way do I want to denigrate your efforts. But I think you're shoveling against the tide. We have more problems now than before, not because someone isn't working hard enough or putting in enough time. The current process is fundamentally flawed in that it doesn't scale. Working harder with better tools isn't going to produce the improvements that re-thinking the process will. Of course the beauty of this is we really don't all have to agree. You're free to improve the tools for trunk testing and the like, and those of us who want to are free to use branches for development. If we can't agree to move to smaller incremental releases - well, we can do what Joaquin has done - post incremental improvements on a library by library basis. We'll see how it works out. Robert Ramey

on Thu Aug 02 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Of course the beauty of this is we really don't all have to agree. You're free to improve the tools for trunk testing and the like and those of us who want to are free to use branches for development.
You could have been using branches for development all along. I do it often. It helps me get work done without worrying about other peoples' changes, and gives me a place to check in my work at intermediate points when it isn't ready for release. However, it doesn't change anything fundamental about the release process. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
on Thu Aug 02 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Of course the beauty of this is we really don't all have to agree. You're free to improve the tools for trunk testing and the like and those of us who want to are free to use branches for development.
You could have been using branches for development all along. I do it often. It helps me get work done without worrying about other peoples' changes, and gives me a place to check in my work at intermediate points when it isn't ready for release. However, it doesn't change anything fundamental about the release process.
As a practical matter, that's what a number of us are effectively doing. We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway. And you're correct, this doesn't change the fundamental release procedures. It keeps the release procedures from making our own lives difficult. So from an individual developer's standpoint, it's not really that great a problem anymore. Except for the tools we have to use - which is a whole other thread. Robert Ramey

Robert Ramey wrote:
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway.
How does this actually work? I have been proposing changes to Boost.Build (v2) to allow an individual library to be built/tested against an installed (or at least, external) boost tree providing the prerequisites. Up to now, this isn't quite possible, IIUC, so your statement makes me wonder how you achieved this. This is indeed one of the most important (yet seemingly simple to implement) changes I would like to see in the testing procedure. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Robert Ramey wrote:
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway.
How does this actually work? I have been proposing changes to Boost.Build (v2) to allow an individual library to be built/tested against an installed (or at least, external) boost tree providing the prerequisites.
Err, what changes do you want and why: svn sw <some-branch> <regular testing procedure> is not adequate? - Volodya
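Spelled out, the suggestion amounts to something like this (a sketch only; the branch name is hypothetical):

# at the root of an existing working copy
svn switch https://svn.boost.org/svn/boost/branches/my_feature
# run the library's regular tests, e.g.
cd libs/serialization/test
bjam --toolset=gcc
# afterwards, point the working copy back at the trunk
svn switch https://svn.boost.org/svn/boost/trunk

Since svn switch only rewrites the files that differ between the two lines, this is much cheaper than a fresh checkout.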

Vladimir Prus wrote:
Stefan Seefeld wrote:
Robert Ramey wrote:
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway.
How does this actually work ? I have been proposing changes to Boost.Build (v2) to allow an individual library to be built / tested against an installed (or at least, external) boost tree providing the prerequisites.
Err, what changes do you want and why:
svn sw <some-branch> <regular testing procedure>
That's what I'm doing now. I'm not sure what <regular testing procedure> contains - but the closest thing I've found is Rene's runtest.sh script in regression/tools/, and basically it's fine as far as it goes. I want a different report - see other post. I'm not crazy about having to edit the environment variables for each installation. But these are minor quibbles. I'm using a variation of this now. It just needs to be tweaked and documented so that users can validate their own installations.
is not adequate?
So all this is fine - the only thing I can't do is use it to test my library with someone else's compiler on their machine. Of course that will soon change. If I develop on the branch, all I will have to do is ask a tester to switch to the branch and run the tests for the serialization library. I am expecting a solution to this to appear spontaneously at any time. So we will be there regardless of what happens to the current trunk. I had thought I read that we were going to start with RC_1_34, and I was just hoping for some sort of guidance and/or consensus as to branching and naming. Questions like:
a) Should branches be made off of the RC_1_34 root or from the library directories themselves? I think it's the former, but my SVN experience is a little sketchy here.
b) Is there a way to do a read-only checkout of the release branch and then switch the checkout of the directories I'm working with to read/write? This to avoid accidentally checking in a change I didn't mean to, or accidentally checking back into the wrong branch.
c) Would it be convenient to adopt some sort of naming convention like "serialization_next" ... for branches of each library? (One possible answer to (a) and (c) is sketched after this message.)
d) Would the "development branches" stay around after being merged into the Next Release branch? I would think it convenient that they would (after applying a tag), but there may be a reason why this wouldn't be a good idea.
e) Suppose I wanted to check in some small changes into something which is outside the serialization library. Currently I would like to check in the source to my Library Status program, along with scripts and instructions for running it, to the web page. I previously asked if anyone had any objection to this and got no answer, so it should be OK. Of course now I don't know where I would check such a thing in. If Doug wants to resurrect the trunk, I could well check it in there - but then I would be subjected to the ridicule of my peers - with some justification. But there is no branch for boost/.../regression and I'm sort of reluctant to do this since, as things stand now, it wouldn't get tested or integrated.
Regardless of what happens, there is still some value in agreement/suggestions as to what practices/policies should be used with the new SVN system. Robert Ramey
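On (a) and (c), one plausible answer (a sketch under assumptions: the RC_1_34_0 branch path and the serialization_next name are illustrative, not an agreed convention) is to branch the whole tree, because svn copy is a cheap, constant-time, server-side operation, and to use a per-library branch name:

# create a development branch from the release branch, entirely server-side
svn copy -m "Start serialization development branch" \
  https://svn.boost.org/svn/boost/branches/RC_1_34_0 \
  https://svn.boost.org/svn/boost/branches/serialization_next

# then point an existing working copy at it
svn switch https://svn.boost.org/svn/boost/branches/serialization_next

Because the copy is lazy on the server, branching the whole tree costs no more than branching a single library directory, which argues for the former option in (a).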

Vladimir Prus wrote:
Stefan Seefeld wrote:
Robert Ramey wrote:
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway. How does this actually work? I have been proposing changes to Boost.Build (v2) to allow an individual library to be built/tested against an installed (or at least, external) boost tree providing the prerequisites.
Err, what changes do you want and why:
svn sw <some-branch> <regular testing procedure>
is not adequate?
This assumes that branches are self-contained copies (in the copy-on-write sense) of the whole boost tree. I'm talking about boost component A, compiled and tested against (a variety of) boost prerequisite components B that are built (and possibly installed) separately. This logical and technical separation between A and B is the kind of modularity I have in mind. Whether that actually leads to separate release cycles for A and B is an entirely different (and for the most part non-technical) matter. And I believe something similar is what Robert was alluding to when talking about "running development tests on our local system against the latest release". Correct? Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
This logical and technical separation between A and B is the kind of modularity I have in mind. Whether that actually leads to separate release cycles for A and B is an entirely different (and for the most part non-technical) matter.
And I believe something similar is what Robert was alluding to when talking about "running development tests on our local system against the latest release". Correct?
Correct. Robert Ramey
Thanks, Stefan

Robert Ramey wrote:
Stefan Seefeld wrote:
This logical and technical separation between A and B is the kind of modularity I have in mind. Whether that actually leads to separate release cycles for A and B is an entirely different (and for the most part non-technical) matter.
And I believe something similar is what Robert was alluding to when talking about "running development tests on our local system against the latest release". Correct?
Correct.
Well, you didn't tell us how you do that. Building a library against an external boost tree (installed or not) isn't quite supported yet. Have you modified boost.build locally to fit this purpose? Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Robert Ramey wrote:
Stefan Seefeld wrote:
This logical and technical separation between A and B is the kind of modularity I have in mind. Whether that actually leads to separate release cycles for A and B is an entirely different (and for the most part non-technical) matter.
And I believe something similar is what Robert was alluding to when talking about "running development tests on our local system against the latest release". Correct?
Correct.
Well, you didn't tell us how you do that. Building a library against an external boost tree (installed or not) isn't quite supported yet. Have you modified boost.build locally to fit this purpose?
It's amazingly simple. Here is how I currently do it. I haven't switched over to SVN yet, so it's couched in terms of CVS. I synced my local CVS with the RC_1_34_1 branch. I did this starting at the root, so my whole system was on this branch. I use CVS to switch three directories, libs/serialization, boost/archive and boost/serialization, to the HEAD, which has been the current boost practice. Strictly speaking, this isn't necessary until I try to check something in. But doing it now prevents me from later checking in changes to the release branch, which I once accidentally did. I run tests from inside libs/serialization/test using the short script described in the previous email. This builds required libraries, builds the serialization library, runs all the tests and updates the table of all the latest results. Edit source code and repeat as necessary. When my results table is noticeably improved, I check in to the HEAD in accordance with current boost practices. When I get a set of local test results (against the current boost release) that I like, I'll post the versions of the cited three directories as a zip file on my web site. That way anyone who wants the version with "no known bugs" can get that, and I'll only have to deal with newer issues. This will cut the time between discovery of a bug (or implementation of a small enhancement) and its availability down to around 30 days, rather than the 1 1/2 years it is now. Sooooooooooooo: this is the context of my original question about "Best Practices for SVN". I've been using SVN on my local machine for a few weeks and find it to be quite good. Though CVS was good enough for my purposes, I do find SVN easier to use, better documented, and all around better. So I'm happy about the change. It's not enough to have a new tool. One has to decide on the best way to employ it for his situation - I spent a little time doing that and it has helped me a lot. Robert Ramey
Thanks, Stefan
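For what it's worth, a rough SVN translation of the workflow Robert describes might look like this (a sketch; the RC_1_34_1 branch path is an assumption, not verified against the new import):

# baseline: the latest release branch
svn checkout https://svn.boost.org/svn/boost/branches/RC_1_34_1 boost
cd boost
# switch only the serialization directories to the development line
svn switch https://svn.boost.org/svn/boost/trunk/boost/archive boost/archive
svn switch https://svn.boost.org/svn/boost/trunk/boost/serialization boost/serialization
svn switch https://svn.boost.org/svn/boost/trunk/libs/serialization libs/serialization
# edit and test as usual; commits from the switched directories go to the
# development line, not to the release branch

Because the three directories are switched, an accidental commit lands on the development line rather than on the release branch, which guards against the mistake described above.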

Stefan Seefeld wrote:
Robert Ramey wrote:
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway.
How does this actually work? I have been proposing changes to Boost.Build (v2) to allow an individual library to be built/tested against an installed (or at least, external) boost tree providing the prerequisites. Up to now, this isn't quite possible, IIUC, so your statement makes me wonder how you achieved this.
I have made a variation of the compiler status program - called library status. It generates a table for one library which includes ALL the build variants for that library rather than just one. You can see the output of this program at www.rrsd.com - follow the boost link. I run it from my serialization/test directory with the following shell script:

if test $# -eq 0
then
    echo "Usage: $0 <bjam arguments>"
    echo "Typical bjam arguments are:"
    echo " --toolset=msvc-7.1,gcc"
    echo " variant=debug,release,profile"
    echo " link=static,shared"
    echo " threading=single,multi"
else
    bjam --dump-tests "$@" >bjam.log 2>&1
    process_jam_log --v2 <bjam.log
    library_status library_status.html links.html
fi

I needed this because the regression tests don't test all the combinations I require (debug/release, link=static/shared), and the current display tools only display one combination in any case. Robert Ramey
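For instance, saving the script above as library_test.sh (the file name is invented for illustration), a run covering the variants mentioned would be:

sh library_test.sh --toolset=msvc-7.1,gcc variant=debug,release link=static,shared

which leaves the per-variant results table in library_status.html.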

on Fri Aug 03 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
David Abrahams wrote:
on Thu Aug 02 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Of course the beauty of this is we really don't all have to agree. You're free to improve the tools for trunk testing and the like and those of us who want to are free to use branches for development.
You could have been using branches for development all along. I do it often. It helps me get work done without worrying about other peoples' changes, and gives me a place to check in my work at intermediate points when it isn't ready for release. However, it doesn't change anything fundamental about the release process.
As a practical matter, that's what a number of us are effectively doing.
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway.
Of course there's value:
* If you are suddenly killed or your server implodes, your intermediate work is preserved.
* You can collaborate with other Boosters through the repository.
* Merging is easier and more reliable (using svnmerge.py, or the upcoming svn 1.5) because the revision control system knows where everything came from and where it's going.
* Other people can observe and/or coordinate with development in process.
And you're correct, this doesn't change the fundamental release procedures. It keeps the release procedures from making our own lives difficult.
And how did release procedures ever make our lives as developers difficult?
So from an individual developer's standpoint, its not really that great a problem anymore.
What isn't a problem?
Except for the tools we have to use - which is a whole other thread.
The one we ought to be spending keystrokes on. Or, better yet, work cycles. http://www.flynnfiles.com/archives/world_events2007/roll_up_your_sleeves.htm... -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
We're running development tests on our local system against the latest release. There is currently no real value in creating a branch because that branch is never going to get tested anywhere besides one's local machine anyway.
Of course there's value:
* If you are suddenly killed or your server implodes, your intermediate work is preserved.
* You can collaborate with other Boosters through the repository.
* Merging is easier and more reliable (using svnmerge.py, or the upcoming svn 1.5) because the revision control system knows where everything came from and where it's going.
* Other people can observe and/or coordinate with development in process.
And you're correct, this doesn't change the fundamental release procedures. It keeps the release procedures from making our own lives difficult.
And how did release procedures ever make our lives as developers difficult?
The long release cycle means that I have several sets of changes: fixes for the current release, and others which are in the head but haven't been tested (except on my machine).
So from an individual developer's standpoint, its not really that great a problem anymore.
What isn't a problem?
If the trunk is going to be used as it has been - it will continue to be a problem for boost but not for me personally as I don't use it for testing.
Except for the tools we have to use - which is a whole other thread.
The one we ought to be spending keystrokes on. Or, better yet, work cycles. http://www.flynnfiles.com/archives/world_events2007/roll_up_your_sleeves.htm...
Well, actually, I have made a modest contribution to the toolset with my library status program. Admittedly, it's not a large one, but it did address some of my issues. Of course, not everyone will find it useful, but that's the way it is with everything. I don't find testing on the trunk useful. The suggestion seems to have been made that I'm part of the problem because I've brought up the issue that things aren't working. Dream on. The situation isn't going to improve until changes are made, and changes aren't going to be made until someone points them out and makes the case. Beman's Proposal points them out and makes the case. If people don't want to discuss it, that's fine. I'm just responding to arguments that Beman's Proposal won't work, won't address the issue, etc. If one thinks that the discussion isn't helpful, then don't make the argument in the first place. My posts (except for the very first) are only responses to what I see as ill-founded arguments against Beman's proposal. Robert Ramey

on Fri Aug 03 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
And how did release procedures ever make our lives as developers difficult?
The long release cycle means that I have several sets of changes: fixes for the current release, and others which are in the head but haven't been tested (except on my machine).
And how does that make your life difficult?
So from an individual developer's standpoint, its not really that great a problem anymore.
What isn't a problem?
If the trunk is going to be used as it has been - it will continue to be a problem
What is "it?" The trunk? The trunk is a problem?
for boost but not for me personally as I don't use it for testing.
Except for the tools we have to use - which is a whole other thread.
The one we ought to be spending keystrokes on. Or, better yet, work cycles. http://www.flynnfiles.com/archives/world_events2007/roll_up_your_sleeves.htm...
Well, actually, I have made a modest contribution to the toolset with my library status program. Admittedly, it's not a large one, but it did address some of my issues. Of course, not everyone will find it useful, but that's the way it is with everything. I don't find testing on the trunk useful.
The suggestion seems to have been made that I'm part of the problem because I've brought up the issue that things aren't working.
The suggestion seems to have been made (passive voice is a great way to accuse without being accused of accusing) that I'm suggesting you're part of the problem. 8-{ Don't take everything so personally. I don't have time to single anyone out for blame right now. If you want that, catch up with me in a few months; things may have cooled down for me by then ;-) There's certainly nothing wrong with raising the issue that things aren't working, and it's easy to think the process needs a complete overhaul... and maybe it even does. As Thomas said, though, you can't build an effective release process atop systems that are so unreliable. As Doug points out, many much larger projects function effectively with processes that are very close to what we're using. When it comes to release requirements, Boost is not so different from those other projects, so it ought to be possible to make our process work through a series of small, nonconvulsive adjustments. I find this logic very compelling. Therefore I want to fix the systems before making any adjustments to the process, and I will argue against attempts to make changes (especially large ones) in the process before the systems are fixed. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

Hi, Doug Gregor wrote:
On Aug 2, 2007, at 12:51 PM, Vladimir Prus wrote:
Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*. Lots of other projects, many larger than Boost, work perfectly well with the same or similar processes because their tools work better.
What doesn't work? Regression testing.
FWIW I wholeheartedly agree with Doug's post, all of it. To me the degree to which some people are either denying or disregarding hard-earned experience leaves me speechless. Thomas -- Thomas Witt witt@acm.org

on Thu Aug 02 2007, Doug Gregor <dgregor-AT-osl.iu.edu> wrote:
Volodya is absolutely correct. Delaying integration of new work by using more branches will not fix problems; it just delays them.
True, but delaying these problems can make for a sane environment in which one can develop a feature without being upset every few days while someone else works on correcting the bugs he's checked in.
Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*. Lots of other projects, many larger than Boost, work perfectly well with the same or similar processes because their tools work better.
I'm 100% convinced the tools are broken. I'm only about 50% convinced that the process isn't (or is, if you prefer) broken.
What doesn't work? Regression testing.
Thomas Witt has pointed out the myriad problems with our testing setup that affected the Boost 1.34.0 release. He should know: he managed the 1.34.x release series. I hit exactly the same problems when I managed the 1.33.x release series. Report generation stalls every few days, cycle times are horrible, it's impossible to isolate what checkins caused failures, and we only really have our testers testing one thing at a time. So either we aren't stabilizing a release (because we're testing the trunk) or the trunk has turned into an untested wild-west because we *are* stabilizing a release. That wild-west went on for a *year* while we were stabilizing the 1.34.0 release, so our trunk is, of course, a mess.
1000% agreed.
At one point, I thought we could fix this problem with a stable branch based on 1.34.1, from which future releases would occur. Now, I'm convinced that is the absolutely wrong approach.
Are you saying that 1.35 shouldn't be based on 1.34.1, or is it something else? You said yourself that our trunk is a mess.
It means that "trunk" and "stable" would be forever divergent, and would rely on manual merges to get features into stable.
Again I'm falling victim to a lack of clarity about what these names mean. "stable" is presumably the place from which we spin releases? What's "trunk?" The wild west?
That's a recipe for unwanted surprises, because library authors---who typically work from the trunk
Well, that may be part of the problem. We may need to get authors over their fear of branches.
---are going to forget to merge features and bug-fixes (including the test cases for those things) to the stable branch, and BOOM! No progress. It's more work in the long run to require so many small merges, and it really is just a way to avoid doing what we really must do: fix the trunk. If our trunk were well-tested, release branches would be short-lived and the risk of divergence (features/ fixes not making it between branch and trunk) would be minimized.
As long as the number of bugs on the trunk were always close to zero, as it should be, I think that's right.
Plus, developers wouldn't need to manually merge anything *except* the few things that are needed for those short-lived release branches. Since we now have Subversion, svnmerge.py can even make it easy to deal with those merges relatively easily.
Yep.
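For reference, the svnmerge.py workflow mentioned above goes roughly like this (a sketch from the tool's documented usage; check svnmerge.py --help for the exact options):

# one-time setup, in a working copy of the branch that receives the merges
svnmerge.py init
svn commit -F svnmerge-commit-message.txt

# each subsequent round of merging
svnmerge.py avail    # list revisions not yet merged
svnmerge.py merge    # merge everything available (or -r for a subset)
svn commit -F svnmerge-commit-message.txt

The tool records already-merged revisions in a property on the branch, which is what makes repeated merges reliable before svn 1.5's native merge tracking arrives.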
P.S. Here are some of the many things I could have done that would have been more productive than writing the message above:
1) Made regression.py work with Subversion, so that we would be performing regression testing on the trunk.
2) Looked at the changes made on the RC_1_34_0 branch to determine which ones can be merged back to the trunk.
3) Fixed some of the current failures on the trunk.
4) Set up a new nightly regression tester.
5) Studied Dart2 to see how we can make it work for Boost.
6) Investigated the problems with incremental testing.
7) Improved the existing test reporting system to track Subversion revisions associated with test runs, and link to those revisions in the Trac.
8) Improved the existing test reporting system to track changes from day to day.
Before I reply to any messages in this thread, I'll be thinking about that list. Will you?
I wish you'd put that list at the top of your message. ;-)
P.P.S. I know I sound grumpy, because I am. The amount of time we have collectively spent discussing policies would have been far more wisely used to improve the tools we have.
I agree with that. I've become suspicious of technological solutions over the years, but maybe I should be even more suspicious of process solutions, especially when those processes have to be followed by humans. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

on Fri Aug 03 2007, David Abrahams <dave-AT-boost-consulting.com> wrote:
Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*. Lots of other projects, many larger than Boost, work perfectly well with the same or similar processes because their tools work better.
I'm 100% convinced the tools are broken. I'm only about 50% convinced that the process isn't (or is, if you prefer) broken.
And let me add, on that basis I will spend any energy I have on fixing tools and resist any major changes of process until we have experience with the fixed tools. Changes lead to churn and we should minimize that. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
on Fri Aug 03 2007, David Abrahams <dave-AT-boost-consulting.com> wrote:
Frankly, I think this whole approach of "fixing the process" is wrongheaded. We're in this mess because our *tools* are broken, not our *process*. Lots of other projects, many larger than Boost, work perfectly well with the same or similar processes because their tools work better. I'm 100% convinced the tools are broken. I'm only about 50% convinced that the process isn't (or is, if you prefer) broken.
And let me add, on that basis I will spend any energy I have on fixing tools and resist any major changes of process until we have experience with the fixed tools. Changes lead to churn and we should minimize that.
To me focusing on the tools is a temptation that gets nourished by the fact that tools are more tangible than processes. However, the distinction often is blurry. As you know, there have been endless discussions about whether to use CVS, SVN, GIT, or whatever, and bjam, cmake, scons, etc. It is fun to look at alternatives to better grasp the limitations of the tools currently in use. But constantly looking for what next generation of tools to replace the current ones with, to me, looks like a broken process, too. (To paraphrase: Not spending enough time in thinking about focus, scope, and strategy is a problem in many projects I have seen, and in all such cases people were keen on fixing the tools.) That's why I'm suspicious that 'fixing the tools' alone will change much. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan, Stefan Seefeld wrote:
David Abrahams wrote:
on Fri Aug 03 2007, David Abrahams <dave-AT-boost-consulting.com> wrote:
To me focusing on the tools is a temptation that gets nourished by the fact that tools are more tangible than processes. However, the distinction often is blurry. As you know, there have been endless discussions about whether to use CVS, SVN, GIT, or whatever, and bjam, cmake, scons, etc.
It is fun to look at alternatives to better grasp the limitations of the tools currently in use. But constantly looking for what next generation of tools to replace the current ones with, to me, looks like a broken process, too.
(To paraphrase: Not spending enough time in thinking about focus, scope, and strategy is a problem in many projects I have seen, and in all such cases people were keen on fixing the tools.)
That's why I'm suspicious that 'fixing the tools' alone will change much.
Frankly, this is missing the point that both Doug and I raise. The most important point is reliability. Our given tool chain is not even close to providing the reliability that is needed to support any kind of process. This is not about bells and whistles; it's about *making it work*. Another important point is to make it easy for people to do the right thing. If the right way to do things is at the same time the easiest way, your success rate will be much higher than with any kind of procedure. And no amount of documentation or communication of the process will change this basic fact. In essence, the argument is over spending effort wisely. Fixing our tools will give us a much higher return on investment than tinkering with the process. Once our tools are up to par, it'll be worthwhile to look at the process. Doing it the other way round is like buying an expensive map to make up for a broken GPS. Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
That's why I'm suspicious that 'fixing the tools' alone will change much.
Frankly this is missing the point that both Doug and I raise. The most important point is reliability. Our given tool chain is not even close to providing the reliability that is needed to support any kind of process. This is not about bells and whistles it's about *making it work*.
I agree, and I never meant to say the opposite. But fixing tools where they are unreliable, and reinventing / rewriting everything from scratch every now and then are not quite the same.
Another important point is to make it easy for people to do the right thing. If the right way to do things is at the same time the easiest way.
I believe in incremental enhancements, not in Getting It Right This Time. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

David Abrahams wrote:
And let me add, on that basis I will spend any energy I have on fixing tools and resist any major changes of process until we have experience with the fixed tools. Changes lead to churn and we should minimize that.
My personal experience is that I spend waaaaaaaay more time dealing with the tools than actually working on library code. Dealing with boost tools is my greatest source of frustration. I shudder to think that even more elaborate tools are in the offing. Robert Ramey

on Fri Aug 03 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
David Abrahams wrote:
And let me add, on that basis I will spend any energy I have on fixing tools and resist any major changes of process until we have experience with the fixed tools. Changes lead to churn and we should minimize that.
My personal experience is that I spend waaaaaaaay more time dealing with the tools than actually working on library code. Dealing with boost tools is my greatest source of frustration. I shudder to think that even more elaborate tools are in the offing.
What's in the offing are simpler, better- and more-widely-supported tools. Ones that, for the most part, Boost is not maintaining itself. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
on Thu Aug 02 2007, Doug Gregor <dgregor-AT-osl.iu.edu> wrote:
At one point, I thought we could fix this problem with a stable branch based on 1.34.1, from which future releases would occur. Now, I'm convinced that is the absolutely wrong approach.
Are you saying that 1.35 shouldn't be based on 1.34.1, or is it something else? I don't know what to do about 1.35.0. We need to get it out sooner rather than later, and it seems that the build-it-from-1.34.1 approach is the only way that's going to happen.
It's a band-aid, not a long-term solution, and it hurts us to prolong the trunk/release divergence.
You said yourself that our trunk is a mess.
Yes, and we need to fix the trunk, not devise elaborate processes to isolate the "civilized world" of releases from the "wild west" of the development trunk. The title of this thread is "best practices" but the discussion has used the term "policy". A best practice is a suggestion that developers are free to ignore; a policy is a requirement that will somehow be enforced. Which one are we discussing? - Doug

Douglas Gregor wrote:
David Abrahams wrote:
Are you saying that 1.35 shouldn't be based on 1.34.1, or is it something else? I don't know what to do about 1.35.0. We need to get it out sooner rather than later, and it seems that the build-it-from-1.34.1 approach is the only way that's going to happen.
It's a band-aid, not a long-term solution, and it hurts us to prolong the trunk/release divergence.
I totally agree. And in fact, this state of affairs is what prompted me to start this discussion with the question about 'next steps'. I am really hoping that, if some simple policies can be established quickly after the svn switch, it will be far easier to move forward than by having two switches: one to svn (replacing 'HEAD' by 'trunk', but no new policies), and one to a new development process (stable vs. branches).
You said yourself that our trunk is a mess.
Yes, and we need to fix the trunk, not devise elaborate processes to isolate the "civilized world" of releases from the "wild west" of the development trunk.
The title of this thread is "best practices" but the discussion has used the term "policy". A best practice is a suggestion that developers are free to ignore; a policy is a requirement that will somehow be enforced. Which one are we discussing?
I'd suggest this:
* Establish a 'stable' branch (by copying from RC_1_34_0). This is easy, and should be done *now*. (Then regression testing should be set up to monitor this branch's health.) A sketch of the command follows this message.
* Establish goals for 1.35, notably by defining a list of accepted but otherwise new libraries to be added, and encourage library authors to work towards that goal (branching from 'stable', adding code, testing, submitting self-contained patches to be merged into stable, etc.). This should be easy, too: one trac ticket per new library.
* Discourage developers from checking *anything* into trunk any more; instead, encourage the use of branches as a means to backport changes that are now in trunk and should go into stable, whether in time for 1.35 or not.
There clearly are a lot of refinements and improvements that need to be done in the above, but they all can be done incrementally. The important thing is to get the ball rolling (into the right direction). Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...
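The first step really is a one-liner; something like the following would do it (assuming the 1.34 line lives at branches/RC_1_34_0 -- the exact path should be checked against the new import):

svn copy -m "Create stable branch from the 1.34 release line" \
  https://svn.boost.org/svn/boost/branches/RC_1_34_0 \
  https://svn.boost.org/svn/boost/branches/stable

Since svn copies are cheap and atomic, there is no cost to doing this immediately and settling the finer policy points afterwards.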

On 2007-08-03, Stefan Seefeld <seefeld@sympatico.ca> wrote: > Douglas Gregor wrote: > > Yes, and we need to fix the trunk, not devise elaborate processes to > > isolate the "civilized world" of releases from the "wild west" of the > > development trunk. [...] > I'd suggest this: > * establish a 'stable' branch (by copying from RC_1_34_0). [...] > * establish goals for 1.35, [...] > * Discouraging developers to check *anything* into trunk any more, but instead, > encourage the use of branches as a means to backport changes that now are in > trunk, and should go into stable, whether in time for 1.35 or not. For what it's worth, I completely agree with the first and third points here. This development model *does* work. It is the one that I use (and have used) every day for the last 5 years.[*] But for it to work it also requires all the tools changes/fixes that Doug has been saying to get done - the "stable" (or "release branch") must be being tested (and obviously in a more efficient way than it is currently). phil [*] And no, I haven't managed a large open source project release before; I just manage the release of the 20 or 30 interrelated libraries/applications used by multiple projects in my day job. The "process" that we have normally means that I can assemble a release from the non-broken mainlines pretty quickly. Or highlight the integration problems, as the case may be. Of course, I have written quite a lot of tools to help me... -- change name before "@" to "phil" for email

on Fri Aug 03 2007, Douglas Gregor <dgregor-AT-osl.iu.edu> wrote:
we need to fix the trunk, not devise elaborate processes to isolate the "civilized world" of releases from the "wild west" of the development trunk.
Even in the moments when I've thought the process needed an overhaul, I have always taken it for granted that there should be /no/ "wild west." -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

Vladimir Prus wrote:
Robert Ramey wrote:
It's A LOT LESS work for the developer. Under the current (old) system, every time a test failed I would have to investigate whether it was due to a new error in my library or some change/error in something that the library depended upon. It consumed waaaay too much time. I gave up committing changes except on a very infrequent basis. Turns out that the failures still occurred, but I knew they weren't mine so I could ignore them. Bottom line - testing was a huge waste of time, providing no value to a library developer.
A situation possible under the proposed system is:
- You develop things on your branch. When your feature is ready, you merge from trunk.
This you won't do. If you merge into your own branch, it will be from the stable release. You might want to do such a thing when a new release occurs - maybe once a month or so. Suppose that when you merge from the next release the following occurs.
Suddenly half of the tests in your library fail. The merge brought changes in about 100 different files,
Which indicates that either the other library changed its interface - which it shouldn't do without advance notice - or that your library was relying on undefined aspects of the other library's interface - which you shouldn't do. This problem should occur much less frequently than it does now.
and you have to figure out what's up.
Like you have to do ALL the time now.
With the current system, you'd get a failure whenever the problematic change is checked in. So, you'll know that some commit between 1000 and 1010 broke your library, and it's easy to find the offending commit from that.
Not that easy. As I've been checking in at the same time, I still have to look at modules in my own library as well as other libraries. I need to know that if something breaks, it's in my library and that only one thing changed at a time.
In other words, in the current system, if some other library breaks yours, you find out about that immediately, and can take action. In the new system, you'll find out about it only when your feature is done -- which is more inconvenient.
Nope - it's better. If my changes pass all the tests against the release version, I'm OK. When I merge into the stable release I'm sure to pass - after all, that's what I've been testing. If someone else merges something into the stable release but my tests break when I merge into my branch, then I know what it is and we can arm wrestle with the right person. Basically, using someone else's library to test your library is not a great idea. It's inefficient and not really effective.
You can try to workaround this by frequently merging from trunk, but it won't quite work. Trunk receives bulk updates. So, if the other developer did 100 changes on this branch and merged, you'll only have the chance to test all those 100 changes when they are merged to trunk.
I don't think anyone should merge from the trunk ever. Why interject a bunch of experimental code into my project. I have my hands full just trying to find and fix my own errors. Robert Ramey

Robert Ramey wrote:
I don't think anyone should merge from the trunk ever. Why interject a bunch of experimental code into my project. I have my hands full just trying to find and fix my own errors.
The "trunk" in Vladimir's mail likely refers to the "stable" branch. If development proceeds on branches, there is no other trunk, so stable becomes the trunk. You'll have to merge from it periodically to get the latest changes. For such a development model, svk is probably a much better fit than raw svn.

Peter Dimov wrote:
Robert Ramey wrote:
I don't think anyone should merge from the trunk ever. Why interject a bunch of experimental code into my project. I have my hands full just trying to find and fix my own errors.
The "trunk" in Vladimir's mail likely refers to the "stable" branch. If development proceeds on branches, there is no other trunk, so stable becomes the trunk. You'll have to merge from it periodically to get the latest changes.
hmmm - we've been using "trunk" as it's used in the current SVN load, which contains the code from the CVS HEAD. This has been distinguished from "stable", which I believe people have used to refer to what has been called RC_1_34. But basically you're correct. In a complete application of the proposed practices there is no place for the current concept of "trunk" or HEAD with all the experimental code. In practice it will be all the code planned for the next "large - yearly" release. It's basically a development branch with a slow merge frequency.
For such a development model, svk is probably a much better fit than raw svn.
LOL - I'm sure there is a better tool for everything somewhere. The point is that improving the tool is pointless if we're not exploiting it in an effective manner. The current SVN (even CVS) is plenty good enough for what we need. Robert Ramey

On 2007-08-02, Robert Ramey <ramey@rrsd.com> wrote:
Peter Dimov wrote:
Robert Ramey wrote:
I don't think anyone should merge from the trunk ever. Why interject a bunch of experimental code into my project. I have my hands full just trying to find and fix my own errors.

The "trunk" in Vladimir's mail likely refers to the "stable" branch. If development proceeds on branches, there is no other trunk, so stable becomes the trunk. You'll have to merge from it periodically to get the latest changes.

hmmm - we've been using "trunk" as it's used in the current SVN load, which contains the code from the CVS HEAD. This has been distinguished from "stable", which I believe people have used to refer to what has been called RC_1_34.
With the fly-out branch approach for development, Peter is correct - there is no difference between "trunk" and "stable" (and "mainline"): you only need one, and that is what releases are generated (directly) from. It is a branch that is *always* expected to pass the full unit test suite, and any failures need to be addressed as a matter of urgency. Or, of course, reverted. The only thing that will be needed on top of the development branches are "patch" branches for releases where *serious* bugs are found in a release, which weren't covered by the testing. (And yes, I have read Doug's post: I just think that fixing the process for library delivery and fixing the tools can be done/thought about in parallel.) phil -- change name before "@" to "phil" for email

On 2007-08-03, Phil Richards <news@derived-software.ltd.uk> wrote:
The only thing that will be needed on top of the development branches are "patch" branches for releases where *serious* bugs are found in a release, which weren't covered by the testing.
Ignore that. Obviously the approach is to fix it on the "stable" branch and make a new release from that... the only time that might not be possible is if the "stable" branch is temporarily broken, and the bug fix is *really* urgent. phil -- change name before "@" to "phil" for email

Vladimir Prus wrote:
Robert Ramey wrote:
It's A LOT LESS work for the developer. Under the current (old) system, every time a test failed I would have to investigate whether it was due to a new error in my library or to some change/error in something that the library depended upon. It consumed waaaaay too much time. I gave up committing changes except on a very infrequent basis. It turns out that the failures still occurred, but I knew they weren't mine, so I could ignore them. Bottom line - testing was a huge waste of time, providing no value to a library developer.
A situation possible under the proposed system is:
- You develop things on your branch. When your feature is ready, you merge from trunk. Suddenly half of the tests in your library fail. The merge brought changes in about 100 different files, and you have to figure out what's up.
With the current system, you'd get a failure whenever the problematic change is checked in. So, you'll know that some commit between 1000 and 1010 broke your library, and it's easy to find the offending commit from that.
In other words, in the current system, if some other library breaks yours, you find out about that immediately, and can take action.
"can" is the key word in the last sentence. Often enough, that didn't happen. Some change to some library broke some other library, unrecognized by the author of the change and not felt responsible for by the maintainer of the victim library. The regression persists until release preparation. Or some change caused regressions in the library changed and the author assumed the regression was an artifact of the test harness or caused by a change to a different library. When release preps start, we have accumulated dozens or even hundreds of problems. IMO, exactly this is the reason for the wild west impression we have regarding the way we used CVS. This is certainly a matter of testing resources. If tests could be run frequently and reliably enough, then we could automatically blame the author of the offending check-in. We don't have the resources. I don't expect we'll have them soon, unless there's a massive donation. Unlike many others, I don't believe we have a fundamental problem with our testing harness. A lot of tweaks are needed, definitely, but the overall systems looks ok to me. We do detect regressions and other test failures. We just don't happen to handle them timely. To cope with the lack of resources, a more organized way of checking in changes to the tested branch are needed. If there's only one maintainer at a time who checks stuff in then that would compensate for our slow testing. We'd be able to see changes to which library caused the regressions. Admittedly, we wouldn't be able to blame an individual code change in case of bundled updates, but we would at least know which library to blame and who would be responsible to look into the problems (with the help of the maintainers of the victim libraries, I suppose). That's more than we have now. I believe, such a way to change our procedures (from not having any procedures to having only one committer at a time) would be uncomfortable and would slow things down for maintainers queuing for a chance to commit their stuff, but, overall, we would get shorter development and release times. Releasing 1.34 took more than a year. 1.33 took similarly long. A more organized way of working would have had reduced that time to a month per release or even less. That's almost two years to spend in development instead of in finding out which change caused what regression and how to fix it. If we identify leaf (of the dependency tree) libraries, which shouldn't be hard to do, then changes to multiple leaf libraries can be done in parallel. This reduces the time spent waiting in the commit queue. A harder problem is adding of a new toolset. In that case, hundreds of test failures may pop up and nobody really feels responsible to look into them, effectively leaving that work to the release manager, unless he decides to consider that toolset not relevant for the release (in which case the testing effort is wasted). We need a way to organize addition of toolsets. The test runner can't alone be made responsible for fixing all the problems that get reported. Neither should the release manager be responsible for driving the process at release preparation time. Regards, m

Martin Wille wrote:
A harder problem is adding a new toolset. In that case, hundreds of test failures may pop up and nobody really feels responsible for looking into them, effectively leaving that work to the release manager, unless he decides to consider that toolset not relevant for the release (in which case the testing effort is wasted).
We need a way to organize the addition of toolsets. The test runner alone can't be made responsible for fixing all the problems that get reported. Neither should the release manager be responsible for driving the process at release preparation time.
I think the proposed practice would also apply to toolsets. In fact, I think it's a lot easier than improving a library itself. Someone decides to add a new toolset. He has the current release (stable) version on his desktop. He builds the whole stable version with his new toolset. Any failures are particular to the toolset. So he may fiddle around and minimize them. At that point he builds markup for that toolset and merges it (or requests a merge) into the stable branch. Maybe there is a re-test with the new toolset. But likely, since it's a new toolset, he is the only one with it, so that pretty much has to be the end of it unless he's willing to test it on request (which he would probably be expected to do).

Now that is going to leave a situation which some people aren't going to like. The "next" release is going to have a new toolset with lots of failures (typically). The question isn't whether it's perfect; the question is whether the next release is better than the current one. Well, it IS better, even though it has more failures. The total breadth of applicability is broader than in the previous version. We can't make releases perfect, no matter how long we stretch out the delivery time, no matter how much we put current development on hold, no matter how many times we test. We can guarantee that each release is better than the current one, and we should do that as frequently as is practical. Robert Ramey

Martin Wille wrote:
Vladimir Prus wrote:
A situation possible under the proposed system is:
- You develop things on your branch. When your feature is ready, you merge from trunk. Suddenly half of the tests in your library fail. The merge brought changes in about 100 different files, and you have to figure out what's up.
With the current system, you'd get a failure whenever the problematic change is checked in. So, you'll know that some commit between 1000 and 1010 broke your library, and it's easy to find the offending commit from that.
In other words, in the current system, if some other library breaks yours, you find out about that immediately, and can take action.
"can" is the key word in the last sentence. Often enough, that didn't happen. Some change to some library broke some other library, unrecognized by the author of the change and not felt responsible for by the maintainer of the victim library. The regression persists until release preparation.
Consider what happens under the new system, though: the feature is blocked because an unrelated library doesn't like it. In the worst case, unmaintained libraries eventually block all progress on the stable branch. This is stable, but a bit more stable than needed. No process can solve the problem of missing/unresponsive maintainers.
A harder problem is adding a new toolset. In that case, hundreds of test failures may pop up and nobody really feels responsible for looking into them, ...
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.

Peter Dimov wrote:
Martin Wille wrote:
Vladimir Prus wrote:
A situation possible under the proposed system is:
- You develop things on your branch. When your feature is ready, you merge from trunk. Suddenly half of the tests in your library fail. The merge brought changes in about 100 different files, and you have to figure out what's up.
With the current system, you'd get a failure whenever the problematic change is checked in. So, you'll know that some commit between 1000 and 1010 broke your library, and it's easy to find the offending commit from that.
In other words, in the current system, if some other library breaks yours, you find out about that immediately, and can take action.
"can" is the key word in the last sentence. Often enough, that didn't happen. Some change to some library broke some other library, unrecognized by the author of the change and not felt responsible for by the maintainer of the victim library. The regression persists until release preparation.
Consider what happens under the new system, though: the feature is blocked because an unrelated library doesn't like it. In the worst case, unmaintained libraries eventually block all progress on the stable branch. This is stable, but a bit more stable than needed.
True, this is a problem.
No process can solve the problem of missing/unresponsive maintainers.
The scenario you described would indicate that it is time to deprecate the unmaintained library. There's no way to promise stability for unmaintained code. Once deprecation is part of the process, the problem does get solved. Quite slowly so, I admit.
A harder problem is adding a new toolset. In that case, hundreds of test failures may pop up and nobody really feels responsible for looking into them, ...
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.
No, in this scenario the bug has been there before. There's no break in stability if the bug merely starts being reported by the testing harness from some point on. Regards, m

Martin Wille wrote:
Peter Dimov wrote:
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.
No, in this scenario the bug has been there before. There's no break in stability if the bug merely starts being reported by the testing harness from some point on.
There is no break in stability, but there is a violation of the stability requirements, which demand that there should be no test failures on the stable branch. This prevents the merge of the new test unless the same merge also contains a fix.

On 8/3/07, Peter Dimov <pdimov@pdimov.com> wrote:
Martin Wille wrote:
Peter Dimov wrote:
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.
No, in this scenario the bug has been there before. There's no break in stability if the bug merely starts being reported by the testing harness from some point on.
There is no break in stability, but there is a violation of the stability requirements, which demand that there should be no test failures on the stable branch. This prevents the merge of the new test unless the same merge also contains a fix.
Or you check in the test and roll back 'stable' until the test passes! Or make it something of an 'all hands on deck' situation - lock SVN until there is a fix, or something like that (maybe not quite so drastic...) Tony
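(The rollback itself is just a reverse merge; a sketch with made-up revision numbers:)

    # in a working copy of 'stable': undo the offending change, here r1005
    svn merge -r 1005:1004 .
    svn commit -m "Roll back r1005 pending a fix for the failing test"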

Peter Dimov wrote:
Martin Wille wrote:
Peter Dimov wrote:
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.

No, in this scenario the bug has been there before. There's no break in stability if the bug merely starts being reported by the testing harness from some point on.
There is no break in stability, but there is a violation of the stability requirements, which demand that there should be no test failures on the stable branch. This prevents the merge of the new test unless the same merge also contains a fix.
You can mark the failure "expected" if you don't have a fix for it. We could extend our markup to allow for "expected failure, fix needed" to contrast with "expected failure, target platform broken beyond repair" or "expected failure, feature not supported on target platform". Regards, m

Peter Dimov wrote:
Martin Wille wrote:
Peter Dimov wrote:
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.

No, in this scenario the bug has been there before. There's no break in stability if the bug merely starts being reported by the testing harness from some point on.
There is no break in stability, but there is a violation of the stability requirements, which demand that there should be no test failures on the stable branch. This prevents the merge of the new test unless the same merge also contains a fix.
So if library A highlights a bug in library B, then the author of library A must go and fix the bug in library B and add covering tests, even though he isn't the maintainer? This seems like a recipe for subtle bugs and people's toes getting stepped on. Perhaps I misunderstood. Thanks, Michael Marcin

Michael Marcin wrote:
Peter Dimov wrote:
Martin Wille wrote:
Peter Dimov wrote:
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.

No, in this scenario the bug has been there before. There's no break in stability if the bug merely starts being reported by the testing harness from some point on.

There is no break in stability, but there is a violation of the stability requirements, which demand that there should be no test failures on the stable branch. This prevents the merge of the new test unless the same merge also contains a fix.
So if library A highlights a bug in library B, then the author of library A must go and fix the bug in library B and add covering tests, even though he isn't the maintainer? This seems like a recipe for subtle bugs and people's toes getting stepped on.
Not necessarily. However, the maintainer of A would be responsible for driving the process of fixing B. How the workload is balanced between the maintainers of A and B depends on the individual case. Regards, m

Peter Dimov wrote:
Consider what happens under the new system, though: the feature is blocked because an unrelated library doesn't like it. In the worst case, unmaintained libraries eventually block all progress on the stable branch. This is stable, but as bit more stable than needed.
I don't see this happening at all. Suppose library A is unmaintained and library B is being worked on. The author of library B makes and tests his improvements using the published interface of library A, and it's concluded that library A doesn't support its published interface in some way. Then one of a couple of things happens:

a) The author of library B contacts the author of library A (say via a bug report) and the author of library A fixes it. The author of library B either patches his local code or merges in author A's fix.

b) Author B can't get author A to fix it and works around it in some way.

In either case, the whole release process isn't affected - just that for library B. Author B will have to find some way of dealing with it, but it's an issue between two people and it's not holding up the whole of boost.

The other scenario: author A makes his changes and runs his tests and things are great. Unbeknownst to him, he has changed the public interface, by enforcing a previously unenforced interface requirement (I did this once), and of course he doesn't notice as his tests run. At this point the changes are merged into the next release and stuff in other libraries breaks. This is not the case where the maintainer of A is AWOL; it's just a normal fiasco which has to be resolved in the usual manner.

So I would refine the proposal somewhat:

a) development and tests are run on a branch (for that library)
b) when it's time to merge, tests are run on the next release branch with the library switched in. That is, one can run the tests with all of boost BEFORE the library is actually merged in (see the sketch just below)
c) if b) passes, the changes are merged into the release and all tests are run again just to make sure

This presupposes a test request infrastructure under which one can specify tests for specific branches and/or all of boost.
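(Step b) is possible with today's tools via svn switch: take a working copy of the release branch and point just the one library's subtrees at its development branch before running the suite. Branch names and URLs below are illustrative, using serialization as the example:)

    # working copy of the release/stable branch
    svn checkout https://svn.boost.org/svn/boost/branches/stable boost-merge-test
    cd boost-merge-test
    # switch in only the library under test from its development branch
    svn switch https://svn.boost.org/svn/boost/branches/serialization_next_release/boost/serialization boost/serialization
    svn switch https://svn.boost.org/svn/boost/branches/serialization_next_release/libs/serialization libs/serialization
    # now run the full test suite against this mixed tree, before any merge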
No process can solve the problem of missing/unresponsive maintainers.
I believe that this new process will prevent missing/unresponsive maintainers from holding up the whole system.
Another interesting example is adding a new test that exposes an existing bug. This test has never passed, but its inclusion is prevented by the stability requirement.
The "stability requirement" definition is unclear. It currently seems to be "passing all tests" or marking tests as not-passing. If a new test makes the library better but displays exposes more errors, then it should be considered an improvement, marked up and released. Boost explicitly permits standard conforming code which breaks with non-conforming compilers. Failing tests are part of the system and explicitly addressed through the markup system. Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
The other scenario: author A makes his changes and runs his tests and things are great. Unbeknownst to him, he has changed the public interface, by enforcing a previously unenforced interface requirement (I did this once), and of course he doesn't notice as his tests run. At this point the changes are merged into the next release and stuff in other libraries breaks. This is not the case where the maintainer of A is AWOL; it's just a normal fiasco which has to be resolved in the usual manner.
So I would refine the proposal somewhat to
a) development and tests are run on a branch (for that library)
b) when it's time to merge, tests are run on the next release branch with the library switched in. That is, one can run the tests with all of boost BEFORE the library is actually merged in
c) if b) passes, the changes are merged into the release and all tests are run again just to make sure
The issue is what to do if b) fails with an error in library B. Author A has just made a change that breaks another library (even if the other library was depending on something it shouldn't have done). If author B cannot or will not fix the problem, what do we do? What if the change makes library B unusable? Anthony -- Anthony Williams Just Software Solutions Ltd - http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

on Wed Aug 01 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Beman wrote:
Right. Again, for avoidance of doubt: do you expect the development branch to be created from the stable branch, to make sure a passing test on the development branch translates to a passing test on stable after a merge. Correct ?
Now you've hit upon the motivation for my original post. I was under the impression that the "trunk" would be the last released version. It turns out that it's not so. But no matter. With SVN there is no special status accorded "trunk"; we can just branch off the last release. The only thing we need is a set of "Best Practices" (or whatever one wants to call it) so we're all in sync.
Ummm, so is that a "yes" or a "no?" -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
on Wed Aug 01 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Beman wrote:
Right. Again, for avoidance of doubt: do you expect the development branch to be created from the stable branch, to make sure a passing test on the development branch translates to a passing test on stable after a merge. Correct ?
Now you've hit upon the motivation for my original post. I was under the impression that the "trunk" would be the last released version. It turns out that it's not so. But no matter. With SVN there is no special status accorded "trunk"; we can just branch off the last release. The only thing we need is a set of "Best Practices" (or whatever one wants to call it) so we're all in sync.
Ummm, so is that a "yes" or a "no?"
Yes, I expected that RC_1_34_0 would become the new "trunk" and that individual developers would merge their changes from the "old trunk" into their development branches as needed. Robert Ramey

Beman Dawes wrote:
I've also been using the name "stable" for the "release ready" branch. And, yes, the starting point for "stable" is the current "RC_1_34_1" tag.
FWIW the tag name is RC_1_34_0. Thomas -- Thomas Witt witt@acm.org

Beman Dawes wrote:
The bottom line is that I know that code works *before* it gets merged into the stable branch. That's the critical point; the exact way the testing is done is important operationally, but those details don't matter as far as the big picture goes.
Ok, and what about a case we saw in 1.34.0 -- a library fails on an obscure compiler, obscure version. Nobody seems interested in fixing that. Are you expecting that tests on all compilers will be run before merging to stable? Or do you expect testing on just gcc and msvc?
I would not expect regressions of such a drastic nature that the above would be necessary.
The point of testing before actually changing the stable branch is that regressions should become very rare.
If the set of compilers tested with before merge is gcc + msvc, then there's no advantage. Both are highly conforming in recent versions, and fixes are easy. What to do about obscure compilers is not obvious. - Volodya

Vladimir Prus wrote:
Beman Dawes wrote:
The bottom line is that I know that code works *before* it gets merged into the stable branch. That's the critical point; the exact way the testing is done is important operationally, but those details don't matter as far as the big picture goes.
Ok, and what about a case we saw in 1.34.0 -- a library fails on an obscure compiler, obscure version. Nobody seems interested in fixing that. Are you expecting that tests on all compilers will be run before merging to stable? Or do you expect testing on just gcc and msvc?
Obviously Beman's proposal does not (and cannot) address every problem. The criterion for acceptance of the proposal is whether it improves things. At this point there shouldn't be too much doubt that it will.
I would not expect regressions of such a drastic nature that the above would be necessary.
The point of testing before actually changing the stable branch is that regressions should become very rare.
If the set of compilers tested with before merge is gcc + msvc, then there's no advantage. Both are highly conforming in recent versions, and fixes are easy. What to do about obscure compilers is not obvious.
The main problem addressed by this proposal is that testing reports "one thing" at a time to "one" person, so when something fails it's easy to isolate without everyone looking at all the code. It also eliminates the requirement that everyone be in sync with the next release simultaneously - a requirement that is obviously not scalable and was the source of much agony in the past. The issue of obscure compilers is a separate problem, neither helped nor hurt by this change. Robert Ramey

Robert Ramey wrote:
Vladimir Prus wrote:
Beman Dawes wrote:
The bottom line is that I know that code works *before* it gets merged into the stable branch. That's the critical point; the exact way the testing is done is important operationally, but those details don't matter as far as the big picture goes.

Ok, and what about a case we saw in 1.34.0 -- a library fails on an obscure compiler, obscure version. Nobody seems interested in fixing that. Are you expecting that tests on all compilers will be run before merging to stable? Or do you expect testing on just gcc and msvc?
Obviously Beman's proposal does not (and cannot) address every problem. The criterion for acceptance of the proposal is whether it improves things. At this point there shouldn't be too much doubt that it will.
<sarcasm>At this point there isn't much doubt that anything will improve things.</sarcasm>
The main problem addressed by this proposal is that testing reports "one thing" at a time to "one" person, so when something fails it's easy to isolate without everyone looking at all the code. It also eliminates the requirement that everyone be in sync with the next release simultaneously - a requirement that is obviously not scalable and was the source of much agony in the past.
And this is still the part I'm not understanding. If there is one 'stable' branch, and if that's the only reference point, this is IMO a scalability problem. Contrast that to the much more radical proposition to make boost modular, where each 'component' (or whatever you'd like to call them) would follow its own release schedule, and a developer can choose which versions of prerequisite components to depend on, as long as they are actually released. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
<sarcasm>At this point there isn't much doubt that anything will improve things.</sarcasm>
I knew I wasn't going to get away with that. oh well.
The main problem that is addressed by this proposal is that testing tests "one thing" at a time to "one" person, so when something fails its easy to isolate without everyone looking at all the code. It also elminates the requirement that everyone be in sync with their next release simultaneously. This is obviously not scalable and the source of much agony in the past.
And this is still the part I'm not understanding. If there is one 'stable' branch, and if that's the only reference point, this is IMO a scalability problem.
Contrast that to the much more radical proposition to make boost modular, where each 'component' (or whatever you'd like to call them) would follow its own release schedule, and a developer can choose which versions of prerequisite components to depend on, as long as they are actually released.
I'll make the contrast. The second is a more elaborate version of the first - more elaborate than necessary, in my opinion. But we can start with the first right now by just adopting practices in common usage. It is common to have a "stable" trunk, develop on a branch, test the branch, and merge into the release version. BTW it's already being done now by some developers. I'm doing it, Beman is doing it, Joaquin is doing it (witness that he has been able to provide an increment to multi-index). I'm convinced that little by little more developers will migrate to this model. If, in the future, it's necessary to make this even more elaborate, that can be considered later. Robert Ramey

Robert Ramey wrote:
If, in the future, it's necessary to make this even more elaborate, that can be considered later.
I fully agree. Things should be improved incrementally. You are totally right, the suggested change is a big improvement. I was just afraid it would be taken as an end goal, not something that needs further refinement. And, some comment on Doug's point about process vs. tools: one thing I find rather disturbing is that, apparently, quite a number of times check-ins mix different (and unrelated!) features, making it impossible to track regressions back to changesets. (I remember one particular case where it was impossible to roll back, due to this.) So, along with setting up branches, we need a clear check-in policy. Using Subversion in combination with Trac will certainly help enforce this. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On Fri, 2007-08-03 at 04:55 -0400, Stefan Seefeld wrote:
And, some comment on Doug's point about process vs. tools: one thing I find rather disturbing is that, apparently, quite a number of times check-ins mix different (and unrelated!) features, making it impossible to track regressions back to changesets.
Examples? I haven't seen this as a real problem. The more immediate problem I've seen is that nearly any new feature or bug-fix in Boost is going to span 3 directories (boost/libname, libs/libname/test, libs/libname/doc), and CVS doesn't keep those things in a changeset. Subversion and Trac fixed that problem. Boost developers are smart; they don't need a process to tell them to keep change-sets to a single feature, they need an SCM and a regression-test system that keeps those change-sets together. We now have the first. - Doug
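(With Subversion the three directories do land in one atomic revision; for example, with "libname" as the placeholder from the message above:)

    # one changeset covering headers, tests, and docs
    svn commit boost/libname libs/libname/test libs/libname/doc \
        -m "libname: fix foo(), add regression test, update docs"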

on Fri Aug 03 2007, Douglas Gregor <doug.gregor-AT-gmail.com> wrote:
On Fri, 2007-08-03 at 04:55 -0400, Stefan Seefeld wrote:
And, some comment on Doug's point about process vs. tools: one thing I find rather disturbing is that, apparently, quite a number of times check-ins mix different (and unrelated!) features, making it impossible to track regressions back to changesets.
Examples?
Some developers have seen fit to separate their development from Boost's, and only check in to our repository after they've completed sweeping changes to their local copies of their libraries. Some Boost libraries are even developed in totally separate repositories.
I haven't seen this as a real problem. The more immediate problem I've seen is that nearly any new feature or bug-fix in Boost is going to span 3 directories (boost/libname, lib/libname/test, lib/libname/doc), and CVS doesn't keep those things in a changeset.
Subversion and Trac fixed that problem.
Boost developers are smart; they don't need a process to tell them to keep change-sets to a single feature,
Are you sure? I'm pretty sure that smartness doesn't prevent the rise of "cussed individualists." In fact, sometimes intelligence and intransigence go hand-in-hand. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

Robert, Robert Ramey wrote:
Obviously Beman's proposal does not (and cannot) address every problem. The criterion for acceptance of the proposal is whether it improves things. At this point there shouldn't be too much doubt that it will.
I can't help but think this is funny. In a sad way actually. The release managers for the last two major releases have independently expressed serious doubt. This claim is ludicrous. Thomas PS: Have you ever managed the release of a large open source project? -- Thomas Witt witt@acm.org

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Thomas Witt Sent: Friday, August 03, 2007 2:38 AM Subject: Re: [boost] [SVN]Best Practices for developers using SVN
Robert Ramey wrote:
Obviously Beman's proposal does not (and cannot) address every problem. The criterion for acceptance of the proposal is whether it improves things. At this point there shouldn't be too much doubt that it will.
I can't help but think this is funny. In a sad way actually. The release managers for the last two major releases have independently expressed serious doubt. This claim is ludicrous.
This process vs. tools discussion reminds me of the dispute about waterfall vs. agile, hardware development vs. software development... What's rather interesting is that the roles are normally allotted the other way around (at least in my experience): managers want more process, while (software) developers want more tools (and less process, because it usually just kills creativity).

One reason may be that at least two orthogonal aspects are mixed in this thread: on the one hand, getting a release out of the door in reasonable time and with reasonable effort; on the other hand, making developers confident that they can easily fix their component, even if changes in other libs they depend on break it (e.g. to get their components out of the blame list ASAP), and thus making them confident and capable of changing their component with ease.

Thus my stance is: add and fix the tools, *and* (cautiously) fix the process, because this solves two different problems.
Thomas
PS: Have you ever managed the release of a large open source project?
That's not a factual argument but an argument by authority and thus unfair. Have you ever developed a popular (at least counting related posts on boost-user) open source lib like Boost.Serialization? cheers, aa -- Andreas Ames | Programmer | Comergo GmbH | ames AT avaya DOT com Sitz der Gesellschaft: Stuttgart Registergericht: Amtsgericht Stuttgart - HRB 22107 Geschäftsführer: Andreas von Meyer zu Knonow, Udo Bühler, Thomas Kreikemeier

Andreas, Ames, Andreas (Andreas) wrote:
-----Original Message-----
Thus my stance is: add and fix the tools, *and* (cautiously) fix the process because this solves two different problems.
My argument is more about focus and order than it is about tools vs. process. Our resources are limited, and we are not spending them well.
PS: Have you ever managed the release of a large open source project?
That's not a factual argument but an argument by authority and thus unfair.
Oh, it was never intended to be a factual argument, nor one of authority, BTW. To some degree it was a genuine question. If you haven't, the chances are very high that you have no idea what you are talking about. Well, at least I didn't when I started.
Have you ever developed a popular (at least counting related posts on boost-user) open source lib like Boost.Serialization?
The point is that I am not trying to tell Robert that he has no idea how to do library development. Thomas -- Thomas Witt witt@acm.org

on Wed Aug 01 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Stefan Seefeld wrote:
The most important thing to do is formalize the development process as far as version management is concerned,
That's what I'm trying to accomplish.
to be able to easily and quickly roll back anything that risks destabilizing the stable / release branch.
Totally, Totally wrong here.
The only way to make things work is to integrate pieces one at a time in digestible chunks.
Robert, I don't see how what Stefan says here is incompatible with what you're saying. Why do you say he's not just "wrong," but "Totally, Totally wrong?" -- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
to be able to easily and quickly roll back anything that risks destabilizing the stable / release branch.
Totally, Totally wrong here.
The only way to make things work is to integrate pieces one at a time in digestible chunks.
I was really responding to the whole idea that it's normal and OK to merge errors into the next release. Of course this could happen, but I would hope that it would come to be an unusual occurrence outside the expected workings of the system. In other words, I'm not really responding to the specific statement but rather to the whole world view that we "add stuff in and see what breaks", which suggests a lot of rolling in/rolling out of changes. Robert Ramey

On 2007-08-01, Stefan Seefeld <seefeld@sympatico.ca> wrote:
Robert Ramey wrote:
b) ALL development occurs on branches.

I'm not sure what that means, given how Subversion handles branches. The difference between 'trunk' and 'branches/something' is only in the naming.
It means, I suspect, that the branch called "trunk" (or "stable", or whatever) is the one from which releases are made, and only things approved by the release manager can be merged on to it. Therefore, to do development, it must be done on another branch. Yes, it is just a matter of naming, but the branch named "trunk" *is* special simply because of what it is being used for... Of course, my understanding might be flawed :-)
d) At the discretion of the release manager, development branches are merged into the "Current Release" and the whole system is tested.

Does this imply that each individual feature (as defined by something that is meant to be merged into 'stable' as a whole) will be developed in isolation, on its own branch? I'm not sure how practical that would be.
It works fine on other projects - including the ones I work on in my day job. It is a very simple way of adding large chunks of functionality to a release branch without breaking it, or, at least, not breaking it in a way that can't easily be reverted. The main thing to do is make sure that people working on the branches either keep the branches short lived *or* they stay up to date with the "trunk" (release branch) by periodic merging from it.
e) Each time the "Current Release" test passes more tests than the previous one, A tag is added by the release manager and a new download package is automatically created. I would anticipate this happing about once/month. As above, I'm not sure what the tag is good for, with a repository that has atomic / global revisions. Just remembering the revision number that contains a new feature the first time should be sufficient.
Because tags are more readable and understandable than revision numbers when you are looking for Boost 1.35.0? If you have a choice of telling people to check out r15647 or boost-1.35.0, I know which one I'd go for. And, if it is ever decided to move from SVN in the future, it makes that process easier.
If you had nothing else to do, you could make the "Current Release" /main/trunk etc. ONLY updateable by the release manager. Who would do this by merging in branches which have passed their tests. Then we'd be in business.

Actually, I don't think it is practical to have a single person do all this. That would create a huge bottleneck. The most important thing to do is formalize the development process as far as version management is concerned, to be able to easily and quickly roll back anything that risks destabilizing the stable / release branch.
And this is a good point. The job of the release manager should not be to do the merge - the code being "delivered" on the branch should basically be completely up-to-date with the trunk, so, to all intents and purposes, whatever is on the branch will become the contents of the trunk. (Again, this is what is mandated in my day job. The full tests are run on the branch, and if they pass, over to the trunk it goes.) This means that all the release manager needs to do is coordinate the order of deliveries to the trunk, and perform the final "replace trunk code with branch code" operation. There would be no merging, as such. phil -- change name before "@" to "phil" for email
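(In svn terms, the delivery described here might look roughly like the following - URLs and revisions are placeholders. The final merge can only carry branch content across because the branch already contains everything on the trunk:)

    # 1. on the feature branch: final catch-up merge from trunk, then full tests
    svn merge -r 38000:HEAD https://svn.boost.org/svn/boost/trunk .
    svn commit -m "Final catch-up merge from trunk"
    # 2. deliver: in a trunk working copy, apply the trunk->branch difference
    svn merge https://svn.boost.org/svn/boost/trunk \
              https://svn.boost.org/svn/boost/branches/my_feature .
    svn commit -m "Deliver my_feature to trunk"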

Phil Richards wrote:
This means that all the release manager needs to do is coordinate the order of deliveries to the trunk, and perform the final "replace trunk code with branch code" operation.
How is this different from merging the branch into the "Current Release" (or trunk or next release)?
There would be no merging, as such.
In any case, we're clearly on the same page here. Robert Ramey

On 2007-08-01, Robert Ramey <ramey@rrsd.com> wrote:
Phil Richards wrote:
This means that all the release manager needs to do is coordinate the order of deliveries to the trunk, and perform the final "replace trunk code with branch code" operation. How is this different from merging the branch into the "Current Release" (or trunk or next release)?
It isn't. Because the branch is up-to-date with respect to the trunk (from the immediately previous trunk-to-branch merge), a merge from branch-to-trunk just does the "replace trunk code with branch code". There can be no conflicts, or, in fact, any modification to the code that is coming from the branch during this merge operation. Life will be considerably easier when SVN 1.5 comes out since it will handle merge points automatically. phil -- change name before "@" to "phil" for email
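(Until svn 1.5 arrives, the contrib script svnmerge.py automates that merge-point bookkeeping; roughly, assuming the standard script behavior:)

    # one-time, in the branch working copy: record the merge starting point
    svnmerge.py init
    svn commit -F svnmerge-commit-message.txt
    # thereafter, to stay up to date with the trunk:
    svnmerge.py avail     # show revisions not yet merged
    svnmerge.py merge     # merge them and record that fact
    svn commit -F svnmerge-commit-message.txt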

Robert Ramey wrote:
The thrust of Beman's proposal is actually quite simple. It consists of
a) designate a branch/trunk as the "Current Release".
b) ALL development occurs on branches.
c) Testing is applied to branches as requested.
d) At the discretion of the release manager, development branches are merged into the "Current Release" and the whole system is tested.
e) Each time the "Current Release" passes more tests than the previous one, a tag is added by the release manager and a new download package is automatically created. I would anticipate this happening about once/month.
Okay, say somebody reports a bug in program_options that I fix (on a branch named program_options_one_line_fix_127). Say the same somebody reports a bug in a future library, and the author immediately fixes that (on a branch named future_one_line_fix_777). Now the user has to grab the stable branch and perform two merges manually to get both fixes. Not to mention that if every one-line change is gated via the release manager, it creates a bottleneck we don't presently have. While it's good to be able to test a specific branch, I doubt requiring all changes to be done on a branch is such a great idea.
The only things we're missing right now are c) - which I believe will be doable in the near future - and a set of "best practices" for developers and the release manager. This is just a question of agreeing on how to use SVN as regards branches.
If you had nothing else to do, you could make the "Current Release" /main/trunk etc ONLY updateable by the release manager.
This is a one-line change in the Subversion config, I'd suspect. - Volodya
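(With path-based authorization it's more a stanza in the server's authz file than one line; something like the following, where the section path and user name are invented for the example:)

    # restrict the release branch: everyone reads, only the release manager writes
    [boost:/branches/stable]
    * = r
    release_manager = rw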

Vladimir Prus wrote:
Okay, say somebody reports a bug in program_options that I fix (on a branch named program_options_one_line_fix_127).
Say the same somebody reports a bug in a future library, and the author immediately fixes that (on a branch named future_one_line_fix_777). Now the user has to grab the stable branch and perform two merges manually to get both fixes.
LOL - what does the user do now? I'm assuming that "user" refers to someone who is using the latest (or previous) release. This user currently has several options:

a) work around the error until he downloads the next release (a year's wait)
b) patch his copy of the library
c) download the trunk and use that (assuming the bug is fixed there)

Under the new system he has the following options:

a) work around the error until he downloads the next release (a MONTH's wait)
b) patch his copy of the library
c) if he wants, he can sync up with the next-release branch of the library where the bug is fixed (personally I wouldn't do that). This is easy if he's using SVN "switch".
Not to mention that if every one-line change is gated via release manager, it creates a bottleneck we don't presently have.
That one line change is now gated by the WHOLE release - 18 months last time. Now if by "user" you're referring to other library developers, they can easily switch particular libraries to the development branch if they want to coordinate with some other library. That is, in some unusual situations, I could see c) above being useful.
While it's good to be able to test a specific branch, I doubt requiring all changes to be done on a branch is such a great idea.
Well, it doesn't look like we can require it. But we don't really have to. Those who want to continue to check into the trunk and watch the tests that occur there are free to continue to do so. I suspect that such tests will be even less useful in the future than they are now as more library developers move to this incremental test/release scheme.

Note: I would propose a scheme for naming, sketched in commands below. Currently we have the "trunk" - OK, it's not relevant to me - and we have something like "RC_1_34_0". I would create a branch off of this tag called "serialization_next_release". When the time comes, "serialization_next_release" can be merged into stable, then tagged with "serialization_1_35". The branch "serialization_next_release" would continue for the 1_36 version, etc.

Robert Ramey
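(The naming scheme in svn commands might look like this - whether RC_1_34_0 lives under tags/ or branches/ is an assumption here:)

    # branch for ongoing serialization work, taken from the last release
    svn copy https://svn.boost.org/svn/boost/tags/RC_1_34_0 \
             https://svn.boost.org/svn/boost/branches/serialization_next_release \
             -m "Branch serialization development from RC_1_34_0"
    # once merged into stable and shipped, record the shipped state
    svn copy https://svn.boost.org/svn/boost/branches/stable \
             https://svn.boost.org/svn/boost/tags/serialization_1_35 \
             -m "Tag serialization as shipped in 1.35"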

On 8/1/07, Robert Ramey <ramey@rrsd.com> wrote:
The thrust of Beman's proposal is actually quite simple. It consists of
a) designate a branch/trunk as the "Current Release".
b) ALL development occurs on branches.
c) Testing is applied to branches as requested.
d) At the discretion of the release manager, development branches are merged into the "Current Release" and the whole system is tested.
e) Each time the "Current Release" passes more tests than the previous one, a tag is added by the release manager and a new download package is automatically created. I would anticipate this happening about once/month.
Has anyone tried this before, or know anyone that has? Like maybe the Photoshop team? Maybe you can bug Sean for details. The Premiere team is trying a version of this as well, but we are just starting out, so I can't give any useful feedback yet. Maybe if you guys take a long time to discuss it, we can ship a version of Premiere before you decide, and I can give you feedback then :-).
From what I know of PS's CS3 release, the system mostly worked. There was at least one big checkin to main that broke a seemingly unrelated component, and all heck broke loose. I'm not sure why the change wasn't noticed when the branch pre-merged main INTO the branch before merging into main, but I think it was because they allowed checkins into main to overlap. To avoid that, the Premiere team is going to schedule checkins to main very carefully, so that only one branch is checking in at a time.
We also have only 3 or 4 branches, with 3 or 4 people per branch ('feature groups'), not a ton of separate developers. And we only check into main when a feature is done and shippable, not just stable. But that might not be much of a distinction for library code. I do think you may find things work better/worse for different types of libraries - ie 'core' libs vs leaf libs, but that's just a guess. Tony

Douglas Gregor wrote:
On Aug 1, 2007, at 12:02 AM, Stefan Seefeld wrote:
What are the next steps? If I understand correctly, the 1_34_0 branch should now be copied to, say, 'stable', such that at regular intervals things can be merged in from the trunk. Am I reading the suggested procedure correctly? (And then, at some point, 'stable' can be branched to '1_35', etc....)
That is my understanding, although IIRC, the last discussion ended up with, "We can finalize the new procedure later, once we have moved to Subversion." Personally, I'd like to see us find a good way to turn "stable" into an actual release branch of "trunk", with the appropriate svnmerge.py tags to make it easy to keep it up-to-date. The trunk/stable divergence is really bad for future development.
Has there been any progress on these questions? There is no 'stable' branch right now (no matter the spelling), so people don't know what reference code to take if they want to set up a development branch, or to merge 'new' (accepted, but not yet released) libraries in. Pleeaase! (Any decision is better than no decision...) (Also, somewhat unrelated, http://svn.boost.org/trac/boost/roadmap indicates the 1.34.1 release as open and '3 months late', and there is no place discussing what to expect from 1.35 in terms of new features, additional libraries, etc.) Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Douglas Gregor wrote:
Hello all,
The Boost Subversion repository is now back online. All of the files in CVS (including their histories) have been imported into the Subversion repository. CVS is still available for anonymous, read- only access for now, but will not be updated.
The main Boost development branch is available via anonymous, read- only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
This is great, thanks for making this happen. I have a nit-picky question -- are we sure that RC_1_34_0 in CVS and SVN is identical? Same question about CVS HEAD and SVN trunk? The answer "no" is fine -- I guess I can run diff myself.
Information about accessing the Boost Subversion repository is available at:
So, what are the development procedures? In particular: 1. Am I free to commit to trunk as I would commit to CVS HEAD? 2. Am I free to create any work branches under /svn/boost/branches (surely, using names that clearly indicate what the branch is)? - Volodya

On Aug 1, 2007, at 12:50 AM, Vladimir Prus wrote:
I have a nit-picky question -- are we sure that RC_1_34_0 in CVS and SVN is identical? Same question about CVS HEAD and SVN trunk? The answer "no" is fine -- I guess I can run diff myself.
I didn't check yet, but I will diff them now.
Information about accessing the Boost Subversion repository is available at:
So, what are development procedures? In particular:
1. Am I free to commit to trunk as I would commit to CVS HEAD?
Yes.
2. Am I free to create any work branches under /svn/boost/branches (surely, using names that clearly indicate what the branch is)
Yes. - Doug

2007/8/1, Douglas Gregor <doug.gregor@gmail.com>:
The main Boost development branch is available via anonymous, read- only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
Will this setup work with externals? Are externals forbidden? /$

On Aug 1, 2007, at 3:51 AM, Henrik Sundberg wrote:
2007/8/1, Douglas Gregor <doug.gregor@gmail.com>:
The main Boost development branch is available via anonymous, read- only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
Will this setup work with externals?
Yes.
Are externals forbidden?
No. - Doug

on Wed Aug 01 2007, Douglas Gregor <doug.gregor-AT-gmail.com> wrote:
On Aug 1, 2007, at 3:51 AM, Henrik Sundberg wrote:
2007/8/1, Douglas Gregor <doug.gregor@gmail.com>:
The main Boost development branch is available via anonymous, read- only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
Will this setup work with externals?
Yes.
How? IIUC, the externals will need to refer to the https:// address for developers and the http:// address for everyone else... unless the https:// address is really available for read-only anonymous access also. In that case maybe we ought to encourage people to use the https:// address universally and just maintain the http:// one as a fallback for people with crazy firewalls, to ease the transition from user to developer.
Are externals forbidden?
No.
- Doug
-- Dave Abrahams Boost Consulting http://www.boost-consulting.com The Astoria Seminar ==> http://www.astoriaseminar.com

David Abrahams wrote:
on Wed Aug 01 2007, Douglas Gregor <doug.gregor-AT-gmail.com> wrote:
On Aug 1, 2007, at 3:51 AM, Henrik Sundberg wrote:
2007/8/1, Douglas Gregor <doug.gregor@gmail.com>:
The main Boost development branch is available via anonymous, read- only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
https://svn.boost.org/svn/boost/trunk/
Will this setup work with externals?
Yes.
How? IIUC, the externals will need to refer to the https:// address for developers and the http:// address for everyone else... unless the https:// address is really available for read-only anonymous access also. In that case maybe we ought to encourage people to use the https:// address universally and just maintain the http:// one as a fallback for people with crazy firewalls, to ease the transition from user to developer.
Are externals forbidden?
No.
I'd just like to mention again that I've had lots of subtle issues with externals and I think they should be discouraged or forbidden. Not the least of which is that if you are planning to move to a system that involves lots of merges, branching, and tagging, all the externals must be branched and updated individually or they will point to the same unbranched location.

Updating a working copy has a noticeable performance hit when it reaches an external.

It might be a TortoiseSVN bug, but if I have an external named B in folder A, and I remove the external link and create a folder named B inside of A, the external link has to be removed and committed on each of our developers' machines, or they have to do a clean checkout before they can successfully update again.

If the repository URL ever changes then all internal externals must be updated by hand. This means that revisions must be made on tags, which is never good practice.

True externals (links to other repositories) can go away in the blink of an eye, as you have no direct control over them, and can thus cause your checkouts/updates to fail.

Thanks,
Michael Marcin
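
To illustrate the first of these pitfalls with hypothetical paths: tag a tree that contains an external, and the external inside the tag keeps pointing at the live, unbranched location.

svn copy http://svn.example.org/repos/trunk \
         http://svn.example.org/repos/tags/1.0 -m "Tag 1.0"
svn propget svn:externals http://svn.example.org/repos/tags/1.0
  somelib http://svn.example.org/repos/trunk/somelib

The tag silently tracks trunk's copy of somelib until someone rewrites or revision-pins the property after the copy, which is the per-external manual step described above.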

Michael Marcin wrote:
I'd just like to mention again that I've had lots of subtle issues with externals and I think they should be discouraged or forbidden.
Not the least of which is that if you are planning to move to a system that involves lots of merges, branching, and tagging all the externals must be branched and updated individually or they will point to the same unbranched location.
There is a contrib script called "svncopy" which does the same thing as "svn copy", but with some extra flags for managing externals in the appropriate ways for branching and tagging. It's not a great solution but it is better than doing it manually.
If the repository URL ever changes then all internal externals must be updated by hand.
I have been able to use svn-controlled symlinks rather than "internal externals". But presumably this is not portable to Windows. I suggest that externals should be considered a mechanism of last resort.
Phil.
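
Two sketches of what Phil describes, both under stated assumptions. The svncopy script ships in the Subversion source tree's contrib/client-side directory; the flag names below are from its documentation as best I recall, so verify them before relying on them:

svncopy.pl --update-externals http://svn.example.org/repos/trunk \
           http://svn.example.org/repos/branches/my_branch
svncopy.pl --pin-externals http://svn.example.org/repos/trunk \
           http://svn.example.org/repos/tags/1.0

And the symlink alternative for "internal externals", which Subversion versions as a first-class object (hence the Windows caveat: a checkout there produces a plain file rather than a link):

ln -s ../somelib somelib
svn add somelib    # versions the link itself, not its target
svn commit -m "Replace internal external with a relative symlink"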

on Tue Jul 31 2007, Douglas Gregor <doug.gregor-AT-gmail.com> wrote:
The main Boost development branch is available via anonymous, read- only checkout at:
http://svn.boost.org/svn/boost/trunk/
Or for developer read/write access at:
Hi Doug,
Why not fix it to allow anonymous read access over https right away? If we don't do that, svn:externals will be fairly useless to us.
--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com
The Astoria Seminar ==> http://www.astoriaseminar.com

On 8/2/07, David Abrahams <dave@boost-consulting.com> wrote:
Why not fix it to allow anonymous read access over https right away? If we don't do that, svn:externals will be fairly useless to us.
That'd also help certain folks like myself behind proxy servers that do not support some of the access methods used by SVN over http, but which do work with https.
-- Caleb Epstein
participants (24)
- "JOAQUIN LOPEZ MUÑOZ"
- Ames, Andreas (Andreas)
- Anthony Williams
- Beman Dawes
- Caleb Epstein
- David Abrahams
- Doug Gregor
- Douglas Gregor
- Douglas Gregor
- Eric Niebler
- Gottlob Frege
- Henrik Sundberg
- Jigish
- Martin Wille
- Michael Marcin
- Peter Dimov
- Phil Endecott
- Phil Richards
- Rene Rivera
- Robert Ramey
- Sohail Somani
- Stefan Seefeld
- Thomas Witt
- Vladimir Prus