
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html This is a continuation of the "Proposal for radically altered approach to releases" discussion of a year ago. The topic has broadened as it has become obvious that release management is best dealt with in relation to our overall development environment, rather than something that happens in isolation. The proposal reflects a discussion at BoostCon with Troy Straszheim, Dave Abrahams, Eric Niebler, and Doug Gregor, and also insights from Thomas Witt regarding difficulties with release stability. The plan is to implement this proposal as we move to Subversion, so that things like directory permissions can be set up accordingly. Comments welcome! --Beman

Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
Here are some suggestions to promote the idea that individual Boost libraries can be released between full releases of the entire Boost set of libraries, as relating to your item "It is reasonably easy to release of subsets of Boost." These suggestions are admittedly all given from this end-user's perspective, rather than a Boost developer's perspective, but they are well-meant and are not intended to be a burden on Boost developers.

1) For all libraries, a main header file should contain a release number in some human-readable format, such as "Boost XXX Version 1.34". This is purely so that the end-user knows which version of a subset he has.

2) For non-header-only libraries, versioning should be used to generate appropriate version resources for each shared library being produced. This allows shared libraries to be installed using normal installation packaging programs, such as InstallShield, Wise, or Microsoft Install itself, as well as providing the end-user the appropriate information about the library. I do realize that library names are normally encoded with these numbers, but I still feel the internal versioning, when it exists for an operating system, should also be used as appropriate, especially as Boost Build allows end-users to turn off the encoding of library names by their versions.

3) Any subset release of a Boost library should contain a package which incorporates all subset dependencies. This ensures that any dependencies of the subset are always distributed with their latest working changes when the subset is distributed. Theoretically one would not need to do this if the dependency had not changed since the last full release, so perhaps this could be relaxed, but I see confusion in distributing subsets with such dependencies: in determining whether any changes had occurred in the dependencies, as well as confusion on the end-user's part in determining to which release the subset should be installed.
In this way, after a major Boost release, such as 1.34, subsets can be released as necessary for download as packages with numbers like 1.34.1 on up, or, if Boost sees itself reserving intermediate numbers such as those between releases, 1.34.0.1 on up.

These suggestions are made for a number of reasons. First, the enormous gap between release 1.33.1 and 1.34 meant many C++ programmers were eagerly awaiting the next release for a long time. Secondly, when a serious bug is found after a full Boost release, it may be appropriate to do a subset release to get the fix into the hands of as many end-users as possible through normal release channels (as opposed to CVS, and now Subversion, where many organizations will not go for stability reasons). Finally, when a particular subset of Boost has had major work done and tested, and end-users are eagerly awaiting the improvements, that subset can be released as a point release off of a full Boost release.

I do realize that Boost developers may not want to look at releasing subsets of the full Boost distribution as a viable means to distribute Boost, since a mismatched subset of a particular library with its dependencies can easily lead to end-user headaches unless it is done very carefully. But I think that Boost should look seriously at reducing the monolithic structure of its distribution as a greater number of libraries are added in the future, and I am very glad Beman Dawes at least suggested this in his paper.

Whoa - HUGE feature creep. I think that Beman's proposal has just the right scope and it shouldn't be expanded. In fact I think it can be simplified a little bit. I believe that with Beman's proposal:

a) releases can occur much more frequently - on the order of once/month.

b) this will obviate the need for separate library releases.

One thing I failed to mention is the version numbering scheme. Something along the following lines could be used:

boost 1.<breaking change index>.<release number>

so we (soon) have 1.34.1. Every release which doesn't break any interface would be 1.34.2, 1.34.3, ... (even the addition of a new library would just update the last digit). The next release with a breaking change would be 1.35.0. So a library user could guarantee that his application works with any 1.34.3 <= boost version < 1.35. This release version would also be available as a library call so that apps could guarantee that DLLs built with other releases can be expected to work. An application developer can also include compile-time checks so that if he moves to another Boost version, he'll know that he'll have to re-run his tests or whatever.

Of course, each library author could also have his own similar scheme for making available the library version. But this would vary depending on the library and wouldn't necessarily be a Boost-wide thing. In any case, it would be totally orthogonal to the current proposal.

Robert Ramey

Edward Diener wrote:
Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
Here are some suggestions to promote the idea that individual Boost libraries can be released between full releases of the entire Boost set of libraries, as relating to your item "It is reasonably easy to release of subsets of Boost."
These suggestions are admittedly all given from this end-user's perspective, rather than a Boost developer's perspective, but the suggestions are well-meant and are not given to be a burden on Boost developers.
....

On 03/06/07, Robert Ramey <ramey@rrsd.com> wrote:
<snip> boost 1.<breaking change index>.<release number>
so we (soon) have 1.34.1
every release which doesn't break any interface would be 1.34.2, 1.34.3 .... (even addition of a new library would just update the last digits).
the next release with a breaking change would be 1.35.0
Just one minor question: what about when new libraries are added, but there are no breaking changes? Would that matter? -- Darren

Darren Garvey wrote:
On 03/06/07, Robert Ramey <ramey@rrsd.com> wrote:
<snip> boost 1.<breaking change index>.<release number>
so we (soon) have 1.34.1
every release which doesn't break any interface would be 1.34.2, 1.34.3 .... (even addition of a new library would just update the last digits).
the next release with a breaking change would be 1.35.0
Just one minor question: what about when new libraries are added, but there are no breaking changes? Would that matter?
Not to me. I find the idea of "overloading" the release number with the "compatibility" information appealing, so that I would know that all 1.35.x releases have the same interface. Having said that, I'm generally not a fan of overloading identifiers with semantic meaning (this goes for part #s etc.), as in general it seems to eventually lead to problems. But still I continue to hope.

Hmmm, but now that I think about it, adding a new library would bump the "compatibility" or interface index, so my hope for the ability to say my app is compatible with all 1.35.x versions of Boost probably can't be supported. If 1.35.22 added a new library and I depend upon that, I would have to say that my application is compatible with Boost versions 1.35.22 - 1.35.999, which is OK but not really that much different than saying my app is compatible with versions 07/04/2008 to 12/12/2008.

Still, I like the idea of having an "interface version number" which would increment each time anything in the Boost interface is added to or modified. As a practical matter, I would hope that new Boost builds would come out fewer than 12 times a year, so we wouldn't have a HUGE number.

Robert Ramey

Robert Ramey wrote:
Whoa - HUGE feature creep.
I think that Beman's proposal has just the right scope and it shouldn't be expanded. In fact I think it can be simplified a little bit.
I believe that with beman's proposal:
a) releases can occur much more frequently - on the order of once/month.
I avoided specifying an exact schedule; we need to have the flexibility to adjust until we find a happy medium that both developers and users are comfortable with. A year ago I was talking about monthly releases. But since then the idea has taken hold that we should always maintain a stable, release-ready, branch. That makes it much less pressing that we release very quickly, so I'm wondering if quarterly releases might be less hectic.
b) this will obviate the need for separate library releases.
Yes. Regardless of the exact schedule, doing releases relatively often and on a regular schedule will obviate a lot of the need for separate releases. But, and this is an important point, different sub-communities within Boost may have their own reasons for wishing to schedule their main release asynchronously compared to the main Boost releases. And as Boost continues to grow that becomes more likely. So I really think it is useful to be able to do separate releases, even if that facility isn't used a great deal at first.
One thing I failed to mention is the version numbering scheme. Something along the following lines could be used.
boost 1.<breaking change index>.<release number>
so we (soon) have 1.34.1
every release which doesn't break any interface would be 1.34.2, 1.34.3 .... (even addition of a new library would just update the last digits).
the next release with a breaking change would be 1.35.0
So a library user could guarantee that his application works with any 1.34.3 <= boost version < 1.35. This release version would also be available as a library call so that apps could guarantee that DLLs built with other releases can be expected to work. An application developer can also include compile-time checks so that if he moves to another Boost version, he'll know that he'll have to re-run his tests or whatever.
Of course, each library author could also have his own similar scheme for making available the library version. But this would vary depending on the library and wouldn't necessarily be a Boost-wide thing. In any case, it would be totally orthogonal to the current proposal.
I'd like to think about your numbering suggestion above before commenting on the specifics. But I do think that if we are going to change how we number releases, now might be a good time to do it, and it is certainly a good time to talk about version numbering. Thanks, --Beman

On 2007-06-03, Beman Dawes <bdawes@acm.org> wrote: [about Robert Ramey's suggestion]
I'd like to think about your numbering suggestion above before commenting on the specifics. But I do think that if we are going to change how we number releases, now might be a good time to do it, and it is certainly a good time to talk about version numbering.
And, if there is an intention to change the numbering scheme and release procedure, it might be the time to start with a new major number (2.x). It would signify a clean break from the past, and would mean that there wouldn't be some arbitrary "as of version 1.34.1 boost is following the following numbering scheme". Just a thought... phil -- change name before "@" to "phil" for email

Phil Richards wrote:
On 2007-06-03, Beman Dawes <bdawes@acm.org> wrote: [about Robert Ramey's suggestion]
I'd like to think about your numbering suggestion above before commenting on the specifics. But I do think that if we are going to change how we number releases, now might be a good time to do it, and it is certainly a good time to talk about version numbering.
And, if there is an intention to change the numbering scheme and release procedure, it might be the time to start with a new major number (2.x). It would signify a clean break from the past, and would mean that there wouldn't be some arbitrary "as of version 1.34.1 boost is following the following numbering scheme".
Heh, if this is an opportunity to change the numbering, let's get rid of that 'major version' entirely, i.e. make the next release '35'. There is nothing versions 1.x and 1.y have in common for x != y, so the '1' is completely meaningless at this point. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On 2007-06-04, Stefan Seefeld <seefeld@sympatico.ca> wrote:
Phil Richards wrote:
And, if there is an intention to change the numbering scheme and release procedure, it might be the time to start with a new major number (2.x). It would signify a clean break from the past, and would mean that there wouldn't be some arbitrary "as of version 1.34.1 boost is following the following numbering scheme".
Heh, if this is an opportunity to change the numbering, let's get rid of that 'major version' entirely, i.e. make the next release '35'. There is nothing versions 1.x and 1.y have in common for x != y, so the '1' is completely meaningless at this point.
Fine, but why skip to 35? 2 is the number that comes after 1, and the fact that we are currently at 1.34 doesn't mean that 34 means *that* much. There are only going to be questions about "what happened to boost 2.x to 34.x, eh?" Or, in other words, what I said before :-) phil -- change name before "@" to "phil" for email

Phil Richards wrote:
On 2007-06-04, Stefan Seefeld <seefeld@sympatico.ca> wrote:
Heh, if this is an opportunity to change the numbering, let's get rid of that 'major version' entirely, i.e. make the next release '35'. There is nothing versions 1.x and 1.y have in common for x != y, so the '1' is completely meaningless at this point.
Fine, but why skip to 35? 2 is the number that comes after 1, and the fact that we are currently at 1.34 doesn't mean that 34 means *that* much. There are only going to be questions about "what happened to boost 2.x to 34.x, eh?"
Or, in other words, what I said before :-)
But it is the 1 that has lost meaning, not the 34. Moving from 1.34(.1) to 2, 3, etc. sounds more confusing (disruptive) than moving to 35, 36, etc. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Phil Richards wrote:
On 2007-06-04, Stefan Seefeld <seefeld@sympatico.ca> wrote:
Phil Richards wrote:
And, if there is an intention to change the numbering scheme and release procedure, it might be the time to start with a new major number (2.x). It would signify a clean break from the past, and would mean that there wouldn't be some arbitrary "as of version 1.34.1 boost is following the following numbering scheme".
Heh, if this is an opportunity to change the numbering, let's get rid of that 'major version' entirely, i.e. make the next release '35'. There is nothing versions 1.x and 1.y have in common for x != y, so the '1' is completely meaningless at this point.
Fine, but why skip to 35?
'35' because it's going to be the 35th major release of Boost. As I said in my other post - which hasn't generated much interest - I think that we're at a point where the lock-step approach is no longer feasible; that is, we can no longer afford to avoid versioning libraries by way of pretending that the leading 1. in front of the release number makes it a version. Put differently, what I'm saying is that Boost can no longer pretend to be a library instead of a compilation. A release should merely be a collection of library versions that have been tested together and known to work.

In terms of SVN structure this could look like:

trunk/
  boost/
    foo/
      foo.hpp
    bar/
      bar.hpp
  libs/
    foo/
    bar/
versions/
  foo-1.0/
    boost/
    libs/
  foo-1.1/
  foo-2.0/
  bar-1.0/
  bar-2.0/
releases/
  r35/
    boost/
    libs/

but of course other arrangements are possible.

The benefit of this approach is decentralization. The maintainer of foo only modifies trunk/boost/foo and trunk/libs/foo, and creates versions/foo-* when he's satisfied with the test results on the trunk. The release manager of r35 only copies library versions from versions/ to releases/r35, does whatever needs to be done to resolve test failures by nagging the maintainers to fix bugs in foo-2.0, creating foo-2.1 (and, failing that, picks 1.1 instead), and ships when he's satisfied with the test results on r35. It's clear who can commit, where, and when, and a release doesn't block development. It's even possible for r36 to appear before r35. :-)

Peter Dimov wrote:
Phil Richards wrote:
Fine, but why skip to 35?
'35' because it's going to be the 35th major release of Boost. ... Put differently, what I'm saying is that Boost can no longer pretend to be a library instead of a compilation.
A release should merely be a collection of library versions that have been tested together and known to work.
From a successful vendor who releases such "packages": Ubuntu Linux uses what seems to make more sense, a dated version number, the latest release being 7.04, for April 2007. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Rene Rivera wrote:
Peter Dimov wrote:
Phil Richards wrote:
Fine, but why skip to 35?
'35' because it's going to be the 35th major release of Boost. ... Put differently, what I'm saying is that Boost can no longer pretend to be a library instead of a compilation.
A release should merely be a collection of library versions that have been tested together and known to work.
From a successful vendor who releases such "packages": Ubuntu Linux uses what seems to make more sense, a dated version number, the latest release being 7.04, for April 2007.
The number itself can be arbitrary, 35 is just the logical extension of our traditional way of numbering releases. We can even use names such as gregor3 or witt2. One drawback of using the date is that you don't know it yet when you need to create the release branch.

Peter Dimov wrote:
Rene Rivera wrote:
Peter Dimov wrote:
Phil Richards wrote:
Fine, but why skip to 35?
'35' because it's going to be the 35th major release of Boost. ... Put differently, what I'm saying is that Boost can no longer pretend to be a library instead of a compilation.
A release should merely be a collection of library versions that have been tested together and known to work.
From a successful vendor who releases such "packages": Ubuntu Linux uses what seems to make more sense, a dated version number, the latest release being 7.04, for April 2007.
The number itself can be arbitrary, 35 is just the logical extension of our traditional way of numbering releases. We can even use names such as gregor3 or witt2. One drawback of using the date is that you don't know it yet when you need to create the release branch.
As proposed, there isn't any release branch. There is just a tag for the release on the "stable" branch. But that raises the question of knowing the release number in advance so it can appear in documentation, constants in headers, etc. Thus your point is well taken. --Beman

Rene Rivera wrote:
Peter Dimov wrote:
Phil Richards wrote:
Fine, but why skip to 35?
'35' because it's going to be the 35th major release of Boost. ... Put differently, what I'm saying is that Boost can no longer pretend to be a library instead of a compilation.
A release should merely be a collection of library versions that have been tested together and known to work.
From a successful vendor who releases such "packages": Ubuntu Linux uses what seems to make more sense, a dated version number, the latest release being 7.04, for April 2007.
From another successful vendor (who does not release similar collections): Perforce tags their versions with <year>.<release>, e.g. 2007.1 for the first release in 2007. In addition to this, they also use the latest change number (whatever that is in SVNish) to identify the bugfix level, e.g. 2007.1/123456.
Using such a scheme would lessen the chance of having to rename a release branch due to a missed date (as compared to the year.month approach). Simply versioning Boost by using the previous minor version as the major naturally wouldn't suffer from that problem, but for some reason that option just doesn't feel correct. / Johan

On Mon, Jun 04, 2007 at 05:37:13PM +0300, Peter Dimov wrote:
As I said in my other post - which hasn't generated much interest - I think
Very strong interest from me, at least.
that we're at a point where the lock-step approach is no longer feasible; that is, we can no longer afford to avoid versioning libraries by way of pretending that the leading 1. in front of the release number makes it a version.
Put differently, what I'm saying is that Boost can no longer pretend to be a library instead of a compilation.
CPAN is interesting to compare to. It is a dependency-managing download-build-test-install system. Such a thing is way out of scope for boost right now, but for CPAN it works well.
A release should merely be a collection of library versions that have been tested together and known to work.
In terms of SVN structure this could look like:
trunk/
  boost/
    foo/
      foo.hpp
    bar/
      bar.hpp
  libs/
    foo/
    bar/
versions/
  foo-1.0/
    boost/
    libs/
  foo-1.1/
  foo-2.0/
  bar-1.0/
  bar-2.0/
releases/
  r35/
    boost/
    libs/
but of course other arrangements are possible.
See my other post in this thread; if you keep headers and src together, they're easier to manage in svn. Also you can assemble a release (simply a list of library versions to be assembled and built together) via a list of svn:externals:

boost_1_34_0:
  iostreams  https://svn.boost.org/svn/boost/iostreams/releases/x
  date_time  https://svn.boost.org/svn/boost/date_time/releases/y
  function   https://svn.boost.org/svn/boost/function/releases/z

which are versioned in the repository, branchable, mergeable, etc. For instance, I could svn cp this boost_1_34_0 directory to my area in the sandbox and tweak it:

boost_mybranch:
  iostreams  https://svn.boost.org/svn/boost/iostreams/releases/x
  date_time  https://svn.boost.org/svn/boost/date_time/releases/y
  function   https://svn.boost.org/svn/boost/function/releases/z
  newlib     https://svn.boost.org/svn/sandbox/newlib/releases/w
  mylib      https://svn.boost.org/svn/sandbox/mylib/branches/something
The benefit of this approach is decentralization. The maintainer of foo only
It can be decentralization to the point of chaos. Some policy is necessary to get the releases out the door, but of course the release manager shouldn't be hamstrung by a repository organization that won't allow him or her to do what needs to be done...
modifies trunk/boost/foo, trunk/libs/foo and creates versions/foo-* when he's satisfied with the test results on the trunk. The release manager of r35 only copies library versions from versions/ to releases/r35, does whatever needs to be done to resolve test failures by nagging the maintainers to fix bugs in foo-2.0 creating foo-2.1 (and failing that picks 1.1 instead), and ships when he's satisfied with the test results on r35.
It's clear who can commit, where, and when, and a release doesn't block development. It's even possible for r36 to appear before r35. :-)
-t

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:466405AF.8060602@sympatico.ca...
Phil Richards wrote:
On 2007-06-03, Beman Dawes <bdawes@acm.org> wrote: [about Robert Ramey's suggestion]
I'd like to think about your numbering suggestion above before commenting on the specifics. But I do think that if we are going to change how we number releases, now might be a good time to do it, and it is certainly a good time to talk about version numbering.
And, if there is an intention to change the numbering scheme and release procedure, it might be the time to move to start with a new major number (2.x). It would signify a clean break from the past, and would mean that there wouldn't be some arbitrary "as of version 1.34.1 boost is following the following numbering scheme".
Heh, if this is an opportunity to change the numbering, let's get rid of that 'major version' entirely, i.e. make the next release '35'. There is nothing versions 1.x and 1.y have in common for x != y, so the '1' is completely meaningless at this point.
I don't agree. The <major>.<minor>.<patch> scheme has its virtues. A <major> version update occurs rarely, but it may happen - like now, for example, if we completely change the Boost structure. The next major update may be related to the complete rework with Concepts. I would prefer us to start with 2.0.0. Gennadiy

Gennadiy Rozental wrote:
I don't agree. The <major>.<minor>.<patch> scheme has its virtues.
It does, but the semantic annotation that comes with these numbers just isn't there in the context of Boost. Therefore a single number would be more honest. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:46644472.3080309@sympatico.ca...
Gennadiy Rozental wrote:
I don't agree. The <major>.<minor>.<patch> scheme has its virtues.
It does, but the semantic annotation that comes with these numbers
What annotations? We are about to make a major change in directory structure/release procedure (I hope). That is a major change. The rest is "routine" releases. Any change that doesn't affect an interface goes into the <patch> domain.
just isn't there in the context of boost. Therefor a single number would be more honest.
Gennadiy

Edward Diener wrote:
Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
Here are some suggestions to promote the idea that individual Boost libraries can be released between full releases of the entire Boost set of libraries, as relating to your item "It is reasonably easy to release of subsets of Boost."
These suggestions are admittedly all given from this end-user's perspective, rather than a Boost developer's perspective, but the suggestions are well-meant and are not given to be a burden on Boost developers.
1) For all libraries a main header file should contain a release number in some human readable format, such as "Boost XXX Version 1.34". This is purely for the purpose of the end-user knowing which version of a subset he has.
That's tricky. A file in a subset release won't necessarily correspond exactly to any version of that file that appears in a full Boost release.
2) For non-header only libraries, versioning should be used to generate appropriate version resources for each shared library being produced. This allows shared libraries to be installed using normal installation packaging programs, such as Installshield, Wise, Microsoft Install itself, as well as providing the end-user the appropriate information about the library. I do realize that libraries are normally encoded with these numbers but I still feel the internal versioning, when it exists for an operating system, should also be used as appropriate, especially as Boost Build allows end-users to turn off the encoding of library names by their versions.
I'm a little out of my depth here. Troy Straszheim has been working with a big project at his day job that does subset releases, and we are trying to leverage his experience, along with that of other Boosters who have similar experience. Perhaps some of these folks could comment on version numbering for subsets.
3) Any subset release of a Boost library should contain a package which incorporates all subset dependencies. This ensures that any dependencies of the subset are always distributed with their latest working changes when the subset is distributed. Theoretically one would not need to do this if the dependency had not changed since the last full release, so perhaps this could be relaxed, but I see confusion in distributing subsets with such dependencies in determining if any changes had occurred in the dependencies, as well as confusion on the end-user's part in determining to which release the subset should be installed.
Yes. I've been running under the assumption that "Any subset release of a Boost library should contain a package which incorporates all subset dependencies."
In this way, after a major Boost release, such as 1.34, subsets can be released as necessary for downloads as packages with numbers like 1.34.1 on up etc., or, if Boost sees itself reserving intermediate numbers such as that between releases, 1.34.0.1 on up etc.
These suggestions are made for a number of reasons. First the enormous time between release 1.33.1 and 1.34 meant many C++ programmers were eagerly awaiting the next release for a long time. Secondly when a serious bug is found after a full Boost release, it may be appropriate to do a subset release to get that bug fixed appropriately and in the hands of as many end-users as possible through normal release channels ( as opposed to CVS and now Subversion, where many organizations will not go for stability reasons ). Finally when a particular subset of Boost has major work done and tested, and end-users are eagerly waiting for the improvements to that subset, that subset can be released as a point release off of a full Boost release.
I do realize that Boost developers may not want to look at the releasing of subsets of the full Boost distribution as a viable means to distribute Boost, since a mismatched subset of a particular library with its dependencies can easily lead to end-user headaches unless it is done very carefully. But I think that Boost should look seriously at reducing the monolithic structure of its distribution as a greater number of libraries are added in the future, and I am very glad Beman Dawes at least suggested this in his paper.
My assumption is that some of the larger sub-groups within Boost (Spirit, for example) might want to release subsets, and we should be doing what we can to enable that. OTOH, I'm not seeing the full Boost group doing subsets, at least in the foreseeable future. Thanks for the comments. The issue of how to identify versions of subsets needs further discussion for sure. --Beman

Beman Dawes wrote:
Edward Diener wrote:
Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
Here are some suggestions to promote the idea that individual Boost libraries can be released between full releases of the entire Boost set of libraries, as relating to your item "It is reasonably easy to release of subsets of Boost."
These suggestions are admittedly all given from this end-user's perspective, rather than a Boost developer's perspective, but the suggestions are well-meant and are not given to be a burden on Boost developers.
1) For all libraries a main header file should contain a release number in some human readable format, such as "Boost XXX Version 1.34". This is purely for the purpose of the end-user knowing which version of a subset he has.
That's tricky. A file in a subset release won't necessarily correspond exactly to any version of that file that appears in a full Boost release.
The form of a subset release which I am envisioning has to be some "change" off of a full Boost release, and there needs to be some method of identifying that. I say that because an end-user should be able to "install" a subset release on top of the directory structure which he has for a given release of Boost, and everything will continue to work as expected. Furthermore, it might sound ambitious, but I can well imagine that a subset may have more than one release off of a major Boost release, so identifying each time that happens is important. These are the reasons that I think it is imperative to have some form of human-readable and understandable versioning which can identify each subset release for the end-user as related to a full Boost release.
2) For non-header only libraries, versioning should be used to generate appropriate version resources for each shared library being produced. This allows shared libraries to be installed using normal installation packaging programs, such as Installshield, Wise, Microsoft Install itself, as well as providing the end-user the appropriate information about the library. I do realize that libraries are normally encoded with these numbers but I still feel the internal versioning, when it exists for an operating system, should also be used as appropriate, especially as Boost Build allows end-users to turn off the encoding of library names by their versions.
I'm a little out of my depth here. Troy Straszheim has been working on a big project at his day job that does subset releases, and we are trying to leverage his experience, along with that of other Boosters who have similar experience. Perhaps some of these folks could comment on version numbering for subsets.
3) Any subset release of a Boost library should contain a package which incorporates all subset dependencies. This ensures that any dependencies of the subset are always distributed with their latest working changes when the subset is distributed. Theoretically one would not need to do this if the dependency had not changed since the last full release, so perhaps this could be relaxed, but I see confusion in distributing subsets with such dependencies in determining if any changes had occurred in the dependencies, as well as confusion on the end-user's part in determining to which release the subset should be installed.
Yes. I've been running under the assumption that "Any subset release of a Boost library should contain a package which incorporates all subset dependencies."
In this way, after a major Boost release, such as 1.34, subsets can be released as necessary for download as packages with numbers like 1.34.1 and up, or, if Boost sees itself reserving intermediate numbers like that between releases, with numbers like 1.34.0.1 and up.
These suggestions are made for a number of reasons. First, the enormous time between release 1.33.1 and 1.34 meant many C++ programmers were eagerly awaiting the next release for a long time. Secondly, when a serious bug is found after a full Boost release, it may be appropriate to do a subset release to get that bug fixed and into the hands of as many end-users as possible through normal release channels (as opposed to CVS and now Subversion, where many organizations will not go, for stability reasons). Finally, when a particular subset of Boost has had major work done and tested, and end-users are eagerly waiting for the improvements to that subset, that subset can be released as a point release off of a full Boost release.
I do realize that Boost developers may not want to look at releasing subsets of the full Boost distribution as a viable means to distribute Boost, since a mismatched subset of a particular library with its dependencies can easily lead to end-user headaches unless it is done very carefully. But I think that Boost should look seriously at reducing the monolithic structure of its distribution as a greater number of libraries are added in the future, and I am very glad Beman Dawes at least suggested this in his paper.
My assumption is that some of the larger sub-groups within Boost (Spirit, for example) might want to release subsets, and we should be doing what we can to enable that. OTOH, I'm not seeing the full Boost group doing subsets, at least in the foreseeable future.
My suggestion is that any Boost library, if the situation were important enough, could create a subset release between full Boost releases. This would solve the problem, which seems to occur quite often in Boost, where new features or serious bug fixes for a particular library are constrained to wait for a full Boost release, and the full Boost release is heavily delayed because of the huge amount of effort necessary to have everything regression-tested and co-ordinated. The idea of a subset release for a particular library and its dependencies may add more complexity to how Boost distributes itself, but it changes the monolithic nature whereby all of Boost must be ready to be released, or none of it gets released. As more libraries are added to Boost, I see such a rigid methodology as an impediment to a more flexible way of developing and distributing Boost. However, this is essentially up to the Boost developers to decide. But I think it is important to consider that certain libraries may want to get out changes and innovations which are valuable from the end-user's point of view without waiting for the next full release. Since I am an end-user and not a Boost developer, I wish Boost good luck in considering such changes.

Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
This is a very well thought-out piece. I see one minor problem with it: it depends on additional work from the developers, namely, explicit merge requests. I'm afraid that the principle of least resistance will result in work happening on -devel and -stable stagnating due to lack of merge requests. In a non-volunteer organization one will have the authority and means to enforce the procedures and prevent this, but I'm not sure about Boost.

That said, it might be a good time for us to radically rethink the structure of Boost and decouple versions from releases. Versions should be per-library, and releases should be a set of versions. This separation ought to be enforced on a directory tree level, as in:

boost/
    foo/
        foo.hpp
    bar/
        bar.hpp

with nothing else allowed in boost/. A separate compat/ directory would hold the current contents of boost/ (header name-wise), giving people a migration path.

This should probably come with a stricter dependency management approach than the current "none". Each library should list its dependencies, per-library tests should operate on a subtree formed by the dependencies and not on the full boost/ tree, and adding an additional dependency should be frowned upon. We might consider introducing "dependency levels" such that a level N library can depend on level N-1 libraries, but not vice versa.

With this organization, several releases can proceed in parallel, it being the sole responsibility of the release manager to pick the correct library subset and their versions such that the result is a stable release. More importantly, it allows developers to concentrate on improving their libraries.

Once there, one could venture in the direction that packaging a specific release should be the job of the installer and not of the release manager; that is, Boost should package individual library versions, not a monolithic 1.34.zip. The installer probably ought to also allow upgrading a single library (or adding a new library) should the user so choose.

Peter Dimov wrote:
Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
This is a very well thought-out piece. I see one minor problem with it: it depends on additional work from the developers, namely, explicit merge requests. I'm afraid that the principle of least resistance will result in work happening on -devel and -stable stagnating due to lack of merge requests.
Hmmm - I don't see that happening at all. Developers want to get their stuff to users. In fact, I expect some anxious developers will want to see their stuff merged into the stable tree before it's tested because "it's a trivial change". What I like about this proposal is that it makes developers' work easier instead of adding to it as other proposals do. Robert Ramey

Peter Dimov wrote:
Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
This is a very well thought-out piece. I see one minor problem with it: it depends on additional work from the developers, namely, explicit merge requests. I'm afraid that the principle of least resistance will result in work happening on -devel and -stable stagnating due to lack of merge requests. In a non-volunteer organization one will have the authority and means to enforce the procedures and prevent this, but I'm not sure about Boost.
The idea is that merge requests would be something as simple as sending an email to some server which is running a script that automatically checks that all criteria are met and then does the merge. Hopefully that is easy enough that developers won't put off making requests. I personally find the current situation, with very long delays between releases, to be very de-motivating. If releases are done on a more frequent basis, that will be a strong motivation to request merges. But we would certainly need to keep a close watch on the process as it gets going, and make adjustments if developers aren't requesting merges when indicated.
That said, it might be good time for us to radically rethink the structure of Boost and decouple versions from releases. Versions should be per-library, and releases should be a set of versions. This separation ought to be enforced on a directory tree level, as in:
boost/
    foo/
        foo.hpp
    bar/
        bar.hpp
with nothing else allowed in boost/. A separate compat/ directory would hold the current contents of boost/ (header name-wise), giving people a migration path.
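Such a compat/ entry could be a one-line forwarding stub; the paths below are hypothetical (assuming, for illustration, that shared_ptr moved under a smart_ptr/ library directory):

```cpp
// boost/compat/shared_ptr.hpp (hypothetical): keeps the old include path
// working after the header moves to its library's own subdirectory.
#ifndef BOOST_COMPAT_SHARED_PTR_HPP
#define BOOST_COMPAT_SHARED_PTR_HPP
#include <boost/smart_ptr/shared_ptr.hpp>  // the new per-library location
#endif
```

Existing user code keeps compiling via compat/ while new code includes the per-library path directly.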
There are additional reasons to consider at least some change in the Boost directory tree organization. I'll start a separate thread on that topic.
This should probably come with a stricter dependency management approach than the current "none". Each library should list its dependencies, per-library tests should operate on a subtree formed by the dependencies and not on the full boost/ tree, and adding an additional dependency should be frowned upon. We might consider introducing "dependency levels" such that a level N library can depend on level N-1 libraries, but not vice versa.
I independently also came to the conclusion that "per-library tests should operate on a subtree formed by the dependencies" and went so far as to write a little program to assign dependency levels to libraries. That has to be done on a library-relative basis so that cycles are dealt with properly. After some more thought, I decided that when a developer requests a test (via the "test on demand" mechanism), it should be possible to request tests on multiple libraries, so that a dependency sub-tree or any portion thereof can be tested. Rather than building dependency lists into the system (which is a fairly heavyweight approach), it might be simpler to give developers a tool to find which libraries are dependent on their library, and then leave it up to the developer how much or how little they test against the dependency tree. A developer who undertests runs the risk that a merge-into-stable request will fail, because merge-into-stable requests fail if they would cause any other library to fail.
With this organization, several releases can proceed in parallel, it being the sole responsibility of the release manager to pick the correct library subset and their versions such that the result is a stable release. More importantly, it allows developers to concentrate on improving their libraries.
Once there one could venture into the direction that packaging a specific release should be the job of the installer and not of the release manager; that is, Boost should package individual library versions, not a monolithic .34.zip. The installer probably ought to also allow upgrading a single library (or adding a new library) should the user so choose.
The larger Boost gets, the more attractive this approach gets. Are we ready for it now? I don't think so. Should we be moving in that direction? Yes! I think we should be thinking in terms of a development environment organization that will support such a parallel development and release mechanism. I'll update development_environment.html accordingly. Thanks, --Beman

Beman Dawes wrote:
After some more thought, I decided that when a developer requests a test (via the "test on demand" mechanism), it should be possible to request tests on multiple libraries, so that a dependency sub-tree or any portion thereof can be tested.
One feature I keep asking for is to allow libraries to be built and tested against an installed boost package. This means that, instead of pulling dependencies out of a boost source tree and running all prerequisite targets (and tests), the library is built and tested stand-alone. Of course the library's developer has to make sure his library is compatible with the version of boost that is to be built / tested against, but otherwise such an isolation would provide tremendous advantages in terms of modularity. (Note that this in itself is only a technical measure; it doesn't imply a change in release policy, for example, though it opens the door to such discussions. :-) ) Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Beman Dawes wrote:
Rather than building dependency lists into the system (which is a fairly heavyweight approach), it might be simpler to give developers a tool to find which libraries are dependent on their library, and then leave it up to the developer how much or how little they test against the dependency tree. A developer who undertests runs the risk that a merge-into-stable request will fail, because merge-into-stable requests fail if they would cause any other library to fail.
I had something different in mind, although upward dependencies are also a topic worth considering. My goal is to prevent people inadvertently introducing dependencies into a library. When I have declared "config assert" as dependencies of "memory" (for example), commits that #include a header from a library other than config/assert/memory should cause tests to fail (they currently do not as they are run against the entire boost tree). This doesn't need to be a heavyweight approach; a simple .dependencies text file with the contents "config assert" can do. We need more discipline in partitioning headers into libraries and there are a number of speed bumps to be hit along the way, but it's possible.

On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
The idea is that merge requests would be something as simple as sending an email to some server which is running a script that automatically checks that all criteria are met and then does the merge. [snip] After some more thought, I decided that when a developer requests a test (via the "test on demand" mechanism), it should be possible to request tests on multiple libraries, so that a dependency sub-tree or any portion thereof can be tested.
Rather than building dependency lists into the system (which is a fairly heavyweight approach), it might be simpler to give developers a tool to find which libraries are dependent on their library, and then leave it up to the developer how much or how little they test against the dependency tree. A developer who undertests runs the risk that a merge-into-stable request will fail, because merge-into-stable requests fail if they would cause any other library to fail.
That's three new tools, some of which are non-trivial to develop. All tools are non-trivial to maintain. We clearly need tools to improve the Boost development and release process. The problem is that while good tools can help the process, poor tools can hurt us even more than no tools. We can't build new tools until we've fixed or replaced the existing tools, and we can't build new tools without a solid plan for maintaining those tools. Look at the 1.34 release series... the thing that's been holding us back most of all is that the testing and test reporting tools are broken. 1.34.1 is stalled because we have failures on one platform, but nobody can see what those failures actually are: the test reporting system removed all of the important information. I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist. I hypothesize that the vast majority of the problems with our release process would go away without a single change to our process, if only we had a robust testing system. We have only so much volunteer time we can spend. At this point, I think our best bet is to spend it making the regression testing infrastructure work well; then we can move on to a new process with its new tools. - Doug

Douglas Gregor wrote:
Look at the 1.34 release series... the thing that's been holding us back most of all is that the testing and test reporting tools are broken. 1.34.1 is stalled because we have failures on one platform, but nobody can see what those failures actually are: the test reporting system removed all of the important information.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist. I hypothesize that the vast majority of the problems with our release process would go away without a single change to our process, if only we had a robust testing system. We have only so much volunteer time we can spend. At this point, I think our best bet is to spend it making the regression testing infrastructure work well; then we can move on to a new process with its new tools.
I know I'm asking for the moon, but it would be really nice if developers had access to a compiler farm like the one SourceForge has, but for all compilers that Boost uses. I know there are legal impediments, but such a thing, if plausible at all, would alleviate most of our testing woes. If something breaks, we can test it ourselves, see the results immediately, and fix it immediately, instead of having to wait a day or two for a full test suite to complete. The current system is like having a compiler that takes 2 days to compile. 2c worth... but I thought I'd say it again :P Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman <joel <at> boost-consulting.com> writes: [...]
I know I'm asking for the moon, but it would be really nice if developers had access to a compiler farm like the one SourceForge has, but for all compilers that Boost uses. I know there are legal impediments, but such a thing, if plausible at all, would alleviate most of our testing woes. If something breaks, we can test it ourselves, see the results immediately, and fix it immediately, instead of having to wait a day or two for a full test suite to complete. The current system is like having a compiler that takes 2 days to compile.
2c worth... but I thought I'd say it again :P
That was my personal plan. Then someone else won the national lottery ;-) Cheers, Nicola Musatti

Hi, Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
I was going to write this email, but Doug beat me to it. <snip/>
Look at the 1.34 release series... the thing that's been holding us back most of all is that the testing and test reporting tools are broken. 1.34.1 is stalled because we have failures on one platform, but nobody can see what those failures actually are: the test reporting system removed all of the important information.
From my point of view Doug is spot on. The proposal seems to assume infinite resources in testing. The reality is we cannot even test one branch reliably, and this despite considerable effort by a number of people. With the current setup the process outlined is unworkable. As an example of how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for an X-mas release date.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist. I hypothesize that the vast majority of the problems with our release process would go away without a single change to our process, if only we had a robust testing system.
Agreed. Let's build the foundations first.
We have only so much volunteer time we can spend.
<rant> It always strikes me as odd that we spend many man-hours discussing the process while we spend very little on fixing bugs. In my experience the man-hours available for bug-fixing are severely limited. I want to explicitly exclude Beman here. He fixed the outstanding bugs _and_ spent the time for the paper. This is not the norm, though. </rant>
At this point, I think our best bet is to spend it making the regression testing infrastructure work well; then we can move on to a new process with its new tools.
From my experience this is the only promising way forward. You can also rephrase it as: "Let's stabilize something before we destabilize everything." We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen. I strongly urge us to do something simple and restricted in scope first. That will give the biggest bang for the buck. Thomas -- Thomas Witt witt@acm.org

"Thomas Witt" <witt@acm.org> wrote in message news:f41nhr$u26$1@sea.gmane.org...
Hi,
Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
I was going to write this email, but Doug beat me to it.
<snip/>
Look at the 1.34 release series... the thing that's been holding us back most of all is that the testing and test reporting tools are broken. 1.34.1 is stalled because we have failures on one platform, but nobody can see what those failures actually are: the test reporting system removed all of the important information.
From my point of view Doug is spot on. The proposal seems to assume infinite resources in testing. The reality is we cannot even test one branch reliably, and this despite considerable effort by a number of people. With the current setup the process outlined is unworkable.
What practical steps do you see to increase the test system stability?
As an example of how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for an X-mas release date.
IMO, this is not a responsibility of the release manager *whatsoever*. The release manager has to release only the stuff that is already tested. No testing, no merging. If I want my changes to be released, I will invest time into making sure they pass all the tests.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist. I hypothesize that the vast majority of the problems with our release process would go away without a single change to our process, if only we had a robust testing system.
The robustness of the testing system will increase marginally as soon as no one is able to break someone else's code without explicit permission from that someone.
Agreed. Lets build the foundations first.
We have only so much volunteer time we can spend.
<rant>
It always strikes me as odd that we spend many man-hours discussing the process while we spend very little on fixing bugs. In my experience the man-hours available for bug-fixing are severely limited. I want to explicitly exclude Beman here. He fixed the outstanding bugs _and_ spent the time for the paper. This is not the norm, though.
No one can force developers to do volunteer work. IMO the system should not depend on any particular developer's schedule. There should always be a stable version of each particular component that others can depend on.
</rant>
At this point, I think our best bet is to spend it making the regression testing infrastructure work well; then we can move on to a new process with its new tools.
From my experience this is the only promising way forward. You can also rephrase it as: "Let's stabilize something before we destabilize everything." We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen.
I would not be that pessimistic. Directory restructuring is simple enough with svn. What are the problems you envision?
I strongly urge us to do something simple and restricted in scope first. That will give the biggest bang for the buck.
The problem is to decide on what is simple and how it will help. Do you have something specific in mind? Gennadiy

on Mon Jun 04 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:
As an example of how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for an X-mas release date.
IMO, this is not a responsibility of the release manager *whatsoever*. The release manager has to release only the stuff that is already tested. No testing, no merging. If I want my changes to be released, I will invest time into making sure they pass all the tests.
I agree that testing and merging should be taken off the table as concerns for the release manager. IIUC, Beman's proposal does that. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

Hi, David Abrahams wrote:
on Mon Jun 04 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:
As an example of how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for an X-mas release date. IMO, this is not a responsibility of the release manager *whatsoever*. The release manager has to release only the stuff that is already tested. No testing, no merging. If I want my changes to be released, I will invest time into making sure they pass all the tests.
This argument is bogus. Eventually somebody has to decide what goes in and what does not. Whether it's going into a release-ready branch or a release does not matter. Both need oversight and, to some degree, coordination. That function has to be centralized; it can't be done by single developers. Whether we call that person release manager or not is of little importance to me. In case of a point release it has to be the release manager who coordinates patches.
I agree that testing and merging should be taken off the table as concerns for the release manager.
Let me add a little detail here. It's not the testing and merging as such that is a big burden. Currently merging works pretty well for me. I control the workload by simply asking people to apply *approved* patches for me. I think for a point release the release manager always needs control over what is merged. This part of the job can't be taken off the table. The problem with testing is not monitoring test results. The problem is managing the testing itself. I.e. making sure tests run as needed, making sure only the "right" people submit release tests, debugging bogus results, monitoring the reporting, in case of a genuine regression tracking it to a specific change. These things need to be taken off the table. Most of this can be achieved by fixing the regression test system.
IIUC, Beman's proposal does that.
I think the proposed ideas are worthwhile pursuing, after we dealt with the core problem. Under the assumption of a working regression test system the proposal makes a lot of sense, we just have to make that assumption hold. Currently it does not. Thomas -- Thomas Witt witt@acm.org

"Thomas Witt" <witt@acm.org> wrote in message news:f46pqe$e0f$1@sea.gmane.org...
Hi,
David Abrahams wrote:
on Mon Jun 04 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:
As an example of how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for an X-mas release date. IMO, this is not a responsibility of the release manager *whatsoever*. The release manager has to release only the stuff that is already tested. No testing, no merging. If I want my changes to be released, I will invest time into making sure they pass all the tests.
This argument is bogus. Eventually somebody has to decide what goes in and what does not. Whether it's going in a release ready branch or a release does not matter. Both need oversight and to some degree coordination.
No coordination is required, at least no coordination with the release manager. Two developers may need to coordinate their efforts to release new versions of their libs. I am about to post my proposal. You should see it there.
That function has to be centralized it can't be done by single developers.
Why?
Whether we call that person release manager or not is of little importance to me.
In case of a point release it has to be the release manager that coordinates patches.
I strongly disagree. The release manager doesn't need to know anything about patches at all. We might just have different views on what the Boost release procedure is (or should be).
I agree that testing and merging should be taken off the table as concerns for the release manager.
Let me add a little detail here. It's not the testing and merging as such that is a big burden. Currently merging works pretty well for me. I control the workload by simply asking people to apply *approved* patches for me. I think for a point release the release manager always needs control over what is merged. This part of the job can't be taken off the table.
It can be and it should be. A release can't wait for me to commit/merge even a single byte.
The problem with testing is not monitoring test results. The problem is managing the testing itself. I.e. making sure tests run as needed, making sure only the "right" people submit release tests, debugging bogus results, monitoring the reporting, in case of a genuine regression tracking it to a specific change. These things need to be taken off the table. Most of this can be achieved by fixing the regression test system.
IMO this is orthogonal to the Boost release procedures (or it should be). Gennadiy

Gennadiy Rozental wrote:
It can be and it should be. A release can't wait for me to commit/merge even a single byte.
This is true in so many ways :(
The problem with testing is not monitoring test results. The problem is managing the testing itself. I.e. making sure tests run as needed, making sure only the "right" people submit release tests, debugging bogus results, monitoring the reporting, in case of a genuine regression tracking it to a specific change. These things need to be taken off the table. Most of this can be achieved by fixing the regression test system.
IMO this is orthogonal to the Boost release procedures (or it should be).
No, it's not. It's at the core. No procedure in the world will help you when you have no way of telling what the status of your project is. It's like having the best possible map with no way to tell where you are. It's useless. Thomas -- Thomas Witt witt@acm.org

"David Abrahams" <dave@boost-consulting.com> wrote in message news:87r6opl1ht.fsf@grogan.peloton...
on Mon Jun 04 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:
As an example of how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for an X-mas release date.
IMO, this is not a responsibility of the release manager *whatsoever*. The release manager has to release only the stuff that is already tested. No testing, no merging. If I want my changes to be released, I will invest time into making sure they pass all the tests.
I agree that testing and merging should be taken off the table as concerns for the release manager. IIUC, Beman's proposal does that.
IIUC, not completely. Gennadiy

Thomas Witt wrote:
Hi,
Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
I was going to write this email, but Doug beat me to it.
And I guess you both beat me to it... as I was busy spending all my free time trying to fix bugs for 1.34.1. Although what's below is not the full extent of my thoughts on the release procedure...
The proposal seems to assume infinite resources in testing.
AFAICT it also mandates expanding the testing and release management tool pipeline. And this is something we just don't have the resources to implement at this time, and likely won't have in the next 6 months. In this respect I find the proposal contradictory: it both says that the tool chain needs to be simplified, at the cost of features, and calls for more tools.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist.
It also pre-supposes a "stable" starting point for ongoing releases. First, 1.34.1 will not be such a release. Second, it will take at least 6 months to make a clean and stable release, and that's without adding new libraries. Third, IMO making a clean, stable, robust 1.35 following the proposal would take more than a year.
Agreed. Let's build the foundations first.
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen.
There are two other aspects to 1.35.0 that I'm trying to address. In another thread, I raised the question of svn dir structure. And it devolved into the same issues this thread devolved to: discussing how to split the sources up as much as possible based on libraries. This is fine, but it doesn't get us any closer to managing the structure we currently have. We need to concentrate on making this simpler first! Which brings up the second item, the website. One of the simplifications for releases is to separate the website content from the release itself. (That was my rant.) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46646B35.5050709@gmail.com...
Thomas Witt wrote:
Hi,
Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
I was going to write this email, but Doug beat me to it.
And I guess you both beat me to it... as I was busy spending all my free time trying to fix bugs for 1.34.1. Although what's below is not the full extent of my thoughts on the release procedure...
The proposal seems to assume infinite resources in testing.
Which particular part?
AFAICT it also mandates expanding the testing and release management tool pipeline. And this is something we just don't have the resources to implement at this time, and likely won't have in the next 6 months. In this respect I find the proposal contradictory: it both says that the tool chain needs to be simplified, at the cost of features, and calls for more tools.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist.
It also pre-supposes a "stable" starting point for ongoing releases. First, 1.34.1 will not be such a release. Second, it will take at least 6 months to make a clean and stable release, and that's without adding new libraries. Third, IMO making a clean, stable, robust 1.35 following the proposal would take more than a year.
Can we get straight to the point? What is required to make a stable release? (A complete list.) Why is 1.34.0 not stable?
We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen.
There are two other aspects to 1.35.0 that I'm trying to address. In another thread, I raised the question of svn dir structure. And it devolved into the same issues this thread devolved to: discussing how to split the sources up as much as possible based on libraries. This is fine, but it doesn't get us any closer to managing the structure we currently have. We need to concentrate on making this simpler first!
I believe splitting the directory structure will make our life much simpler in many respects. What complications do you see?
Which brings up the second item, the website. One of the simplifications for releases is to separate the website content from the release itself. (That was my rant.)
Yes. I believe this is the way to go. Gennadiy

Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46646B35.5050709@gmail.com...
Thomas Witt wrote:
Hi,
Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote: I was going to write this email, but Doug beat me to it. And I guess you both beat me to it... as I was busy spending all my free time trying to fix bugs for 1.34.1. Although what's below is not the full extent of my thoughts on the release procedure...
The proposal seems to assume infinite resources in testing.
Which particular part?
On-demand testing, testing of the breaking-stable branch, continuous testing of the stable branch, all with high-availability and high-response. Currently we can only manage partial testing of *1* branch, in one build variation. And now we are talking about testing at least three branches at once.
Can we get straight to the point?
What is required to make a stable release? (A complete list.) Why is 1.34.0 not stable?
Complete? Interesting thought :-) I can't say I have such a complete list. But perhaps this will give you an idea:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
* The inspection reports 193 non-license problems, and *1059* license problems.
* We don't test the build and install process.
* We don't test libraries against an installed release.
* We don't test release versions, even though this is the most used variant by users.
* We don't test, to any effective means, 64-bit architectures.
* We don't test, to any effective means, multi-CPU architectures.
I believe splitting the directory structure will make our life much simpler in many respects. What complications do you see?
It increases the number of combinations that need testing. And it complicates the build and testing infrastructure. Both of which increase the likelihood of instability. -- Rene

Rene Rivera wrote:
On-demand testing, testing of breaking-stable branch, continuous testing of stable branch, all with high-availability and high-.
Oops... "...and high-response." -- Rene

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46647897.5030204@gmail.com...
The proposal seems to assume infinite resources in testing.
Which particular part?
On-demand testing, testing of the breaking-stable branch, continuous testing of the stable branch, all with high-availability and high-response. Currently we can only manage partial testing of *1* branch, in one build variation. And now we are talking about testing at least three branches at once.
My solution doesn't require ANY of that. Let me repeat: NONE. Well, high-availability/quick response would be nice, but it's optional; it can be done at a later stage. Every library is tested against a particular set of dependencies selected by the developer. But only *one* per lib. It does require additional disk space for a source tree copy. I don't believe that's a major requirement these days.
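To make the model concrete, here is a tiny sketch of the "one pinned dependency set per library" idea. Everything in it is invented for illustration (the library name, the revision numbers, the .deps file format); it is not an existing Boost mechanism, only the shape of what Gennadiy describes:

```shell
# Hypothetical illustration: each library records the single dependency
# snapshot it is tested against (names and revisions are made up).
dir=$(mktemp -d)
cat > "$dir/filesystem.deps" <<'EOF'
config r1201
system r1190
EOF

# A tester checks out the library at HEAD plus exactly these pinned
# dependency revisions -- a single combination per library, not a matrix.
plan=$(while read dep rev; do
    echo "checkout $dep at $rev"   # stand-in for a real 'svn co -r' command
done < "$dir/filesystem.deps")
echo "$plan"
```

The point of the sketch is only that, under this model, testing load grows linearly with the number of libraries rather than combinatorially with the number of branch combinations.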
Can we get straight to the point?
What is required to make a stable release? (A complete list.) Why is 1.34.0 not stable?
Complete? Interesting thought :-) I can't say I have such a complete list. But perhaps this will give you an idea:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
I see only 6 bugs assigned to 1.34.1. To be frank with you, I don't see why we need to hurry to release them.
* The inspection reports 193 non-license problems, and *1059* license problems.
This is not a showstopper IMO. 1.34.0 is in the same state, isn't it?
* We don't test the build and install process.
What do you want to test? In any case it doesn't make the release "unstable".
* We don't test libraries against an installed release.
What do you mean?
* We don't test release versions, even though this is the most used variant by users.
We shouldn't be doing this at all IMO. NO testing during release.
* We don't test, to any effective means, 64 bit architectures.
* We don't test, to any effective means, multi-cpu architectures.
Would be nice... in future releases. But it doesn't make the current one unstable.
I believe splitting the directory structure will make our life much simpler in many respects. What complications do you see?
It increases the number of combinations that need testing. And in complicates the build and testing infrastructure. Both of which increase the likelihood of instability.
No, they don't. We are going to be testing a *single* combination per library. Let me clarify again: do you believe 1.34.0 can't be used as a stable starting point? If so, why? Gennadiy

Gennadiy Rozental wrote:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
I see only 6 bugs assigned to 1.34.1. To be frank with you, I don't see why we need to hurry to release them.
I'm not sure I understand the question. Why are these bugs assigned to 1.34.1? Because they are regressions / showstoppers that have the highest priority. Or why aren't people focusing on 1.35 now? (I guess because it's always more fun to focus on features and not bug fixes.) Etc.
* We don't test the build and install process.
What do you want to test? In any case it doesn't make the release "unstable".
The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
* We don't test libraries against an installed release.
What do you mean?
For the sake of modularity (for example), take a Boost library X and test it in isolation, against its prerequisite Boost libraries as installed, not as part of the same source tree.
* We don't test release versions, even though this is the most used variant by users.
We shouldn't be doing this at all IMO. NO testing during release.
You lost me. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4664821B.5060506@sympatico.ca...
Gennadiy Rozental wrote:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
I see only 6 bugs assigned to 1.34.1. To be frank with you, I don't see why we need to hurry to release them.
I'm not sure I understand the question. Why are these bugs assigned to 1.34.1? Because they are regressions / showstoppers that have the highest priority.
284  pool::purge_memory() does not reset next_size | assigned | shammah | Bugs | Boost 1.34.1 | None
958  [doc] The "Getting Started" page mentions incorrect library names | new | dave | Bugs | Boost 1.34.1 | Building Boost
964  [filesystem] missing documentation or bad links | new | -- | Bugs | Boost 1.34.1 | filesystem
965  [doc] boost::variant tutorial - final example uses v1,v2 should be seq1,seq2 | assigned | ebf | Bugs | Boost 1.34.1 | variant
982  x64 documentation for windows | new | -- | Support Requests | Boost 1.34.1 | Building Boost
991  [pool] | new | -- | Bugs | Boost 1.34.1 | None
Which of these are showstoppers?
Or why aren't people focusing on 1.35 now? (I guess because it's always more fun to focus on features and not bug fixes.) Etc.
Why don't we fix'em in 1.35?
* We don't test the build and install process.
What do you want to test? In any case it doesn't make the release "unstable".
The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
I guess it would be nice. Do you know how to implement these tests in practice (I mean, without human involvement)? In any case, this problem is separate and doesn't make the Boost source code unstable, which was the point of this list.
* We don't test libraries against an installed release.
What do you mean?
For the sake of modularity (for example), take a Boost library X and test it in isolation, against its prerequisite Boost libraries as installed, not as part of the same source tree.
Again, I don't see how it is related to the stability of the Boost source code, but my approach supports this naturally.
* We don't test release versions, even though this is the most used variant by users.
We shouldn't be doing this at all IMO. NO testing during release.
You lost me.
That's the problem. No one seems to make an effort to read what I propose. My solution assumes that no testing is done during release, none whatsoever. Only components that are already tested and individually released by developers will go into the umbrella Boost release. Gennadiy

Gennadiy Rozental wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4664821B.5060506@sympatico.ca...
Gennadiy Rozental wrote:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1. I see only 6 bugs assigned to 1.34.1. To be frank with you, I don't see why we need to hurry to release them. I'm not sure I understand the question. Why are these bugs assigned to 1.34.1? Because they are regressions / showstoppers that have the highest priority.
284  pool::purge_memory() does not reset next_size | assigned | shammah | Bugs | Boost 1.34.1 | None
958  [doc] The "Getting Started" page mentions incorrect library names | new | dave | Bugs | Boost 1.34.1 | Building Boost
964  [filesystem] missing documentation or bad links | new | -- | Bugs | Boost 1.34.1 | filesystem
965  [doc] boost::variant tutorial - final example uses v1,v2 should be seq1,seq2 | assigned | ebf | Bugs | Boost 1.34.1 | variant
982  x64 documentation for windows | new | -- | Support Requests | Boost 1.34.1 | Building Boost
991  [pool] | new | -- | Bugs | Boost 1.34.1 | None
Which of these are showstoppers?
I have no clue. I didn't assign them to 1.34.1.
Or why aren't people focusing on 1.35 now? (I guess because it's always more fun to focus on features and not bug fixes.) Etc.
Why don't we fix'em in 1.35?
Because that is mere terminology. The question is not what we call the next release, but what to focus on now. Should the scope be regressions / bug fixes, or new features? (It is evident that for us developers new features are always more appealing. But that's not how we build a reputation for high-quality software.)
* We don't test the build and install process. What do you want to test? In any case it doesn't make the release "unstable". The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
I guess it would be nice. Do you know how to implement these tests in practice (I mean, without human involvement)?
As Rene and Doug point out, the first thing to do is run the same set of tests that are already there, but not against a full Boost source tree; rather, against an installed Boost version. That lets us detect errors introduced during installation.
That's the problem. No one seems to make an effort to read what I propose. My solution assumes that no testing is done during release, none whatsoever. Only components that are already tested and individually released by developers will go into the umbrella Boost release.
OK, that's an interesting point, but it only displaces the problem. Then we need to discuss the release procedure of those components, and talk about how these are stabilized. Etc. Regards, Stefan

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4664D6EE.5060106@sympatico.ca...
Why don't we fix'em in 1.35?
Because that is mere terminology. The question is not what we call the next release, but what to focus on now. Should the scope be regressions / bug fixes, or new features? (It is evident that for us developers new features are always more appealing. But that's not how we build a reputation for high-quality software.)
I don't really want to go into what is more important and how we need to build something. In my proposal there is not much space for "we" at all. Every individual developer has to make decisions independently. We (as a community) can't wait for someone to fix a bug if that someone has opted to work on something else first. And we (as a community of volunteers) do not have much leverage to enforce it. So my position is: STOP thinking about Boost as a whole. We can't fight for world peace. We accept a library if at the time of the review we find it worth it. We (or rather some of us) provide facilities for the developer to maintain the library. If a library is a "good citizen" we present it as part of the approved Boost libs set. If a library is not maintained it's either frozen in some stable state or dropped altogether. The only thing we can enforce: you can't break some other Boost lib (good citizen, remember!). Otherwise you're on your own.
* We don't test the build and install process. What do you want to test? In any case it doesn't make the release "unstable". The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
I guess it would be nice. Do you know how to implement these tests in practice (I mean, without human involvement)?
As Rene and Doug point out, the first thing to do is run the same set of tests that are already there, but not against a full Boost source tree; rather, against an installed Boost version. That lets us detect errors introduced during installation.
I don't believe inventing new tests we have to run is our priority here. I don't argue that it can be useful, but it's definitely a "later stage" item. And another point: IMO we can't practically test everything that is useful. Let's say we start releasing subsets. Do we want to check how tests behave if a particular subset is installed? Two intersecting subsets? Two independent subsets? We need to draw a line somewhere between what is useful and what is required. If the regression testing setup is made easy enough, hopefully we'll get some volunteers to run some "custom" useful tests.
That's the problem. No one seems to make an effort to read what I propose. My solution assumes that no testing is done during release, none whatsoever. Only components that are already tested and individually released by developers will go into the umbrella Boost release.
OK, that's an interesting point, but only displaces the problem. Then we
What problem?
need to discuss the release procedure of those components, and talk about how these are stabilized. Etc.
Yes. That's what the second part of my proposal deals with. Gennadiy

Gennadiy Rozental wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4664821B.5060506@sympatico.ca...
Gennadiy Rozental wrote:
284  pool::purge_memory() does not reset next_size | assigned | shammah | Bugs | Boost 1.34.1 | None
958  [doc] The "Getting Started" page mentions incorrect library names | new | dave | Bugs | Boost 1.34.1 | Building Boost
964  [filesystem] missing documentation or bad links | new | -- | Bugs | Boost 1.34.1 | filesystem
965  [doc] boost::variant tutorial - final example uses v1,v2 should be seq1,seq2 | assigned | ebf | Bugs | Boost 1.34.1 | variant
982  x64 documentation for windows | new | -- | Support Requests | Boost 1.34.1 | Building Boost
991  [pool] | new | -- | Bugs | Boost 1.34.1 | None
Which of these are showstoppers?
None. They don't have to be. They are likely to be low-risk changes that improve quality. That's what 1.34.1 is about. Thomas -- Thomas Witt witt@acm.org

Gennadiy Rozental wrote:
* We don't test release versions, even though this is the most used variant by users. We shouldn't be doing this at all IMO. NO testing during release. You lost me.
Stefan is talking about testing release (rather than debug) builds of the release branch, and Gennadiy is talking about testing the release branch (regardless of debug/release build). See http://video.google.com/videoplay?docid=-9180512665135657036 ("We are sinking")
That's the problem. No one seems to make an effort to read what I propose. My solution assumes that no testing is done during release, none whatsoever. Only components that are already tested and individually released by developers will go into the umbrella Boost release.
That's what I'm proposing, too:-) And, yes, in theory it shouldn't be necessary to explicitly test the release branch. In practice, stuff happens, so we will still need to test it. The debug vs. release test issue is probably worth looking at again once we get going. --Beman

"Beman Dawes" <bdawes@acm.org> wrote in message news:f47tn9$s20$1@sea.gmane.org...
That's the problem. No one seems to make an effort to read what I propose. My solution assumes that no testing is done during release, none whatsoever. Only components that are already tested and individually released by developers will go into the umbrella Boost release.
That's what I'm proposing, too:-)
And, yes, in theory it shouldn't be necessary to explicitly test the release branch. In practice, stuff happens, so we will still need to test it.
Can you explain for me: 1. What is the release branch, and why do we need this notion? 2. What stuff can happen such that we will have to do testing during release? Gennadiy

Stefan Seefeld wrote:
Gennadiy Rozental wrote:
The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
FWIW, it's not the packaging, it's the install that gives us trouble. Thomas PS: Watch your language. -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Stefan Seefeld wrote:
The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all.
PS: Watch your language.
re·tard 1 (rĭ-tärd') Pronunciation Key v. re·tard·ed, re·tard·ing, re·tards v. tr. To cause to move or proceed slowly; delay or impede. v. intr. To be delayed. -- Eric Niebler Boost Consulting www.boost-consulting.com

Eric Niebler wrote:
re·tard 1 (rĭ-tärd') Pronunciation Key v. re·tard·ed, re·tard·ing, re·tards
v. tr. To cause to move or proceed slowly; delay or impede.
v. intr. To be delayed.
Point taken. Crawling-under-a-rock-ly yours Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Stefan Seefeld wrote:
Gennadiy Rozental wrote:
The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
FWIW, it's not the packaging it's the install that gives us trouble.
Thomas
PS: Watch your language.
Sorry Thomas, it didn't even cross my mind that 'retarded' had those two meanings (not to mention that my use was the less frequent one!). I may be too much influenced by Quebec culture. ;-) Regards, Stefan

Stefan Seefeld wrote:
* We don't test the build and install process. What do you want to test? In any case it doesn't make the release "unstable"
The release (well, in fact, packaging) process was retarded because a substantial number of bugs only turned up during that very last phase, simply because that wasn't tested at all. Had packaging (etc.) been part of the regular testing procedure, those bugs wouldn't have been present at that point in the release process.
Thomas Witt has made the point strongly that a number of serious problems getting the release ready had to do with packaging, documentation, and other issues that are not currently covered by testing. So, yes, let's try to increase "testing" to cover a broader spectrum of problems. That will have to evolve over time, and in a way that integrates with our more traditional testing. The inspection reports are a start. Let's see what else we can come up with! Thanks, --Beman

Beman Dawes wrote:
Thomas Witt has made the point strongly that a number of serious problems getting the release ready had to do with packaging, documentation, and other issues that are not currently covered by testing.
So, yes, let's try to increase "testing" to cover a broader spectrum of problems. That will have to evolve over time, and in a way that integrates with our more traditional testing.
The inspection reports are a start. Let's see what else we can come up with!
It would be a good thing if the installation procedure were tested. A release manager must feel uncomfortable knowing that the code might be OK, but not knowing whether the software can actually be installed. (IIUC, this is one of the reasons for having release candidates.) A second step would be to test whether the installed versions behave as intended (e.g. regarding auto-linking). Regards, m

On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
Every library is tested against a particular set of dependencies selected by the developer. But only *one* per lib. It does require additional disk space for a source tree copy. I don't believe that's a major requirement these days.
I thought that too, but you are wrong. One of the most common failures with regression testers is that they run out of hard drive space, because testing Boost... just a single tree... requires tens of gigabytes.
* We don't test the build and install process.
What do you want to test? In any case it doesn't make the release "unstable".
In an ideal world, we would:
(1) Build all of Boost, as a user would
(2) Install Boost
(3) Build tests against the installed Boost, then
(4) Run those tests
* We don't test release versions, even though this is the most used variant by users.
We shouldn't be doing this at all IMO. NO testing during release.
I believe Rene means the "release" variant, i.e., with optimizations turned on. This also saves a *ton* of disk space. Also, testing with shared libraries rather than static saves a lot of disk space. - Doug

Douglas Gregor wrote:
On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
* We don't test release versions, even though this is the most used variant by users. We shouldn't be doing this at all IMO. NO testing during release.
I believe Rene means the "release" variant, i.e., with optimizations turned on. This also saves a *ton* of disk space. Also, testing with shared libraries rather than static saves a lot of disk space.
Yes, I mean building and testing with optimizations on. Most likely we would want to build the profile variant so that we could get both optimizations and debug symbols. But that introduces the large disk space requirements again. My point is that we currently *only* test what is useful to library authors. And we essentially give users the cold shoulder. -- Rene

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46649B3C.4060204@gmail.com...
Douglas Gregor wrote:
On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
* We don't test release versions, even though this is the most used variant by users. We shouldn't be doing this at all IMO. NO testing during release.
I believe Rene means the "release" variant, i.e., with optimizations turned on. This also saves a *ton* of disk space. Also, testing with shared libraries rather than static saves a lot of disk space.
Yes, I mean building and testing with optimizations on. Most likely we would want to build the profile variant so that we could get both optimizations and debug symbols. But that introduces the large disk space requirements again. My point is that we currently *only* test what is useful to library authors. And we essentially give users the cold shoulder.
From what I understand there is no problem adding these tests to the test suite. No "boost-wide" decision is required. We may just add an encouraging statement to the testing procedures docs.
Gennadiy

Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46649B3C.4060204@gmail.com...
Douglas Gregor wrote:
On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
* We don't test release versions, even though this is the most used variant by users. We shouldn't be doing this at all IMO. NO testing during release. I believe Rene means the "release" variant, i.e., with optimizations turned on. This also saves a *ton* of disk space. Also, testing with shared libraries rather than dynamic saves a lot of disk space. Yes, I mean building and testing with optimizations on. Most likely we would want to build the profile variant so that we could get both optimizations and debug symbols. But that introduces the large disk space requirements again. My point is that we currently *only* test what is useful to library authors. And we essentially give users the cold shoulder.
From what I understand there is no problem adding these tests to the test suite.
Your understanding is incorrect.
No "boost-wide" decision is required.
We need to decide that the optimized variant is a release requirement. Then we need to acquire testing resources for each platform we support. Then we need to manage the testing resources to cover both debug and release for all platforms such that we get timely testing results. And we need to ensure that library authors fix all the places where they rely on testing only in debug mode. We've gone over this before, so I suggest people search the testing and dev list archives.
We may just add an encouraging statement to the testing procedures docs.
Encouraging isn't enough. We have to explicitly ask if we want anything approaching a reliable procedure. -- Rene

Rene Rivera wrote:
Gennadiy Rozental wrote:
From what I understand there is no problem adding these tests to the test suite.
Your understanding is incorrect.
Hm, I guess I forgot to mention: which testing variant is tested is *not* under the control of library authors. It is controlled by testers, with the guidance of the release manager. -- Rene

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A916.8020504@gmail.com...
Rene Rivera wrote:
Gennadiy Rozental wrote:
From what I understand, there are no problems adding these tests to the test suite.
Your understanding is incorrect.
Hm, I guess I forgot to mention. Which testing variant is tested is *not* under the control of library authors. It is controlled by testers with the guidance of the release manager.
Don't I have the ability to specify the build variant in the run rule's requirements? Gennadiy

Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A916.8020504@gmail.com...
Rene Rivera wrote:
Gennadiy Rozental wrote:
From what I understand, there are no problems adding these tests to the test suite. Your understanding is incorrect. Hm, I guess I forgot to mention: which testing variant is tested is *not* under the control of library authors. It is controlled by testers with the guidance of the release manager.
Don't I have the ability to specify the build variant in the run rule's requirements?
Sure, you could. But it would not have an effect on whether the release variant is tested, only which of your tests get run. For example, if you changed all your tests to require the release variant, none of your lib would get tested. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664B0FB.8000501@gmail.com...
Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A916.8020504@gmail.com...
Rene Rivera wrote:
Gennadiy Rozental wrote:
From what I understand, there are no problems adding these tests to the test suite. Your understanding is incorrect. Hm, I guess I forgot to mention: which testing variant is tested is *not* under the control of library authors. It is controlled by testers with the guidance of the release manager.
Don't I have the ability to specify the build variant in the run rule's requirements?
Sure, you could. But it would not have an effect on whether the release variant is tested, only which of your tests get run. For example, if you changed all your tests to require the release variant, none of your lib would get tested.
So you are saying that there is a separate entity that decides which tests from my test suite get run? Hmm, I did not know that. Why is it done this way? Where is it done? Gennadiy

on Mon Jun 04 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A916.8020504@gmail.com...
Rene Rivera wrote:
Gennadiy Rozental wrote:
From what I understand, there are no problems adding these tests to the test suite. Your understanding is incorrect. Hm, I guess I forgot to mention: which testing variant is tested is *not* under the control of library authors. It is controlled by testers with the guidance of the release manager.
Don't I have the ability to specify the build variant in the run rule's requirements?
Sure, you could. But it would not have an effect on whether the release variant is tested, only which of your tests get run. For example, if you changed all your tests to require the release variant, none of your lib would get tested.
There used to be a "default build" argument to most rules that would control which variant is built by default. Is that gone now? -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
There used to be a "default build" argument to most rules that would control which variant is built by default. Is that gone now?
You make it sound as if only one variant was built by default. Was that ever the case? Without specific arguments all variants are built (though not tested, AFAICT). I wish that were changed so only one variant is built by default, and people would have to explicitly request non-default variants. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

on Wed Jun 06 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
David Abrahams wrote:
There used to be a "default build" argument to most rules that would control which variant is built by default. Is that gone now?
You make it sound as if only one variant was built by default.
I never intended to say that. s/variant/variant(s)/, OK? -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
on Wed Jun 06 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
David Abrahams wrote:
There used to be a "default build" argument to most rules that would control which variant is built by default. Is that gone now? You make it sound as if only one variant was built by default.
I never intended to say that. s/variant/variant(s)/, OK?
Fine. Though I would wish the build system would be changed to comply with your original spelling. :-) Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A688.7030101@gmail.com...
Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46649B3C.4060204@gmail.com...
Douglas Gregor wrote:
On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
* We don't test release versions, even though this is the most used variant by users. We shouldn't be doing this at all IMO. NO testing during release. I believe Rene means the "release" variant, i.e., with optimizations turned on. This also saves a *ton* of disk space. Also, testing with shared libraries rather than static saves a lot of disk space. Yes, I mean building and testing with optimizations on. Most likely we would want to build the profile variant so that we could get both optimizations and debug symbols. But that introduces the large disk space requirements again. My point is that we currently *only* test what is useful to library authors. And we essentially give users the cold shoulder.
From what I understand, there are no problems adding these tests to the test suite.
Your understanding is incorrect.
You mean Boost.Build doesn't support it?
No "boost-wide" decision is required.
We need to decide that the optimized variant is a release requirement.
1. It's kinda orthogonal to the whole "development environment" problem. If you believe it's required, it is going to be required in any case. 2. In my personal opinion, the release variant shouldn't be a release requirement Boost-wide. Each developer may decide for oneself. I don't insist, though.
Then we need to acquire testing resources for each platform we support. Then we need to manage the testing resources to cover both debug and release for all platforms such that we get timely testing results. And we need to ensure that library authors fix all the places where they rely on testing only in debug mode. We've gone over this before, so I suggest people search the testing and dev list archives.
Test resource management is important, but it's a separate topic IMO.
We may just add an encouraging statement to the testing procedures docs.
Encouraging isn't enough. We have to explicitly ask if we want anything approaching a reliable procedure.
I am afraid it's a never-ending story. Some users require the "release" variant. Some require static libs, some require shared libs; some require optimization level 4, some level 2. And what about all the different STLs we have around? In general, IMO we *shouldn't* strive to test against every possible user environment. Users will have to run the unit tests we provide and report issues if there are any. The library developer can then add a test case and fix it. Gennadiy

"Douglas Gregor" <doug.gregor@gmail.com> wrote in message news:FC0BF0AD-CC21-430D-98B4-3C4FE793440F@osl.iu.edu...
On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
Every library is tested against a particular set of dependencies selected by the developer, but only *one* per lib. It does require additional disk space for a source tree copy; I don't believe that's a major requirement these days.
I thought that too, but you are wrong. One of the most common failures with regression testers is that they run out of hard drive space, because testing Boost... just a single tree... requires tens of gigabytes.
Umm.. so what? My 3-year-old desktop has 300 gigs. Server systems should have access to even larger resources. Anyway, the disk usage issue is manageable at the make-system level. During testing, if library X depends on library A:N1, the make system pulls branch N1 from svn if it is not present. Once library X's testing is completed, all non-HEAD branches of the libraries X depends on are removed. It will add some time to the testing, but it can be optional and turned on in the test host configuration. Another direction is to minimize the amount of space used by temporary files: if we move toward keeping test results in a DB on an external host, all files produced during testing can be removed.
* We don't test the build and install process.
What do you want to test? In any case, it doesn't make the release "unstable".
In an ideal world, we would: (1) Build all of Boost, as a user would (2) Install Boost (3) Build tests against the installed Boost, then (4) Run those tests
Do you know any tools that allow this kind of automated testing? This is "nice to have", but not a showstopper IMO.
* We don't test release versions, even though this is the most used variant by users.
We shouldn't be doing this at all IMO. NO testing during release.
I believe Rene means the "release" variant, i.e., with optimizations turned on. This also saves a *ton* of disk space. Also, testing with shared libraries rather than static saves a lot of disk space.
Boost.Test now supports testing with shared libs on all platforms; I can't speak for other libs. As for "release variant" testing, I don't see what stops each developer from adding appropriate test modules to the test suite. Gennadiy

Gennadiy Rozental wrote:
Another direction is to minimize the amount of space used by temporary files: if we move toward keeping test results in a DB on an external host, all files produced during testing can be removed.
I covered this during the Testing Boost session at BoostCon. The result files are minuscule in comparison with the build (obj, lib, dll, exe, .a, etc.) files, and they have a very limited lifetime on the testers' machines. I suggest people go read the slides and notes to hopefully understand what the current state of testing is. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Peter Dimov wrote:
Rene Rivera wrote:
I suggest people go read the slides and notes to hopefully understand what the current state of testing is.
Link please?
I mentioned it last week, but here it is again... http://boostcon.com/community/wiki/show/public/2007/ -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A875.3060401@gmail.com...
Gennadiy Rozental wrote:
Another direction is to minimize the amount of space used by temporary files: if we move toward keeping test results in a DB on an external host, all files produced during testing can be removed.
I covered this during the Testing Boost session at BoostCon. The result files are minuscule in comparison with the build (obj, lib, dll, exe, .a, etc.) files, and they have a very limited lifetime on the testers' machines.
Then how come we end up with tens of gigs required for testing? Gennadiy

Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A875.3060401@gmail.com...
Gennadiy Rozental wrote:
Another direction is to minimize the amount of space used by temporary files: if we move toward keeping test results in a DB on an external host, all files produced during testing can be removed. I covered this during the Testing Boost session at BoostCon. The result files are minuscule in comparison with the build (obj, lib, dll, exe, .a, etc.) files, and they have a very limited lifetime on the testers' machines.
Then how come we end up with tens of gigs required for testing?
Because the compiled products, the obj, lib, dll, and exe, are huge. They are huge because it's C++ with a large amount of debugging symbol data, and because templates generate long type names. One of my long-standing goals for testing, which we didn't cover during the testing session, is to make it possible for testers to limit how much they test, and hence make it possible for people with fewer testing resources to contribute, hopefully getting larger coverage for testing. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664AFB4.8050008@gmail.com...
Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A875.3060401@gmail.com...
Gennadiy Rozental wrote:
Another direction is to minimize the amount of space used by temporary files: if we move toward keeping test results in a DB on an external host, all files produced during testing can be removed. I covered this during the Testing Boost session at BoostCon. The result files are minuscule in comparison with the build (obj, lib, dll, exe, .a, etc.) files, and they have a very limited lifetime on the testers' machines.
Then how come we end up with tens of gigs required for testing?
Because the compiled products, the obj, lib, dll, and exe, are huge. They are huge because it's C++ with a large amount of debugging symbol data, and because templates generate long type names.
Why do we keep these once the test is completed? Gennadiy

On Jun 4, 2007, at 8:37 PM, Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664AFB4.8050008@gmail.com...
Because the compiled products, the obj, lib, dll, and exe, are huge. They are huge because it's C++ with a large amount of debugging symbol data, and because templates generate long type names.
Why do we keep these once the test is completed?
Because when some small number of files change in Boost, we only want to rebuild those objects, libraries, and executables that are actually affected. That's what would give us improved turnaround time from commit to test results. - Doug

"Douglas Gregor" <doug.gregor@gmail.com> wrote in message news:6C59A26A-304C-49F8-9559-C0DE31B44163@osl.iu.edu...
On Jun 4, 2007, at 8:37 PM, Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664AFB4.8050008@gmail.com...
Because the compiled products, the obj, lib, dll, and exe, are huge. They are huge because it's C++ with a large amount of debugging symbol data, and because templates generate long type names.
Why do we keep these once the test is completed?
Because when some small number of files change in Boost, we only want to rebuild those objects, libraries, and executables that are actually affected. That's what would give us improved turnaround time from commit to test results.
Umm.. This looks like an area we can enhance. Can't we keep the last revision number/last update time along with the results? Gennadiy

On Jun 4, 2007, at 9:05 PM, Gennadiy Rozental wrote:
"Douglas Gregor" <doug.gregor@gmail.com> wrote in message news:6C59A26A-304C-49F8-9559-C0DE31B44163@osl.iu.edu...
On Jun 4, 2007, at 8:37 PM, Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664AFB4.8050008@gmail.com...
Because the compiled products, the obj, lib, dll, and exe, are huge. They are huge because it's C++ with a large amount of debugging symbol data, and because templates generate long type names.
Why do we keep these once the test is completed?
Because when some small number of files change in Boost, we only want to rebuild those objects, libraries, and executables that are actually affected. That's what would give us improved turnaround time from commit to test results.
Umm.. This looks like an area we can enhance. Can't we keep the last revision number/last update time along with the results?
I don't believe this is possible, and nothing short of an actual, working implementation of this idea would convince me otherwise. Build/test systems have always worked this way for a very good reason: the only way to avoid re-building something when it hasn't changed is to keep it around. Most of those intermediate files are needed again for minimal rebuilds. - Doug
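Doug's point can be seen in a simplified make-style up-to-date check (a sketch, not the actual build system's logic): the decision compares the artifact's timestamp against its source's, so once the intermediate files are deleted there is nothing left to compare and a full rebuild is forced, revision numbers or not.

```python
import os
import tempfile
import time

def needs_rebuild(source, artifact):
    """Make-style check: rebuild if the artifact is missing or older than its source."""
    if not os.path.exists(artifact):
        return True                   # nothing kept around -> must rebuild
    return os.path.getmtime(source) > os.path.getmtime(artifact)

with tempfile.TemporaryDirectory() as d:
    src, obj = os.path.join(d, "lib.cpp"), os.path.join(d, "lib.o")
    open(src, "w").close()
    open(obj, "w").close()
    future = time.time() + 60
    os.utime(obj, (future, future))   # make the object newer than the source
    print(needs_rebuild(src, obj))    # -> False: kept intermediate, no rebuild
    os.remove(obj)                    # "clean up" after the test run
    print(needs_rebuild(src, obj))    # -> True: cleanup forces a full rebuild
```

Keeping the revision number alone wouldn't help: knowing that lib.o *was* up to date at revision N doesn't let the linker reuse an object file that no longer exists.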

"Douglas Gregor" <doug.gregor@gmail.com> wrote in message news:0E214EFF-1861-440C-A08F-A76A7B0C0971@osl.iu.edu...
On Jun 4, 2007, at 9:05 PM, Gennadiy Rozental wrote:
"Douglas Gregor" <doug.gregor@gmail.com> wrote in message news:6C59A26A-304C-49F8-9559-C0DE31B44163@osl.iu.edu...
On Jun 4, 2007, at 8:37 PM, Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664AFB4.8050008@gmail.com...
Because the compiled products, the obj, lib, dll, and exe, are huge. They are huge because it's C++ with a large amount of debugging symbol data, and because templates generate long type names.
Why do we keep these once the test is completed?
Because when some small number of files change in Boost, we only want to rebuild those objects, libraries, and executables that are actually affected. That's what would give us improved turnaround time from commit to test results.
Umm.. This looks like an area we can enhance. Can't we keep the last revision number/last update time along with the results?
I don't believe this is possible, and nothing short of an actual, working implementation of this idea would convince me otherwise. Build/test systems have always worked this way for a very good reason: the only way to avoid re-building something when it hasn't changed is to keep it around. Most of those intermediate files are needed again for minimal rebuilds.
I guess you're right; indeed, it may be nontrivial. With that in mind, we need to create a set of requirements for the testing system. The solution may be in some flexibility: by default we do what we do now (I still believe 20-30 gigs is not a big deal), but we may have to support another configuration which cleans up completely after a test is done (essentially forgetting what was tested), for systems that will run tests on request. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
I still believe 20-30 gigs is not a big deal
That depends. My desktop machine (which I use for running Boost tests) only has 20Gb free at the moment, with 30Gb devoted to tests for the 1.34 branch, and that's after upgrading the hard disk recently. I wouldn't want any more space to be consumed by boost tests. If I hadn't upgraded the hard disk (which I did for other reasons), I wouldn't be able to afford the space for the boost tests. Upgrading disks does cost money, even if it is not so much in comparison to other computing hardware. Anthony -- Anthony Williams Just Software Solutions Ltd - http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

on Mon Jun 04 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:
"Douglas Gregor" <doug.gregor@gmail.com> wrote in message news:FC0BF0AD-CC21-430D-98B4-3C4FE793440F@osl.iu.edu...
On Jun 4, 2007, at 5:06 PM, Gennadiy Rozental wrote:
Every library is tested against a particular set of dependencies selected by the developer, but only *one* per lib. It does require additional disk space for a source tree copy; I don't believe that's a major requirement these days.
I thought that too, but you are wrong. One of the most common failures with regression testers is that they run out of hard drive space, because testing Boost... just a single tree... requires tens of gigabytes.
Umm.. so what? My 3-year-old desktop has 300 gigs. Server systems should have access to even larger resources.
All the arguments in the world don't change reality. We have testers for whom the disk usage of our tests has posed a real problem. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

In an ideal world, we would: (1) Build all of Boost, as a user would (2) Install Boost (3) Build tests against the installed Boost, then (4) Run those tests
Yessssssssssss!!!!! As part of (2), please include documentation. As part of (4), please include testing against the release components, which is the thing that users are encouraged to use for production. IMHO testing release, and then debugging failures with the debug builds is the way to go. The current behavior of testing debug, and not testing release is highly suspect to me. Maybe also include (5) Get accurate summaries of the local results, and perhaps be able to contribute them via email to a boost testing list. -benjamin

Benjamin Kosnik wrote:
IMHO testing release, and then debugging failures with the debug builds is the way to go. The current behavior of testing debug, and not testing release is highly suspect to me.
We've had cases in the past where a release build works while a debug build leads to an internal compiler error. Of course we also had cases where a debug build works and a release build crashes. So to be on the safe side we really need to test both. I agree that if we have to pick one, release makes more sense from a QA perspective. It's a lot slower, though.

Benjamin Kosnik wrote:
In an ideal world, we would: (1) Build all of Boost, as a user would (2) Install Boost (3) Build tests against the installed Boost, then (4) Run those tests
Yessssssssssss!!!!!
As part of (2), please include documentation.
That only makes sense if you want to test the build/installation mechanism for the documentation. IMHO, this is not coupled closely to a platform (except for the availability of external tools) and should not be part of a regular testing schedule run by all testers.
As part of (4), please include testing against the release components, which is the thing that users are encouraged to use for production.
That makes sense, if it does not exclude testing against debug versions.
IMHO testing release, and then debugging failures with the debug builds is the way to go.
No. The debug versions contain additional code that checks against certain problems. Not using that code will effectively hide a significant number of problems.
The current behavior of testing debug, and not testing release is highly suspect to me.
Agreed. This is to some extent caused by lack of resources.
Maybe also include
(5) Get accurate summaries of the local results, and perhaps be able to contribute them via email to a boost testing list.
I don't understand this point. What is an "accurate summary"? Why should we publish these summaries via email? Either they're accurate and unsuitably large for email (and for being read by humans) or they're short and thus lack accuracy. Regards, m

Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46647897.5030204@gmail.com...
The proposal seems to assume infinite resources in testing. Which particular part? On-demand testing, testing of breaking-stable branch, continuous testing of stable branch, all with high-availability and high-. Currently we can only manage partial testing of *1* branch, in one build variation. And now we are talking of testing at least three branches at once.
My solution doesn't require ANY of that. Let me repeat: NONE.
Gennadiy, with all due respect, I wasn't talking about your solution. I was talking about Beman's proposal. I think it is a wasted effort to consider proposals and solutions that I can't read concise documentation for. It's almost impossible to comment on them otherwise. So for all those thinking that their way is better, please write it up and we can consider them individually.
Well, high availability/quick response would be nice. But it's optional.
According to Beman it's essentially required.
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
I see only 6 bugs assigned to 1.34.1. To be frank with you, I don't see why we need to hurry with releasing them.
The *whole* page is 1.34.0 issues, and only 6 of the 47 are slated to be fixed. One of them in particular, the one I've been trying to fix for days now, #1025, gives incomplete installs for Windows Cygwin and MinGW users. This is something we don't detect, because we don't test Boost from a user's point of view.
* The inspection reports 193 non-license problems, and *1059* license problems.
This is not a showstopper IMO. 1.34.0 is in the same state, isn't it?
I didn't say anything about "show stopper". I said it doesn't make it "stable" according to Beman's definition in which he states that clearing the inspection results is part of making it stable.
* We don't test the build and install process.
What do you want to test? In any case, it doesn't make the release "unstable".
I think the first part of that was already answered. Yes, it makes it "unstable" from the POV of users. If you can't guarantee that users will get an install they can use, how can it be stable?
* We don't test libraries against an installed release.
What do you mean?
I think Doug answered this... We don't test the *using* of Boost libraries. We only partially test the conformance of the code to expectations of library authors.
* We don't test, to any effective means, 64 bit architectures.
* We don't test, to any effective means, multi-cpu architectures.
Would be nice ... in future releases. It doesn't make the current one unstable.
It makes it unstable as soon as a user asks why Boost doesn't work in 64-bit mode. And it makes it unstable when users complain that their program crashes in a Boost library when they run it on a dual-core or dual-CPU machine, both of which are common now.
Let me clarify again: do you believe 1.34.0 can't be used as a stable starting point? If so, why?
Yes, it can't be used as a stable starting point, because it is not proven stable, with testing, under the conditions users would operate under. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:4664A17F.6080204@gmail.com...
Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46647897.5030204@gmail.com...
The proposal seems to assume infinite resources in testing. Which particular part? On-demand testing, testing of breaking-stable branch, continuous testing of stable branch, all with high-availability and high-. Currently we can only manage partial testing of *1* branch, in one build variation. And now we are talking of testing at least three branches at once.
My solution doesn't require ANY of that. Let me repeat NONE.
Gennadiy, with all due respect, I wasn't talking about your solution. I was talking about Beman's proposal. I think it is a wasted effort to consider proposals and solutions that I can't read concise documentation for. It's almost impossible to comment on them otherwise. So for all those thinking that their way is better, please write it up and we can consider them individually.
Most of it is here: http://article.gmane.org/gmane.comp.lib.boost.devel/158491 and some follow-up posts [.....] But you are right. I should put it somewhere publicly available.
Let me clarify again: do you believe 1.34.0 can't be used as a stable starting point? If so, why?
Yes, it can't be used as a stable starting point, because it is not proven stable, with testing, under the conditions users would operate under.
I got what you are trying to say. What I am trying to say is that, though it's good and in general the right direction, I don't see it happening without the support of several as-yet-unknown new tools. I believe we should phase it, and these particular requirements can be deferred to later stages. Gennadiy

on Mon Jun 04 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
* The inspection reports 193 non-license problems, and *1059* license problems.
This is not a showstopper IMO. 1.34.0 is in the same state, isn't it?
I didn't say anything about "show stopper". I said it doesn't make it "stable" according to Beman's definition in which he states that clearing the inspection results is part of making it stable.
IMO that is an important criterion but using the term "stable" to describe it is wrong. Stability implies to me that the API and implementation are somehow relatively unchanging, not that the release is "good." Is there precedent for the use of "stable" in this way? -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

Rene Rivera wrote:
Gennadiy Rozental wrote:
"Rene Rivera" <grafikrobot@gmail.com> wrote in message news:46646B35.5050709@gmail.com...
Thomas Witt wrote:
Hi,
Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote: I was going to write this email, but Doug beat me to it. And I guess you both beat me to it, as I was busy spending all my free time trying to fix bugs for 1.34.1. Although what's below are not my only thoughts on the release procedure...
The proposal seems to assume infinite resources in testing. Which particular part?
On-demand testing, testing of breaking-stable branch, continuous testing of stable branch, all with high-availability and high-. Currently we can only manage partial testing of *1* branch, in one build variation. And now we are talking of testing at least three branches at once.
But under the proposal we do not have to test entire branches. Instead of testing all rows for all columns, we only test a usually small number of rows against some of the columns. Together with a reduction in the number of machines running release branch tests (as others have also suggested), the net effect may well be a drop in the total testing load.
Can we get straight to the point?
What is required to make a stable release? (Complete list.) Why is 1.34.0 not stable?
Complete, interesting thought :-) I can't say I have such a complete list. But perhaps this will give you an idea:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
* The inspection reports 193 non-license problems, and *1059* license problems.
* We don't test the build and install process.
* We don't test libraries against an installed release.
* We don't test release versions, even though this is the most used variant by users.
* We don't test, to any effective means, 64 bit architectures.
* We don't test, to any effective means, multi-cpu architectures.
Stable doesn't mean perfect. It only means no worse than on the last release. And in practice each release will be better, so the bar for stability keeps getting higher. Eventually, all of the issues you mention will be dealt with, but releases should not be held waiting for that state of perfection.
I believe splitting the directory structure will make our life way simpler in many respects. What complications do you see?
It increases the number of combinations that need testing. And it complicates the build and testing infrastructure. Both of which increase the likelihood of instability.
The Boosters as a whole should only be concerned with testing for full releases. If particular projects want to do library specific releases, it is their responsibility to do the testing of those releases. --Beman

Beman Dawes wrote:
Rene Rivera wrote:
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
* The inspection reports 193 non-license problems, and *1059* license problems.
* We don't test the build and install process.
* We don't test libraries against an installed release.
* We don't test release versions, even though this is the most used variant by users.
* We don't test, to any effective means, 64 bit architectures.
* We don't test, to any effective means, multi-cpu architectures.
Stable doesn't mean perfect.
If that's the case you need to make that clear in your descriptions. Because it seems to say the opposite. But I wasn't talking about perfection either. Most of the list above isn't a "we need to fix something", but a "we have no clue what needs fixing". This is because we just don't know what the bugs really are, since we don't test most of what people use Boost for. Hence I personally can't call the Boost release "stable" under most definitions, except perhaps to mean "unchanging". -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

What is required to make a stable release? (Complete list.) Why is 1.34.0 not stable?
Complete, interesting thought :-) I can't say I have such a complete list. But perhaps this will give you an idea:
Thanks Rene. This list was very useful, as was the link to your presentation at boostcon.
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
* The inspection reports 193 non-license problems, and *1059* license problems.
Why can't these be fixed up for 1.34.1?
* We don't test the build and install process.
* We don't test libraries against an installed release.
* We don't test release versions, even though this is the most used variant by users.
Yep. IMHO, this is contrary to most SWE best practices.
* We don't test, to any effective means, 64 bit architectures.
I'm not quite sure what you're getting at here: can you elaborate? Certainly, targets that are native 64bit like alpha, x86-64, ppc64, s390x are tested. Do you mean tested and reported on the boost regression site? If so, I agree, some reports for 64bit arches (big/little endian) would be helpful to developers without access to this hardware.
* We don't test, to any effective means, multi-cpu architectures.
Again, as above. This can be tested on a local basis. Or am I confused here? Are you talking about mt testing on single, dual, quad+ systems? Or some other kind of testing? I think a more decentralized testing/reporting procedure would help things greatly. Too much of the reporting is centralized. -benjamin

Benjamin Kosnik wrote:
What is required to make a stable release? (Complete list.) Why is 1.34.0 not stable? Complete, interesting thought :-) I can't say I have such a complete list. But perhaps this will give you an idea:
Thanks Rene. This list was very useful, as was the link to your presentation at boostcon.
* Bugs attributed to 1.34.0 <http://tinyurl.com/2cn7g6>, and only a small number of them are targeted for 1.34.1.
* The inspection reports 193 non-license problems, and *1059* license problems.
Why can't these be fixed up for 1.34.1?
Because too many developers don't give a s... I tried to address this early on in the 1.34 cycle. To no avail.
* We don't test the build and install process.
* We don't test libraries against an installed release.
* We don't test release versions, even though this is the most used variant by users.
Yep. IMHO, this is contrary to most SWE best practices.
All too true. It's just that apart from release testing nobody has stepped up to do it. Release testing was cut from 1.34 to avoid further delays. It's unfortunate, but we can't solve all problems at once. Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Benjamin Kosnik wrote:
* The inspection reports 193 non-license problems, and *1059* license problems.
Why can't these be fixed up for 1.34.1?
Because too many developers don't give a s... I tried to address this early on in the 1.34 cycle. To no avail.
Hmm - there's a little more to it. The inspection report flags all files without a license. In the case of the serialization library there are data files which contain test data used to run some of the tests. These are flagged as "license problem". These files are produced by the serialization library and I didn't think it appropriate to alter the library to insert such a message. I've considered the message advisory. I don't think that is wrong.
* We don't test the build and install process.
I mentioned this before and it's a valid point, and I don't think it would be all that hard.
* We don't test libraries against an installed release.
Fixing this is the cornerstone of Beman's proposal.
* We don't test release versions, even though this is the most used variant by users.
Beman's proposal suggests improvements in the test procedure which I would hope would address this. That is, a test request would specify which variants are desired. Robert Ramey
Yep. IMHO, this is contrary to most SWE best practices.
All too true. It's just that apart from release testing nobody has stepped up to do it. Release testing was cut from 1.34 to avoid further delays. It's unfortunate, but we can't solve all problems at once.
Thomas

on Thu Jun 07 2007, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
Thomas Witt wrote:
Benjamin Kosnik wrote:
* The inspection reports 193 non-license problems, and *1059* license problems.
Why can't these be fixed up for 1.34.1?
Because too many developers don't give a s... I tried to address this early on in the 1.34 cycle. To no avail.
Hmm - there's a little more to it. The inspection report flags all files without a license. In the case of the serialization library there are data files which contain test data used to run some of the tests.
Are they binary or text data? If the latter, I think a license is appropriate. If binary, I think we need something we can mark up that lists the binary files covered under the license. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
Are they binary or text data? If the latter, I think a license is appropriate. If binary, I think we need something we can mark up that lists the binary files covered under the license.
This should be trivial after the switch to subversion. We could maintain a license property for each binary file and we could automatically create a license statement based on that property. In fact, we could expand this mechanism to cover all files in the repository. Based on that, we could create a subset of Boost that does not contain any non-BSL code. This should make lawyers happy. Regards, m
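The property-based idea sketched above could be scripted roughly as follows. This is a hypothetical illustration, not an existing Boost tool: the property name, paths, and the `svn propget -R`-style output it parses are all assumptions, simulated here with an inline string.

```python
# Sketch: build a license statement from per-file "license" properties.
# In a real setup the input would come from something like
# `svn propget license -R`; here that output is simulated inline.
# Property name, paths, and output format are hypothetical.

def parse_propget_output(text):
    """Parse lines of the form 'path - value' into a {path: license} dict."""
    props = {}
    for line in text.strip().splitlines():
        path, _, value = line.partition(" - ")
        props[path.strip()] = value.strip()
    return props

def license_statement(props):
    """Group files by license and render a human-readable statement."""
    by_license = {}
    for path, lic in sorted(props.items()):
        by_license.setdefault(lic, []).append(path)
    lines = []
    for lic in sorted(by_license):
        lines.append("The following files are covered by the %s:" % lic)
        lines.extend("  " + p for p in by_license[lic])
    return "\n".join(lines)

sample = """\
libs/serialization/test/data1.dat - Boost Software License 1.0
libs/serialization/test/data2.dat - Boost Software License 1.0
"""
print(license_statement(parse_propget_output(sample)))
```

The same grouping step could also drive the suggested "BSL-only subset": any file whose property is not the BSL simply gets excluded from the packaging list.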

Thomas Witt wrote:
Benjamin Kosnik wrote:
* The inspection reports 193 non-license problems, and *1059* license problems.
Why can't these be fixed up for 1.34.1?
Because too many developers don't give a s... I tried to address this early on in the 1.34 cycle. To no avail.
Would it be possible to get some sort of blanket permission from library maintainers with license problems to update the license in their files? If so, this seems like a very automatable task. (I know there might be some problems with making this happen, but it seems worth asking.) John
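Assuming such blanket permission were granted, the mechanical part really is scriptable. A minimal sketch of the idea (the exact header text, the "already licensed" test, and the idea of blindly prepending are all simplifications; a real pass would need per-file copyright lines and maintainer sign-off):

```python
# Sketch: prepend a Boost Software License header to source text that
# lacks one. Deliberately simplified for illustration.

BSL_HEADER = (
    "// Distributed under the Boost Software License, Version 1.0.\n"
    "// (See accompanying file LICENSE_1_0.txt or copy at\n"
    "// http://www.boost.org/LICENSE_1_0.txt)\n\n"
)

def add_license(text):
    """Return text with the BSL header prepended if it is missing."""
    if "Boost Software License" in text:
        return text  # already licensed, leave untouched
    return BSL_HEADER + text

original = "#include <iostream>\nint main() {}\n"
licensed = add_license(original)
print(licensed.startswith("// Distributed under"))  # True
print(add_license(licensed) == licensed)            # True: idempotent
```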

Rene Rivera said: (by the date of Mon, 04 Jun 2007 14:42:45 -0500)
Agreed. Let's build the foundations first.
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
Looks really good, congratulations. Are you planning something like "request _this_particular_ test, quick"? When someone is *now* really trying to fight with some particular compiler, giving his request the highest priority will speed up everything in general. He might not have time tomorrow to get back to this. And things will have to wait a week for his next try... Regular regression tests will go their own way, with lower priority. -- Janek Kozicki |

On Jun 4, 2007, at 4:59 PM, Janek Kozicki wrote:
Rene Rivera said: (by the date of Mon, 04 Jun 2007 14:42:45 -0500)
Agreed. Let's build the foundations first.
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
looks really good, congratulations.
Are you planning something like "request _this_particular_ test, quick" ?
I would really rather see "someone checked something in, go re-test Boost and post the results". However, re-testing Boost takes a while. Part of this is because Boost is big, part of it is because we don't consider the testing burden at all. For example, the Serialization library itself builds 460 separate test cases, totalling about 741MB of code. I suspect that we could get the same coverage from 50 well-designed test cases, with other big improvements: compilation/testing would run about 9 times faster, so we would be able to turn around tests faster. Disk space utilization would be cut to 1/9th of what it is now, reducing the number of problems resulting from "out of disk space" errors. As testing requirements decrease, it becomes easier for more testers to provide regression testing, so we get better results. - Doug

Hi Doug, On Jun 4, 2007, at 4:59 PM, Douglas Gregor wrote:
On Jun 4, 2007, at 4:59 PM, Janek Kozicki wrote:
Rene Rivera said: (by the date of Mon, 04 Jun 2007 14:42:45 -0500)
Agreed. Let's build the foundations first.
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
looks really good, congratulations.
Are you planning something like "request _this_particular_ test, quick" ?
I would really rather see "someone checked something in, go re-test Boost and post the results"
However, re-testing Boost takes a while. Part of this is because Boost is big, part of it is because we don't consider the testing burden at all.
And part of it is the inability to run tests in parallel. Anyone have a feel for what it would take to support running the tests in parallel? -- Noel

K. Noel Belcourt wrote:
And part of it is the inability to run tests in parallel. Anyone have a feel for what it would take to support running the tests in parallel?
What do you mean by parallel? Do you mean running bjam with say a "-j4"? If so, the only requirement is that we fix the parallel bjam output bug you already fixed for the non-Windows bjam. So, theoretically you could start running parallel tests now by using the bjam you fixed and passing "--bjam-options=-j4" to regression.py (or directly to bjam if you don't use the python script). -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

On Jun 4, 2007, at 8:09 PM, Rene Rivera wrote:
K. Noel Belcourt wrote:
And part of it is the inability to run tests in parallel. Anyone have a feel for what it would take to support running the tests in parallel?
What do you mean by parallel? Do you mean running bjam with say a "-j4"?
Yup.
If so, the only requirement is that we fix the parallel bjam output bug you already fixed for the non-Windows bjam. So, theoretically you could start running parallel tests now by using the bjam you fixed and passing "--bjam-options=-j4" to regression.py (or directly to bjam if you don't use the python script).
Hot Dog! I'm all over that. -- Noel
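For readers unfamiliar with what "-j4" buys, the underlying idea is the standard worker-pool pattern: independent test cases are dispatched across N workers instead of being run one at a time. This toy sketch is not Boost's actual tooling (the test runner is a stand-in), just an illustration of the dispatch:

```python
# Toy illustration of -jN style parallelism: run independent "tests"
# through a pool of N workers instead of serially. The run_test body
# is a placeholder for compiling and executing one real test case.
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    """Stand-in for building and running one test case."""
    return (name, "pass")

tests = ["test_%d" % i for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:   # the "-j4" part
    results = dict(pool.map(run_test, tests))

print(all(status == "pass" for status in results.values()))  # True
```

In the real setup described above, the equivalent would be passing "--bjam-options=-j4" through regression.py so that bjam itself schedules the build/test actions in parallel.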

On Jun 4, 2007, at 3:42 PM, Rene Rivera wrote:
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
Very nice. Is that Dart or Dart2? While we're working on the CMake build system [*], we've been relying on Dart2 to make sure that things are going well on a couple of platforms. Troy set up nightly regression tests on a couple of platforms and, more importantly, continuous testing on a few platforms. After one checks code into Subversion, the continuous testers would pick it up, re-build and re-test as appropriate, and post the results immediately. It is very, very nice to work with, and we should strive to have something similar. [*] I don't want to discuss CMake in this thread, because it will not be productive. When we're ready, we'll start a separate thread.
We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen.
There are two other aspects to 1.35.0 that I'm trying to address. In another thread, I raised the question of svn dir structure. And it devolved into the same aspects that this thread devolved to, discussing how to split the sources up as much as possible based on libraries. This is fine, but it doesn't get us any closer to managing the structure we currently have.
Agreed.
We need to concentrate on making this simpler first! Which brings up the second item, the website. One of the simplifications for releases is to separate the website content from the release itself. (that was my rant)
Yes, the web site. From a tool perspective, we're in great shape: updates to the website in Subversion trigger an immediate update on beta.boost.org, so developers need only check in their changes and they'll show up. With a bit of volunteer effort, we could easily migrate all of the content on boost.org over to beta.boost.org and make it live. That would make the task of keeping the web site up-to-date far, far easier. - Doug

Douglas Gregor wrote:
On Jun 4, 2007, at 3:42 PM, Rene Rivera wrote:
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
Very nice. Is that Dart or Dart2?
Dart2. At some point I'll bug you about MySQL access to use the more scalable DB for the results.
[*] I don't want to discuss CMake in this thread, because it will not be productive. When we're ready, we'll start a separate thread.
Same here :-)
With a bit of volunteer effort, we could easily migrate all of the content on boost.org over to beta.boost.org and make it live.
Yep... People, if you want an easy way to contribute to Boost, this is it. Even though I enjoy working on the web site, I have essentially no time right now :-(
That would make the task of keeping the web site up-to-date far, far easier.
Well that part is already easy, thanks to you for setting up the svn/web bridge. I was thinking of how it would make the Boost release itself easier since it would remove just about all the work from the release manager in this area. Basically, it's one less thing to worry about, for everyone, during releases. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

on Mon Jun 04 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
Douglas Gregor wrote:
On Jun 4, 2007, at 3:42 PM, Rene Rivera wrote:
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
Very nice. Is that Dart or Dart2?
Dart2. At some point I'll bug you about MySQL access to use the more scalable DB for the results.
I thought PostGreSQL is generally acknowledged to be the more scalable option(?) -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
on Mon Jun 04 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
Dart2. At some point I'll bug you about MySQL access to use the more scalable DB for the results.
I thought PostGreSQL is generally acknowledged to be the more scalable option(?)
Debatable, but my comparison is against the Derby Java DB that Dart2 uses by default. We have MySQL installed already, so it makes more sense to spend time switching to it than to spend time installing another db and switching to that ;-) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Rene Rivera wrote:
Thomas Witt wrote:
Hi,
Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote: I was going to write this email, but Doug beat me to it.
And I guess you both beat me to it... As I was busy spending all my free time trying to fix bugs for 1.34.1. Although what's below are not my only thoughts on the release procedure...
The proposal seems to assume infinite resources in testing.
AFAICT it also mandates growing the testing and release management tool pipeline.
That isn't the idea. Rather, the current lengthy pipeline is gradually retired in favor of shorter independent pipelines.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist.
It also pre-supposes a "stable" starting point for ongoing releases. First, 1.34.1 will not be such a release. Second, it will take at least 6 months to make a clean and stable release, and that's without adding new libraries. Third, IMO to make a clean, stable, robust 1.35 following the proposal would take more than a year.
By definition, the last release is always considered stable. Stable doesn't mean perfect, it just means good enough to build upon.
Agreed. Let's build the foundations first.
Yes. And some of us have been working hard toward that. I've made the changes to the regression scripts to publish test results to <http://beta.boost.org:8081/>. And Noel is hard at work making it possible to publish to that server directly from bjam+boost.build, and hence making it possible to shorten the testing tool chain.
That will be a nice help!
We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen.
I agree. OTOH, I'd like to do minor surgery on a small number of libraries, particularly moving the headers into the library's <root>/libs tree (and presumably replacing any headers in the <root>/boost tree with forwarding headers).
There are two other aspects to 1.35.0 that I'm trying to address. In another thread, I raised the question of svn dir structure. And it devolved into the same aspects that this thread devolved to, discussing how to split the sources up as much as possible based on libraries. This is fine, but it doesn't get us any closer to managing the structure we currently have. We need to concentrate on making this simpler first! Which brings up the second item, the website. One of the simplifications for releases is to separate the website content from the release itself. (that was my rant)
Could you explain that thought a bit? By "website content" do you mean portions of the website that are no tied to a particular library? Thanks, --Beman

Beman Dawes wrote:
By definition, the last release is always considered stable. Stable doesn't mean perfect, it just means good enough to build upon.
OK, but that isn't what I understood from your description:
The definition of stable includes passing all regression tests on release-critical compilers (or marked up accordingly), and also passing all other stability measurements. If a library depends on other libraries, it is only stable if those libraries are stable.
Stability criteria include not only regression test failures, but also inspection failures, tool-chain errors, configuration errors, missing files, and any other detectable errors that reduce release quality or impact development environment stability.
-- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Thomas Witt wrote:
... The proposal seems to assume infinite resources in testing.
I must not have been clear. The proposal would do much less testing than is currently done. Under the current scheme, testing is done even when no one requests a test for a particular library and compiler. Under the proposal, tests only get run when someone requests a test. So for example, a Windows developer would only likely request tests on non-Windows platforms.
The reality is we cannot even test one branch reliably, and this despite considerable effort by a number of people. With the current setup the process outlined is unworkable.
Yes, of course. The current testing setup (where unstable code is allowed to be committed to the release branch) is hopeless and should be abandoned.
As an example on how bad things are: I would like to merge changes for 1.34.1 one at a time so that I can identify the change that broke something. With the current turn-around time, even when the system works as designed, this is impossible unless we aim for a X-mas release date.
Yes, that's why I'm saying we need to abandon the shotgun approach to testing everything that causes the long turnaround times.
<rant>
It always strikes me as odd that we spend many man-hours discussing the process while we are spending very little on fixing bugs. In my experience the man-hours available for bug-fixing are severely limited. I want to explicitly exclude Beman here. He fixed the outstanding bugs _and_ spent the time on the paper. That is not the norm, though.
</rant>
At this point, I think our best bet is to spend it making the regression testing infrastructure work well; then we can move on to a new process with its new tools.
From my experience this is the only promising way forward. You can also rephrase it as: "Let's stabilize something, before we destabilize everything.". We will not ship 1.35.0 within the next year if we do major surgery to our directory structure. It's just not going to happen.
I disagree. As long as we start from a release (presumably 1.34.1) as the stable branch, and only allow known-good changes to it, it should always be ready to ship. Period. When the agreed-upon ship date arrives, changes to any libraries that didn't make it into the release just have to wait until the next release. Changes to the directory structure (and anything else) are done on a library-by-library basis. So such big changes take several release cycles to roll out. That's all.
I strongly urge us to do something simple and restricted in scope first. That will give the biggest bang for the buck.
We need to change to Subversion so we can get better repository control. We need to begin the next release from the last release, and not apply changes to the release branch unless they are known good. I see those changes as "simple and restricted in scope". I'd also like to see one or two small libraries change their directory structure, so we can verify that the new structure works in practice before we start changing directory structures for all libraries. --Beman

"Beman Dawes" <bdawes@acm.org> wrote in message news:f47qsj$m3j$1@sea.gmane.org...
Thomas Witt wrote:
... The proposal seems to assume infinite resources in testing.
I must not have been clear. The proposal would do much less testing than is currently done. Under the current scheme, testing is done even when no one requests a test for a particular library and compiler. Under the proposal, tests only get run when someone requests a test. So for example, a Windows developer would only likely request tests on non-Windows platforms.
1. I don't think that's a correct expectation. Even if I run the tests locally on a Windows box, I still a) don't have access to all configurations, and b) want to run the tests in a non-development environment. My local setup may be somehow different. I always prefer to run the tests through the regression test suite even if they pass locally. 2. The procedure for on-demand test requests doesn't seem to be easy to implement and will require significant investment in both development and maintenance of the tools. 3. In many (most?) cases I have no idea what configurations exist. We can't force developers to send dozens of test requests for systems they know nothing about. IMO regular "shotgun" testing is still going to be in big demand and we can't afford not to support it. In general I am not sure that investment in on-demand testing support will give us visible improvement. Gennadiy

Douglas Gregor wrote:
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
The idea is that merge requests would be something as simple as sending an email to some server which is running a script that automatically checks that all criteria are met and then does the merge. [snip] After some more thought, I decided that when a developer requests a test (via the "test on demand" mechanism), it should be possible to request tests on multiple libraries, so that a dependency sub-tree or any portion thereof can be tested.
Rather than building dependency lists into the system (which is a fairly heavyweight approach), it might be simpler to give developers a tool to find which libraries are dependent on their library, and then leave it up to the developer how much or how little they test against the dependency tree. A developer who undertests runs the risk that a merge-into-stable request will fail, because merge-into-stable requests fail if they would cause any other library to fail.
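A rough version of such a "who depends on my library" tool could simply scan sources for #include directives. A sketch of that heuristic (the sample sources and the include-matching rule are illustrative assumptions; it only finds direct dependencies, not indirect ones):

```python
# Sketch: find which libraries directly include headers of a given
# library, by matching '#include <boost/<lib>/...>' or
# '#include <boost/<lib>.hpp>' directives in their sources.
import re

def direct_dependents(sources, target_lib):
    """sources: {library_name: concatenated source text}. Returns the
    set of libraries whose code includes a header of target_lib."""
    pattern = re.compile(r'#\s*include\s*<boost/%s[/.]' % re.escape(target_lib))
    return {lib for lib, text in sources.items()
            if lib != target_lib and pattern.search(text)}

# Hypothetical sample inputs:
sources = {
    "serialization": '#include <boost/config.hpp>\n',
    "config": '// no boost includes\n',
    "filesystem": '#include <boost/config.hpp>\n'
                  '#include <boost/system/error_code.hpp>\n',
}
print(sorted(direct_dependents(sources, "config")))
# ['filesystem', 'serialization']
```

Running this transitively until a fixed point would give the full dependency sub-tree a developer might choose to test against.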
That's three new tools, some of which are non-trivial to develop. All tools are non-trivial to maintain.
Some tools, particularly those that are dependency based, are interesting to speculate about but are not essential. They can be safely ignored for now. And some of the needs may be met by off-the-shelf tools we don't have to develop or maintain. The most critical new (to us) tool would be test-on-demand. I've been very deliberately focusing on figuring out what is needed rather than where we get the tool or how the details work. Now that the needs seem fairly firmly defined, we can start looking at what tools are available to meet them.
We clearly need tools to improve the Boost development and release process. The problem is that while good tools can help the process, poor tools can hurt us even more than no tools. We can't build new tools until we've fixed or replaced the existing tools, and we can't build new tools without a solid plan for maintaining those tools.
I'm tired of waiting. For what I'm proposing, bjam is good enough as it stands now. The downstream reporting system is orthogonal to test-on-demand.
Look at the 1.34 release series... the thing that's been holding us back most of all is that the testing and test reporting tools are broken. 1.34.1 is stalled because we have failures on one platform, but nobody can see what those failures actually are: the test reporting system removed all of the important information.
I agree with most of Beman's write-up, but it pre-supposes a robust testing system for Boost that just doesn't exist.
That may be true for the whole-system testing and reporting release managers care about, but for a developer wanting to test a single library bjam works pretty well, and I suspect it will work well for tests on a small number of dependent libraries too. But regardless, the test-on-demand system should be independent of how the testing is actually run. If we change to a different build system or test execution framework, it would be nice if the procedures as seen by a developer don't change much.
I hypothesize that the vast majority of the problems with our release process would go away without a single change to our process, if only we had a robust testing system.
I think you have to change the process at least enough so that a stable branch is always maintained and developers can test their library's development branch on demand against the stable branch. The current "wild-west" problems in the trunk would not go away just because the testing system worked better, was more responsive, etc.
We have only so much volunteer time we can spend. At this point, I think our best bet is to spend it making the regression testing infrastructure work well; then we can move on to a new process with its new tools.
I hope you aren't counting Subversion as a "new" tool! And what about starting the next release from the last release, rather than branching the development trunk? That is a new process, but it is one we can start right away. In general, I'd like to increment into new processes as we get them figured out, and new tools when we find something that will better support our processes. --Beman

Beman Dawes wrote:
Some tools, particularly those that are dependency based, are interesting to speculate about but are not essential. They can be safely ignored for now. And some of the needs may be met by off-the-shelf tools we don't have to develop or maintain.
The most critical new (to us) tool would be test-on-demand. I've been very deliberately focusing on figuring out what is needed rather than where we get the tool or how the details work. Now that the needs seem fairly firmly defined, we can start looking at what tools are available to meet those needs.
An excellent tool for test automation (and more) is the buildbot project (http://buildbot.net/trac). It allows one to set up a variety of schedulers, triggered by a mix of timers and external events (such as check-in notifications). It also allows running builders (such as test suites) on demand, with suitable authentication in place. As it happens, Rene has been working on a prototype to drive Boost testing with it.
I hypothesize that the vast majority of the problems with our release process would go away without a single change to our process, if only we had a robust testing system.
I think you have to change the process at least enough so that a stable branch is always maintained and developers can test their library's development branch on demand against the stable branch. The current "wild-west" problems in the trunk would not go away just because the testing system worked better, was more responsive, etc.
I don't quite agree. I strongly believe people would be more responsive to failure notification if these reports would 1) be more reliable 2) provide more information about the failure (and context) 3) be more timely, to provide stronger evidence as to the likely cause Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Beman Dawes wrote:
I think you have to change the process at least enough so that a stable branch is always maintained and developers can test their library's development branch on deman against the stable branch. The current "wild-west" problems in the trunk would not go away just because the testing system worked better, was more responsive, etc.
I don't quite agree. I strongly believe people would be more responsive to failure notification if these reports would
1) be more reliable 2) provide more information about the failure (and context) 3) be more timely, to provide stronger evidence as to the likely cause
FWIW I agree with both of you ;-). I think Beman is right in saying that they won't go away. At the same time I think they would be a lot less severe given Stefan's conditions. This goes back to my theory that people do Boost part time and usually have a (somewhat) fixed time budget for that. We are left with the choice of having them spend their time trying to figure out how things work or having them do actual productive work. You might even argue that productive work is more fun so they are more likely to extend their time budget. Thomas -- Thomas Witt witt@acm.org

Stefan Seefeld wrote:
Beman Dawes wrote:
The most critical new (to us) tool would be test-on-demand. I've been very deliberately focusing on figuring out what is needed rather than where we get the tool or how the details work. Now that the needs seem fairly firmly defined, we can start looking at what tools are available to meet those needs.
An excellent tool for test automation (and more) is the buildbot project (http://buildbot.net/trac). It allows you to set up a variety of schedulers, triggered by a mix of timers and external events (such as check-in notifications). It also allows builders (such as test-suite runs) to be run on demand, with suitable authentication in place.
The descriptions of buildbot I've read always seem to assume that code on a particular branch is tested. But what I'm suggesting is that in general the stable branch is used, but that the development branch for the library under test is switched to for the duration of the test. Thus the stable branch is not altered (except for the working copy on the machine doing the testing.) Do you know if buildbot can be set to update to a different branch for the library to be tested? Thanks, --Beman

Beman Dawes wrote:
Do you know if buildbot can be set to update to a different branch for the library to be tested?
If you can define the version control commands to do it, as a human, then it can be done programmatically by BuildBot. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
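[Editor's note] To make the point above concrete, here is a hedged sketch, in Python, of the version-control command sequence such a test-on-demand builder could be configured to run. The repository URL, branch layout, and function name are all hypothetical; the idea is simply: check out the stable branch, then `svn switch` only the library under test to its development branch, leaving the stable branch itself untouched.

```python
# Sketch: generate the svn commands a test-on-demand builder might run
# (repository URL and branch names are hypothetical).

def checkout_commands(repo, library, dev_branch, workdir="build"):
    """Return the svn commands that fetch the stable tree but switch
    one library's directory to its development branch."""
    return [
        # 1. Get (or update) a working copy of the stable branch.
        ["svn", "checkout", f"{repo}/branches/stable", workdir],
        # 2. Point only the library under test at its dev branch; the
        #    stable branch in the repository is never modified.
        ["svn", "switch",
         f"{repo}/branches/{dev_branch}/boost/{library}",
         f"{workdir}/boost/{library}"],
    ]

cmds = checkout_commands("http://svn.example.org/boost",
                         "iostreams", "iostreams-dev")
for c in cmds:
    print(" ".join(c))
```

Since BuildBot build steps are ordinary commands, a list like this maps directly onto a builder's step sequence.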

Beman Dawes wrote:
Do you know if buildbot can be set to update to a different branch for the library to be tested?
Yes, easily! Here is the python.org buildbot status page for all the builders. You'll notice all the different branches that get tested: http://www.python.org/dev/buildbot/all/ Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On Sun, Jun 03, 2007 at 06:35:29PM +0300, Peter Dimov wrote: [snip]
That said, it might be a good time for us to radically rethink the structure of Boost and decouple versions from releases. Versions should be per-library, and releases should be a set of versions. This separation ought to be enforced at the directory tree level, as in:
boost/
    foo/
        foo.hpp
    bar/
        bar.hpp
with nothing else allowed in boost/. A separate compat/ directory would hold the current contents of boost/ (header name-wise), giving people a migration path.
Agree, each component C's headers should be accessible only via an include path boost/C for the same reason that boost as a whole is meant to be accessible only via boost/. There has been spotty discussion of this as it relates to source control in the subversion threads.
This should probably come with a stricter dependency management approach than the current "none". Each library should list its dependencies, per-library tests should operate on a subtree formed by the dependencies and not on the full boost/ tree, adding an additional dependency should be frowned upon. We might consider introducing "dependency levels" such that a level N library can depend on level N-1 libraries, but not vice versa.
Agree. If you organize projects correctly you can get help managing/enforcing these dependencies. A library's headers, source, and tests need to be together in source control (in one directory) so it is easy to version them independently. Projects have (in directory BOOST_ROOT, say):

iostreams/
    include/
        boost/           # contains only dir 'iostreams'
            iostreams/
                *.hpp    # notice each project has its own include dir
    src/
        ...
    test/
        ...
variant/
    include/
        boost/           # contains only dir 'variant'
            variant/
                *.hpp
    test/
        ...

If a component lists its dependencies as DEPENDS(iostreams variant), then BOOST_ROOT/iostreams/include and BOOST_ROOT/variant/include are added to the compiler's search path when it is built. The same thing happens at link time with whatever libraries the dependencies provide. This forces component authors to maintain their dependency lists... otherwise their component won't compile. It does not force authors to keep their dependency lists neatly pruned, but oh well. The dependency tree is easily accessible to automatic tools. However, it is somewhat harder to navigate the sources.
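[Editor's note] A hedged sketch of the DEPENDS mechanism described above (plain Python, not actual Boost.Build code; BOOST_ROOT and all names are hypothetical): a dependency list is turned into compiler include flags, and anything not declared never lands on the search path.

```python
import os

BOOST_ROOT = "/path/to/BOOST_ROOT"  # hypothetical checkout location

def include_flags(component, depends):
    """Turn DEPENDS(...) into -I flags: a component sees its own
    include/ directory plus those of its declared dependencies, and
    nothing else, so an undeclared #include <boost/x/...> won't
    compile."""
    roots = [component] + list(depends)
    return ["-I" + os.path.join(BOOST_ROOT, c, "include") for c in roots]

# e.g. a component declaring DEPENDS(iostreams variant):
print(include_flags("mylib", ["iostreams", "variant"]))
```

The design choice worth noting: enforcement is purely a side effect of the search path, so no extra checking tool is needed; the compiler is the dependency checker.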
With this organization, several releases can proceed in parallel, it being the sole responsibility of the release manager to pick the correct library subset and their versions such that the result is a stable release. More importantly, it allows developers to concentrate on improving their libraries.
It becomes very easy to do many of the things in Beman's document, like back out changes in a library high (low?) in the dependency tree that break everything else, create arbitrary subdistributions, and incorporate new libraries into stable releases. The interface is straightforward enough that users developing boost-dependent code can easily manage their dependencies. Things can get tedious when several libraries become coupled, and in order to work on a 'branch', you must branch each of the individual components.
Once there, one could venture in the direction of making the packaging of a specific release the job of the installer rather than of the release manager; that is, Boost should package individual library versions, not a monolithic 1.34.zip. The installer probably ought to also allow upgrading a single library (or adding a new library) should the user so choose.
Yeah, I imagine you could get as fancy as you want here. -t

troy d. straszheim wrote:
(in directory BOOST_ROOT, say)
iostreams/
    include/
        boost/           # contains only dir 'iostreams'
            iostreams/
                *.hpp    # notice each project has its own include dir
    src/
        ...
    test/
        ...

variant/
    include/
        boost/           # contains only dir 'variant'
            variant/
                *.hpp
    test/
        ...
This works and is a good solution from a testing standpoint... but I don't support it for purely selfish reasons; it breaks my "CVS HEAD" use case. :-) I'd still like to have a 'trunk' from which I can 'svn update'.

"Peter Dimov" <pdimov@mmltd.net> wrote in message news:010801c7a6c9$b39badb0$6407a80a@pdimov2...
troy d. straszheim wrote:
(in directory BOOST_ROOT, say)
iostreams/
    include/
        boost/           # contains only dir 'iostreams'
            iostreams/
                *.hpp    # notice each project has its own include dir
    src/
        ...
    test/
        ...

variant/
    include/
        boost/           # contains only dir 'variant'
            variant/
                *.hpp
    test/
        ...
This works and is a good solution from a testing standpoint... but I don't support it for purely selfish reasons; it breaks my "CVS HEAD" use case. :-) I'd still like to have a 'trunk' from which I can 'svn update'.
1. Why can't you do it from the root? 2. With independent development and svn externals you don't need to do it as frequently. You do an update in your local directory and it updates everything you depend on. Gennadiy

Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:010801c7a6c9$b39badb0$6407a80a@pdimov2...
I'd still like to have a 'trunk' from which I can 'svn update'.
1. Why can't you do it from the root? 2. With independent development and svn externals you don't need to do it as frequently. You do an update in your local directory and it updates everything you depend on.
I'm focused on getting ~85% of the benefits with ~30% of the effort and a clear migration path.

My current development model is sync against CVS HEAD, do work, commit, check test results, fix. My use model is sync against CVS HEAD, compile project, yell at whoever introduced a regression in the boost component I'm using. This works well for me and I'd like to keep working in a similar way.

The structure and organization I have in mind is doable incrementally. There is one relatively painful step of reorganizing the boost directory structure, but it's process-independent. After that, developers can continue working against the trunk as they do now. Once the test matrix for a library is green, the developer can create a version tag. This can be done automatically as you suggest, but it can also be done manually, as SVN makes it relatively easy.

The dependency management can also be introduced at a later date. It is not as fine grained as in your proposal - you can't depend on a specific version - and this is intentional, to keep things simple. This step requires the test infrastructure to be updated to allow testing a specific library and only pull a subtree.

A release also doesn't require any new tools; it can be done manually by the release manager. We may be able to streamline it with tools in the future, of course.

"Peter Dimov" <pdimov@mmltd.net> wrote in message news:012301c7a6cc$59a39360$6407a80a@pdimov2...
Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:010801c7a6c9$b39badb0$6407a80a@pdimov2...
I'd still like to have a 'trunk' from which I can 'svn update'.
1. Why can't you do it from the root? 2. With independent development and svn externals you don't need to do it as frequently. You do an update in your local directory and it updates everything you depend on.
I'm focused on getting ~85% of the benefits with ~30% of the effort and a clear migration path.
My current development model is sync against CVS HEAD, do work, commit, check test results, fix. My use model is sync against CVS HEAD, compile project, yell at whoever introduced a regression in the boost component I'm using. This works well for me and I'd like to keep working in a similar way.
IMO this is not the desirable scheme in general. Actually this is exactly what we *should not* be doing, IMO. Let's say I am testing some workarounds for a particular compiler. It might break something in some other place. I don't see why you need to be bothered by that. I may not have time to fix it right away, and/or I may still want to do more testing and try different things. What you should be doing instead is point to a "released"/stable version of my library and do your own development without minding mine. Once you're done, you switch to my latest. If it works, you release a new version of your library. If not, you
* either release a version of your library that depends on an older version of mine,
* or yell at me, and
  * either I fix it,
  * or I yell back at you and you fix it,
  * or we find some other solution.
The important point here is that until we find this solution, the algorithm that selects versions for a Boost umbrella release won't pick any new releases I made. This is an incentive for me to make sure my new version works for everyone who depends on me.
The structure and organization I have in mind is doable incrementally. There is one relatively painful step of reorganizing the boost directory structure, but it's process-independent. After that, developers can continue working against the trunk as they do now. Once the test matrix for a library is green, the developer can create a version tag. This can be done automatically as you suggest, but it can also be done manually as SVN makes it relatively easy.
What directory structure are you talking about?
The dependency management can also be introduced at a later date. It is not as fine grained as in your proposal - you can't depend on a specific version - and this is intentional, to keep things simple.
What value does it have then? This is what we got now "informally".
This step requires the test infrastructure to be updated to allow testing a specific library and only pull a subtree.
If there is no way to point to an older version of my dependencies, what is the point of subtree pulling? They are all going to be the same.
A release also doesn't require any new tools; it can be done manually by the release manager. We may be able to streamline it with tools in the future, of course.
A release still requires all libraries to be tested together, right? Gennadiy

Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:012301c7a6cc$59a39360$6407a80a@pdimov2...
My current development model is sync against CVS HEAD, do work, commit, check test results, fix. My use model is sync against CVS HEAD, compile project, yell at whoever introduced a regression in the boost component I'm using. This works well for me and I'd like to keep working in a similar way.
IMO this is not the desirable scheme in general. Actually this is exactly what we *should not* be doing IMO.
It works for me. As a Boost user, I simply don't use Boost components whose HEAD versions are unstable. As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-) I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
What directory structure are you talking about?
boost/
    foo/
        fooo.hpp
    bar/
        bar.hpp
the key point being that a library is not allowed to put anything else in boost/.
The dependency management can also be introduced at a later date. It is not as fine grained as in your proposal - you can't depend on a specific version - and this is intentional, to keep things simple.
What value does it have then? This is what we got now "informally".
The value of the dependency management is to allow subsets of Boost to be tested and released.
This step requires the test infrastructure to be updated to allow testing a specific library and only pull a subtree.
If there is no way to point to an older version of my dependencies, what is the point of subtree pulling? They are all going to be the same.
The point of subtree pulling is to verify that the library can be built against its dependencies, not against the full boost/ tree as we currently do.
A release also doesn't require any new tools; it can be done manually by the release manager. We may be able to streamline it with tools in the future, of course.
A release still requires all libraries to be tested together, right?
Right. The release process basically consists of integration testing.

"Peter Dimov" <pdimov@mmltd.net> wrote in message news:016b01c7a6d2$7722c9a0$6407a80a@pdimov2...
Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:012301c7a6cc$59a39360$6407a80a@pdimov2...
My current development model is sync against CVS HEAD, do work, commit, check test results, fix. My use model is sync against CVS HEAD, compile project, yell at whoever introduced a regression in the boost component I'm using. This works well for me and I'd like to keep working in a similar way.
IMO this is not the desirable scheme in general. Actually this is exactly what we *should not* be doing IMO.
It works for me.
As a Boost user, I simply don't use Boost components whose HEAD versions are unstable.
An "average" Boost user is primarily interested in the latest released version.
As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
What if you depend on serialization, or a GUI lib, or an XML parser? It might not be possible to "reimplement" all your dependencies. And this is not good practice in general, IMO, since you are breaking the "single definition rule" at the library level.
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If the trunk is stable, how do I test my development? If I am done with my development, when can I put it into the "stable" trunk? What if I break something? What if N libraries merge their changes at the same time? How long will it take to sort it out?
What directory structure are you talking about?
boost/
foo/ foo.hpp
bar/ bar.hpp
the key point being that a library is not allowed to put anything else in boost/.
That's a good goal. I support it. But this tree could exist as a reflection of the actual tree (using svn externals):
boost/
    foo/     -> foo/trunk/boost/foo
    foo.hpp  -> foo/trunk/boost/foo.hpp
    bar/     -> bar/trunk/boost/bar
    bar.hpp  -> bar/trunk/boost/bar.hpp
Now run svn update in this directory and you pull all you need.
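[Editor's note] A hedged sketch of how such a reflection tree could be generated (repository URL and file names hypothetical): build the svn:externals property value that maps each boost/ entry onto the corresponding library's own trunk. One caveat from the SVN of that era: svn:externals could only map directories, not individual files such as foo.hpp, so the forwarding headers would need separate handling.

```python
# Generate an svn:externals property value that makes boost/ a
# reflection of the per-library trees (repository URL hypothetical).

def externals_property(libraries, repo="http://svn.example.org"):
    """One 'subdir URL' line per library, which is the line format
    the svn:externals property expects."""
    return "\n".join(
        f"{lib} {repo}/{lib}/trunk/boost/{lib}" for lib in libraries)

print(externals_property(["foo", "bar"]))
# The property would then be set on boost/ with something like:
#   svn propset svn:externals -F externals.txt boost/
# after which a single 'svn update' in boost/ pulls all libraries.
```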
The dependency management can also be introduced at a later date. It is not as fine grained as in your proposal - you can't depend on a specific version - and this is intentional, to keep things simple.
What value does it have then? This is what we got now "informally".
The value of the dependency management is to allow subsets of Boost to be tested and released.
How can I release and test my subset if I can't compile with the trunk version of a library I depend on? I don't really care about the latest changes; I would be happy to work with the last stable version (the last Boost release).
This step requires the test infrastructure to be updated to allow testing a specific library and only pull a subtree.
If there is no way to point to older version of my depenencies, what the poiont of subtree pulling? They all going to be the same.
The point of subtree pulling is to verify that the library can be built against its dependencies, not against the full boost/ tree as we currently do.
I still don't see the difference. What do you win by pulling part of the tree? Disk space? Every time you pull you get the same files, just different subsets.
A release also doesn't require any new tools; it can be done manually by the release manager. We may be able to streamline it with tools in the future, of course.
A release still requires all libraries to be tested together, right?
Right. The release process basically consists of integration testing.
And this is what we should be avoiding. There should not be a testing stage during the release. Let's take a look at this from a different perspective: what in my proposal do you find incorrect? What are the problems for you personally if we do this? Can you list them all? Gennadiy

Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:016b01c7a6d2$7722c9a0$6407a80a@pdimov2...
Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:012301c7a6cc$59a39360$6407a80a@pdimov2...
My current development model is sync against CVS HEAD, do work, commit, check test results, fix. My use model is sync against CVS HEAD, compile project, yell at whoever introduced a regression in the boost component I'm using. This works well for me and I'd like to keep working in a similar way.
IMO this is not the desirable scheme in general. Actually this is exactly what we *should not* be doing IMO.
It works for me.
As a Boost user, I simply don't use Boost components whose HEAD versions are unstable.
An "average" Boost user is primarily interested in the latest released version.
Almost; the user is typically interested in a version that works and contains the libraries s/he needs. A suitable release may not exist (yet).
As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
What if you depend on serialization, or a GUI lib, or an XML parser? It might not be possible to "reimplement" all your dependencies.
In this case I'll fix these libraries myself.
And this is not a good practive in general IMO. Since you are causing breakage to "single definition rule" on library level.
There is no observable downside to this conceptual breakage, whereas the breakage resulting from a failed dependency is quite visible.
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If trunk is stable, how do I test my development?
You make an incremental change, test it locally, then commit to trunk? I didn't mean "stable" as in "guaranteed to pass all tests", more like "stable enough for practical use".
If I am done with my development, when can I put it into the "stable" trunk? What if I break something? What if N libraries merge their changes at the same time? How long will it take to sort it out?
Large commits are always a problem. My suggestion is that we should simply avoid large commits. If the incremental steps towards the goal are visible in the trunk, they can be tested and the problems can be fixed as they appear, rather than as one large batch at the end of the day.
That's a good goal. I support it. But this tree could exist as a reflection of the actual tree (using svn externals):
boost/
    foo/     -> foo/trunk/boost/foo
    foo.hpp  -> foo/trunk/boost/foo.hpp
    bar/     -> bar/trunk/boost/bar
    bar.hpp  -> bar/trunk/boost/bar.hpp
Now run svn update in this directory and you pull all you need.
As I understand it there are technical problems with svn:externals that make the above not work as well as it could. But it's possible.
How can I release and test my subset if I can't compile with the trunk version of a library I depend on? I don't really care about the latest changes; I would be happy to work with the last stable version (the last Boost release).
You can compose a release by using a specific library version. It should be possible to use the version from the last release as a starting point.
I still don't see the difference. What do you win by pulling part of the tree?
Test failures if a library #includes "boost/foo/something.hpp" without foo being listed as a dependency. Without this check, dependencies tend to find their way into your library while you're asleep. :-)
Right. The release process basically consists of integration testing.
And this is what we should be avoiding. There should not be a testing stage during the release.
Let's take a look at this from a different perspective: what in my proposal do you find incorrect?
I see nothing incorrect per se in your proposal; it's quite good. But how do we get there from here? How many extra tools do we need? Can we implement it in small steps?

"Peter Dimov" <pdimov@mmltd.net> wrote in message news:018901c7a6d9$6d1e8820$6407a80a@pdimov2...
Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:016b01c7a6d2$7722c9a0$6407a80a@pdimov2...
Gennadiy Rozental wrote:
"Peter Dimov" <pdimov@mmltd.net> wrote in message news:012301c7a6cc$59a39360$6407a80a@pdimov2...
My current development model is sync against CVS HEAD, do work, commit, check test results, fix. My use model is sync against CVS HEAD, compile project, yell at whoever introduced a regression in the boost component I'm using. This works well for me and I'd like to keep working in a similar way.
IMO this is not the desirable scheme in general. Actually this is exactly what we *should not* be doing IMO.
It works for me.
As a Boost user, I simply don't use Boost components whose HEAD versions are unstable.
An "average" Boost user is primarily interrested in latest released version.
Almost; the user is typically interested in a version that works and contains the libraries s/he needs. A suitable release may not exist (yet).
This is the way things are now. In the future we will hopefully make Boost releases more frequent, so users don't have to go into the undesirable territory of unreleased libraries.
As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
What if you depend on serialization or GUI lib or XML parser. It might not be possible to "reimplement" all your dependencies.
In this case I'll fix these libraries myself.
Well, you may be the one who can easily fix any Boost library you use. An "average" Boost developer (me included) can't in general. There is also a huge problem with two developers making independent changes to the same component.
And this is not good practice in general, IMO, since you are breaking the "single definition rule" at the library level.
There is no observable downside to this conceptual breakage, whereas the breakage resulting from a failed dependency is quite visible.
There are quite observable downsides:
* You may cause an ODR violation for anyone who is using both your library and the dependent library.
* Your copied code most probably is not tested.
* You are now required to maintain this code, including porting it to new compilers.
* Users might be confused. They see class Foo used in your code and expect it to be from library libFoo, but the libFoo docs mention different interfaces. The libFoo developer now has to figure out where this discrepancy comes from.
And so on.
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If trunk is stable, how do I test my development?
You make an incremental change, test it locally, then commit to trunk?
I don't care about local testing for this discussion. I do my testing on the one compiler I have access to. Then I need to test against 30 other configurations. It may take me a month or two to clear up all the problems.
I didn't mean "stable" as in "guaranteed to pass all tests", more like "stable enough for practical use".
This is one of those "grey" terms I believe we have to avoid like the plague. Everyone has a different understanding of what "stable enough" means. We need formal definitions that do not leave space for doubt. The last released version of Boost is stable. No one is ever going to make any changes to it that disrupt my testing. And until I am ready to test against new versions of my dependencies, I would stick with it. Then I can add them one by one.
If I am done with my development, when can I put it into the "stable" trunk? What if I break something? What if N libraries merge their changes at the same time? How long will it take to sort it out?
Large commits are always a problem. My suggestion is that we should simply avoid large commits. If the incremental steps towards the goal are visible
How do you envision this happening ;)) With 100+ developers working simultaneously around the world, will I have to get a ticket to commit my changes?
in the trunk, they can be tested and the problems can be fixed as they appear, rather than as one large batch at the end of the day.
That's a good goal. I support it. But this tree could exist as a reflection of the actual tree (using svn externals):
boost/
    foo/     -> foo/trunk/boost/foo
    foo.hpp  -> foo/trunk/boost/foo.hpp
    bar/     -> bar/trunk/boost/bar
    bar.hpp  -> bar/trunk/boost/bar.hpp
Now run svn update in this directory and you pull all you need.
As I understand it there are technical problems with svn:externals that make the above not work as well as it could. But it's possible.
The problem is technical, meaning it should be easiest for us to fix ;)
How can I release and test my subset if I can't compile with truck version of library I depend on. I don't really care about latest changes? I would be happy to work with last stable version (last boost release)
You can compose a release by using a specific library version. It should be possible to use the version from the last release as a starting point.
I can compose whatever I want. How will I test it?
I still don't see the difference. What do you win by pullig npart of the tree?
Test failures if a library #includes "boost/foo/something.hpp" without foo being listed as a dependency. Without this check, dependencies tend to find their way into your library while you're asleep. :-)
This is achieved in my solution in a different (easier, IMO) way. If library A is missing from library B's dependency list, then A/<version>/boost is not added to the list of include paths and B fails to compile. This is the safest possible solution, don't you think?
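[Editor's note] A hedged sketch of the version-pinned variant described above (layout, paths, and names hypothetical): each dependency is a (library, version) pair, and only pinned entries contribute to the include path, so a missing or stale entry fails loudly at compile time.

```python
import os

BOOST_ROOT = "/path/to/BOOST_ROOT"  # hypothetical checkout location

def include_flags(depends):
    """depends maps library name -> pinned version (a release tag, or
    'trunk'). Each entry puts <lib>/<version> -- the directory holding
    that library's boost/ headers -- on the search path. If library A
    is missing from B's list, B's #include <boost/A/...> fails."""
    return ["-I" + os.path.join(BOOST_ROOT, lib, ver)
            for lib, ver in depends.items()]

# e.g. pin iostreams to the last release, track variant's trunk:
print(include_flags({"iostreams": "1.34.0", "variant": "trunk"}))
```

Compared with the unversioned DEPENDS scheme discussed earlier in the thread, the only extra state is the pinned version per dependency; the enforcement mechanism (the compiler's search path) stays the same.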
Right. The release process basically consists of integration testing.
And this is what we should be avoiding. There should not be testing stage during release.
Let's take a look on this from different prospective: what in my proposal you find incorrect?
I see nothing incorrect per se in your proposal; it's quite good. But how do we get there from here? How many extra tools do we need? Can we implement it in small steps?
I don't believe we need any extra tools. We potentially need to make a change in the build system to support explicit dependency specification. And we need to change the release packaging script to select components that are ready for release. For example, in comparison with Beman's proposal, this doesn't require any scripts or human effort to maintain a stable branch.

As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
What if you depend on serialization, or a GUI lib, or an XML parser? It might not be possible to "reimplement" all your dependencies. And this is not good practice in general, IMO, since you are breaking the "single definition rule" at the library level.
There are really only two possibilities: 1) fix the GUI lib, or 2) sever ties with it, reimplement the parts you need, and explain in the documentation how a GUI lib can be hooked by the user.
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If trunk is stable, how do I test my development?
If trunk is not stable, what motivation do you have for testing your code?
If I am done with my development when can I put it into "stable" trunk?
As soon as you're reasonably sure that it won't break anything.
What if I break something?
Then you have everyone screaming, and you hope that you can do a quick fix before someone reverts your changes to make the trunk stable again.
What if N libraries merge their changes at the same time?
This is not possible, changes are atomic.
How long will it take to sort it out?
It'll certainly take less time to sort out than if the trunk were unstable (and everyone more tolerant of bad commits.) Emil Dotchevski.

"Emil Dotchevski" <emildotchevski@hotmail.com> wrote in message news:BAY110-DAV9D1AAB78C539607616C5ED4210@phx.gbl...
As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
What if you depend on serialization, or a GUI lib, or an XML parser? It might not be possible to "reimplement" all your dependencies. And this is not good practice in general, IMO, since you are breaking the "single definition rule" at the library level.
There are really only two possibilities:
1) fix the GUI lib, or
2) sever ties with it, reimplement the parts you need, and explain in the documentation how a GUI lib can be hooked by the user.
Or use an older version of the GUI library that you know works.
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If trunk is stable, how do I test my development?
If trunk is not stable, what motivation do you have for testing your code?
I don't care about the trunk in general at all. I don't believe we need a notion of a Boost trunk whatsoever. I've got the trunk version of my library that I need to test. And my library depends on particular versions (possibly trunk) of other components.
If I am done with my development when can I put it into "stable" trunk?
As soon as you're reasonably sure that it wont break anything.
How can I be "reasonably sure" (yet another "grey" term) until I run the tests? And to run the tests I need to commit the changes. This is a chicken-and-egg problem.
What if I break something?
Then you have everyone screaming, and you hope that you can do a quick fix before someone reverts your changes to make the trunk stable again.
What if I am working on a port for a new compiler? I don't want anyone to test against my trunk version until I am done. And it may take me a month (because I went on vacation right in the middle ;0)
What if N libraries merge their changes at the same time?
This is not possible, changes are atomic.
Within an hour? Some cozy Saturday evening ....
How long will it take to sort it out?
It'll certainly take less time to sort out compared to if the trunk is unstable (and everyone is more tolerant to bad commits.)
*Why* should anyone but me care about my bad commit? In a reliable system no one should. Gennadiy

----- Original Message ----- From: "Gennadiy Rozental" <gennadiy.rozental@thomson.com> To: <boost@lists.boost.org> Sent: Monday, June 04, 2007 12:51 PM Subject: Re: [boost] Boost Development Environment proposal
"Emil Dotchevski" <emildotchevski@hotmail.com> wrote in message news:BAY110-DAV9D1AAB78C539607616C5ED4210@phx.gbl...
As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
What if you depend on a serialization or GUI lib or XML parser? It might not be possible to "reimplement" all your dependencies. And this is not good practice in general, IMO, since you are breaking the "single definition rule" at the library level.
There are really only two possibilities:
1) fix the GUI lib, or
2) sever ties with it, reimplement the parts you need, and explain in the documentation how a GUI lib can be hooked by the user.
or use an older version of the GUI library that you know works.
If your goal is to commit your changes at any price, sure, that's the third possibility. In reality, choosing this third option means you'll be doing more work in the long run, because at a later time you still have to switch to the last version of the GUI lib, at which point you're presented with 1) and 2) again.
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If trunk is stable, how do I test my development?
If trunk is not stable, what motivation do you have for testing your code?
I don't care about trunk in general at all. I don't believe we need a notion of a Boost trunk whatsoever. I've got the trunk version of my library I need to test. And my library depends on particular versions (possibly trunk) of other components.
If I am done with my development, when can I put it into the "stable" trunk?
As soon as you're reasonably sure that it won't break anything.
How can I be "reasonably sure" (yet another "grey" term) until I run the tests? And to run the tests I need to commit the changes. This is a chicken-and-egg problem.
Even if you can't run all tests by yourself, you can run at least some tests to be "reasonably sure". And if we require that HEAD is stable, this is an additional motivation for people to be more careful when committing changes.
What if I break something?
Then you have everyone screaming, and you hope that you can do a quick fix before someone reverts your changes to make the trunk stable again.
What if I am working on a port for a new compiler? I don't want anyone to test against my trunk version until I am done. And it may take me a month (because I went on vacation right in the middle ;0)
So what's the hurry to commit your changes then? :) The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much. At some point you are "reasonably sure" that your current version is stable enough, and you commit.
What if N libraries merge their changes at the same time?
This is not possible, changes are atomic.
Within an hour? Some cozy Saturday evening ....
In my opinion, the only way to deal with rapidly changing code base is to sync often. This can only work if you know that HEAD is stable (but of course if HEAD is bad, you can always sync to the previous version, or even revert the bad commit someone else did.)
How long will it take to sort it out?
It'll certainly take less time to sort out compared to if the trunk is unstable (and everyone is more tolerant to bad commits.)
*Why* should anyone but me care about my bad commit?
If your changes are relevant to anything, people will care about your (bad or good) commits.
In a reliable system no one should.
I don't see how a system with high tolerance to bad commits can produce consistently good results. Emil Dotchevski

"Emil Dotchevski" <emildotchevski@hotmail.com> wrote in message news:BAY110-DAV118A20B18F62537D70D809D4210@phx.gbl...
What if you depend on a serialization or GUI lib or XML parser? It might not be possible to "reimplement" all your dependencies. And this is not good practice in general, IMO, since you are breaking the "single definition rule" at the library level.
There are really only two possibilities:
1) fix the GUI lib, or
2) sever ties with it, reimplement the parts you need, and explain in the documentation how a GUI lib can be hooked by the user.
or use an older version of the GUI library that you know works.
If your goal is to commit your changes at any price, sure, that's the third possibility.
I don't like this sticker "at any price". It has a negative connotation. Otherwise, yes. If I don't care about the latest changes.
In reality, choosing this third option means you'll be doing more work in the long run, because at a later time you still have to switch to the last version of the GUI lib, at which point you're presented with 1) and 2) again.
Why more? EXACTLY the same amount. I need to make my changes and I need to make sure it works with your changes. I just propose to split this effort in time and do it in two independent steps.
How can I be "reasonably sure" (yet another "grey" term) until I run the tests? And to run the tests I need to commit the changes. This is a chicken-and-egg problem.
Even if you can't run all tests by yourself, you can run at least some tests to be "reasonably sure".
And if we require that HEAD is stable, this is an additional motivation for people to be more careful when committing changes.
How "reasonably sure" do I have to be? And how much "more careful"? 65% careful? With highly portable development, when your library is targeted to work on 30 different configurations, you can't be sure of anything. Nor can you be careful enough. The only way to be really sure is to *run the tests*. To run the tests you need to commit the changes.
What if I break something?
Then you have everyone screaming, and you hope that you can do a quick fix before someone reverts your changes to make the trunk stable again.
What if I am working on a port for a new compiler? I don't want anyone to test against my trunk version until I am done. And it may take me a month (because I went on vacation right in the middle ;0)
So what's the hurry to commit your changes then? :)
To test them! ;) I was doing porting, then I went on vacation, then I came back and continued.
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much. At some point you are "reasonably sure" that your current version is stable enough, and you commit.
You may be more or less sure that your own tests will pass (rather less - nothing is sure until code is committed and tested on all platforms). But what about the 1000 other tests that you never run, from different components that depend on you?
What if N libraries merge their changes at the same time?
This is not possible, changes are atomic.
Within an hour? Some cozy Saturday evening ....
In my opinion, the only way to deal with rapidly changing code base is to sync often. This can only work if you know that HEAD is stable (but of course if HEAD is bad, you can always sync to the previous version, or even revert the bad commit someone else did.)
I strongly disagree. This "catch the train" will lead us nowhere. Everyone should be doing development at their own pace, and I shouldn't have to worry about other people and their need to run some tests, nor they about mine. This decoupling is "the only way" IMO.
How long will it take to sort it out?
It'll certainly take less time to sort out compared to if the trunk is unstable (and everyone is more tolerant to bad commits.)
*Why* should anyone but me care about my bad commit?
If your changes are relevant to anything, people will care about your (bad or good) commits.
Why? They shouldn't. Until I am done with my changes.
In a reliable system no one should.
I don't see how a system with high tolerance to bad commits can produce consistently good results.
Easily ;) Do you see any practical problem with the approach I promote? Gennadiy

In reality, choosing this third option means you'll be doing more work in the long run, because at a later time you still have to switch to the last version of the GUI lib, at which point you're presented with 1) and 2) again.
Why more? EXACTLY the same amount. I need to make my changes and I need to make sure it works with your changes. I just propose to split this effort in time and do it in two independent steps.
It's more, because you had to spend time to make your changes work with the old version of the lib, then you need to make it work with the new version.
How can I be "reasonably sure" (yet another "grey" term) until I run the tests? And to run the tests I need to commit the changes. This is a chicken-and-egg problem.
Even if you can't run all tests by yourself, you can run at least some tests to be "reasonably sure".
And if we require that HEAD is stable, this is an additional motivation for people to be more careful when committing changes.
How "reasonably sure" do I have to be? And how much "more careful"? 65% careful? With highly portable development, when your library is targeted to work on 30 different configurations, you can't be sure of anything. Nor can you be careful enough. The only way to be really sure is to *run the tests*. To run the tests you need to commit the changes.
To run the tests on platform X, a tester who can run the tests for you needs to get your changes. Are you suggesting that the only way this could happen is to commit the changes to HEAD? The only thing you're accomplishing by committing your changes to HEAD is that they get merged with commits other people are making. This is a good thing, but it's better to instead sync with everyone else's (stable!) changes, and only commit your changes when you are reasonably sure they're good.
What if I break something?
Then you have everyone screaming, and you hope that you can do a quick fix before someone reverts your changes to make the trunk stable again.
What if I am working on a port for a new compiler? I don't want anyone to test against my trunk version until I am done. And it may take me a month (because I went on vacation right in the middle ;0)
So what's the hurry to commit your changes then? :)
To test them! ;) I was doing porting, then I went on vacation, then I came back and continued.
Is there a problem with not committing to HEAD before your changes have been tested?
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much. At some point you are "reasonably sure" that your current version is stable enough, and you commit.
You may be more or less sure that your own tests will pass (rather less - nothing is sure until code is committed and tested on all platforms). But what about the 1000 other tests that you never run, from different components that depend on you?
In that case, you can't be reasonably sure your changes are stable, and therefore you need to wait for them to be tested before you commit them.
What if N libraries merge their changes at the same time?
This is not possible, changes are atomic.
Within an hour? Some cozy Saturday evening ....
In my opinion, the only way to deal with rapidly changing code base is to sync often. This can only work if you know that HEAD is stable (but of course if HEAD is bad, you can always sync to the previous version, or even revert the bad commit someone else did.)
I strongly disagree. This "catch the train" will lead us nowhere. Everyone should be doing development at their own pace, and I shouldn't have to worry about other people and their need to run some tests, nor they about mine. This decoupling is "the only way" IMO.
If you could "not worry" about everyone else's changes, I'd agree -- but you can't. Sooner or later, you will have to face other developers' commits, and you'll have to make your code work with them. I think that the more you postpone this, the harder it is to accomplish, and the higher the risk for your changes to break HEAD -- in particular if someone else has committed an extensive (but bug-free) change.
How long will it take to sort it out?
It'll certainly take less time to sort out compared to if the trunk is unstable (and everyone is more tolerant to bad commits.)
*Why* should anyone but me care about my bad commit?
If your changes are relevant to anything, people will care about your (bad or good) commits.
Why? They shouldn't. Until I am done with my changes.
Right, so don't commit your changes until you're done with them.
In a reliable system no one should.
I don't see how a system with high tolerance to bad commits can produce consistently good results.
Easily ;) Do you see any practical problem with the approach I promote?
Yes, it makes producing stable releases harder. In your world, to make a stable release, you start with the current HEAD, assume nothing about it, run tests, fix bugs, run tests, etc. until it's stable, at which point you release it and start working on migrating bug fixes back to HEAD. For the next release, you start with no assumptions, and so forth (if I understand you correctly.) If you start with the assumption that "HEAD should always be stable", you'd be making more frequent releases, and everyone would be updating more often to stay in sync with the more frequent releases. I think you are afraid that the more frequent updates will slow you down, but I think that in the long run you're going to save time because you "see" changes that affect your work sooner and can deal with them locally without involving anyone else (besides testers, obviously you can't run all tests yourself.) Emil Dotchevski

"Emil Dotchevski" <emildotchevski@hotmail.com> wrote in message news:BAY110-DAV160C3BB33542732B238C65D4210@phx.gbl...
In reality, choosing this third option means you'll be doing more work in the long run, because at a later time you still have to switch to the last version of the GUI lib, at which point you're presented with 1) and 2) again.
Why more? EXACTLY the same amount. I need to make my changes and I need to make sure it works with your changes. I just propose to split this effort in time and do it in two independent steps.
It's more, because you had to spend time to make your changes work with the old version of the lib, then you need to make it work with the new version.
How can I be "reasonably sure" (yet another "grey" term) until I run the tests? And to run the tests I need to commit the changes. This is a chicken-and-egg problem.
Even if you can't run all tests by yourself, you can run at least some tests to be "reasonably sure".
And if we require that HEAD is stable, this is an additional motivation for people to be more careful when committing changes.
How "reasonably sure" do I have to be? And how much "more careful"? 65% careful? With highly portable development, when your library is targeted to work on 30 different configurations, you can't be sure of anything. Nor can you be careful enough. The only way to be really sure is to *run the tests*. To run the tests you need to commit the changes.
To run the tests on platform X, a tester who can run the tests for you needs to get your changes. Are you suggesting that the only way this could happen is to commit the changes to HEAD?
I don't care where. I just need them to be tested. The current system only tests HEAD.
Is there a problem with not committing to HEAD before your changes have been tested?
Yes. I do not know another way to test my changes.
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much. At some point you are "reasonably sure" that your current version is stable enough, and you commit.
You may be more or less sure that your own tests will pass (rather less - nothing is sure until code is committed and tested on all platforms). But what about the 1000 other tests that you never run, from different components that depend on you?
In that case, you can't be reasonably sure your changes are stable, and therefore you need to wait for them to be tested before you commit them.
How can anyone test against my changes if they are not committed?
In my opinion, the only way to deal with rapidly changing code base is to sync often. This can only work if you know that HEAD is stable (but of course if HEAD is bad, you can always sync to the previous version, or even revert the bad commit someone else did.)
I strongly disagree. This "catch the train" will lead us nowhere. Everyone should be doing development at their own pace, and I shouldn't have to worry about other people and their need to run some tests, nor they about mine. This decoupling is "the only way" IMO.
If you could "not worry" about everyone else's changes, I'd agree -- but you can't. Sooner or later, you will have to face other developers' commits, and you'll have to make your code work with them. I think that the more you postpone this, the harder it is to accomplish, and the higher the risk for your changes to break HEAD -- in particular if someone else has committed an extensive (but bug-free) change.
That's really my problem. If I am doing my own development, I don't want to be bothered with making sure it works with the latest commits in other parts of Boost. If I am in the maintenance stage, I will always test against the latest versions of the dependent components and catch issues as soon as they appear.
How long will it take to sort it out?
It'll certainly take less time to sort out compared to if the trunk is unstable (and everyone is more tolerant to bad commits.)
*Why* should anyone but me care about my bad commit?
If your changes are relevant to anything, people will care about your (bad or good) commits.
Why? They shouldn't. Until I am done with my changes.
Right, so don't commit your changes until you're done with them.
In a reliable system no one should.
I don't see how a system with high tolerance to bad commits can produce consistently good results.
Easily ;) Do you see any practical problem with approach I promote?
Yes, it makes producing stable releases harder.
In your world, to make a stable release, you start with the current HEAD, assume nothing about it, run tests, fix bugs, run tests, etc. until it's stable, at which point you release it and start working on migrating bug fixes back to HEAD. For the next release, you start with no assumptions, and so forth (if I understand you correctly.)
Did you read my post here: http://article.gmane.org/gmane.comp.lib.boost.devel/158491 In "my world" no migrating occurs at all. And there is no HEAD (at least in a global sense). Gennadiy

on Mon Jun 04 2007, "Emil Dotchevski" <emildotchevski-AT-hotmail.com> wrote:
What if I break something?
Then you have everyone screaming, and you hope that you can do a quick fix before someone reverts your changes to make the trunk stable again.
What if I am working on a port for a new compiler? I don't want anyone to test against my trunk version until I am done. And it may take me a month (because I went on vacation right in the middle ;0)
So what's the hurry to commit your changes then? :)
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much.
If that's true, it implies we have libraries with coupled implementation details and/or we have libraries with highly unstable APIs. The former should never happen IMO and the latter should happen rarely, with changes publicized well in advance. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
"Emil Dotchevski" <emildotchevski-AT-hotmail.com> wrote:
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much.
If that's true, it implies we have libraries with coupled implementation details [, that] should never happen IMO
Hmm. It's quite easy to depend by accident on implementation details. My favourite example is a component that doesn't document that it returns things in a particular order. However in version 1, for ease of implementation, it stores things in a std::map - and thus actually returns them in ascending order; in version 2 it upgrades to an unordered_map - and returns things in an arbitrary order. The result is that a dependent component breaks. The fault is in the dependent component (it relied on undocumented behaviour), but finding that before the change can only be done by the most intensive code review (and extensive documentation). Having said all that, I agree that libraries with coupled implementation details should be avoided - but we need to cope with accidental coupling. -- Martin Bonner Project Leader PI SHURLOK LTD Telephone: +44 1223 441434 / 203894 (direct) Fax: +44 1223 203999 Email: martin.bonner@pi-shurlok.com www.pi-shurlok.com

on Wed Jun 06 2007, "Martin Bonner" <Martin.Bonner-AT-pi-shurlok.com> wrote:
David Abrahams wrote:
"Emil Dotchevski" <emildotchevski-AT-hotmail.com> wrote:
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much.
If that's true, it implies we have libraries with coupled implementation details [, that] should never happen IMO
Hmm. It's quite easy to depend by accident on implementation details. My favourite example is a component that doesn't document that it returns things in a particular order. However in version 1, for ease of implementation, it stores things in a std::map - and thus actually returns them in ascending order; in version 2 it upgrades to an unordered_map - and returns things in an arbitrary order. The result is that a dependent component breaks. The fault is in the dependent component (it relied on undocumented behaviour), but finding that before the change can only be done by the most intensive code review (and extensive documentation).
Having said all that, I agree that libraries with coupled implementation details should be avoided - but we need to cope with accidental coupling.
Probably. I only said "should," not "does" never happen. In any case, it will be rare, and in the context of the discussion, if your library depends on my implementation details, it deserves to break when I update my library. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
on Wed Jun 06 2007, "Martin Bonner" <Martin.Bonner-AT-pi-shurlok.com> wrote:
David Abrahams wrote:
"Emil Dotchevski" <emildotchevski-AT-hotmail.com> wrote:
The more extensive the refactoring you're doing is, the more important it is for you to update often, so you don't deviate from everyone else's work too much.
If that's true, it implies we have libraries with coupled implementation details [, that] should never happen IMO
Hmm. It's quite easy to depend by accident on implementation details. My favourite example is a component that doesn't document that it returns things in a particular order. However in version 1, for ease of implementation, it stores things in a std::map - and thus actually returns them in ascending order; in version 2 it upgrades to an unordered_map - and returns things in an arbitrary order. The result is that a dependent component breaks. The fault is in the dependent component (it relied on undocumented behaviour), but finding that before the change can only be done by the most intensive code review (and extensive documentation).
Having said all that, I agree that libraries with coupled implementation details should be avoided - but we need to cope with accidental coupling.
Probably. I only said "should," not "does" never happen. In any case, it will be rare, and in the context of the discussion, if your library depends on my implementation details, it deserves to break when I update my library.
Hypothetically speaking - I'm not saying that this will be a problem in practice - you can't break your dependencies (on the -stable branch) even when they deserve it, since the automatic merge script will reject your change as causing regressions.

on Mon Jun 04 2007, "Emil Dotchevski" <emildotchevski-AT-hotmail.com> wrote:
In reliable system no one should.
I don't see how a system with high tolerance to bad commits can produce consistently good results.
It depends where you're committing things. One of the best reasons for branching in a traditional version control setup is to give authors a place to check in their partially-finished (i.e. "broken") work. That _improves_ results in numerous ways. Obviously, there has to be some kind of check in the system for bad commits, but only those that a library author declares to be "good," and thus, ready for release. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

in the previous thread, On Wed, Jun 06, 2007 at 08:49:14AM -0400, David Abrahams wrote:
[snip]
It depends where you're committing things. One of the best reasons for branching in a traditional version control setup is to give authors a place to check in their partially-finished (i.e. "broken") work. That _improves_ results in numerous ways. Obviously, there has to be some kind of check in the system for bad commits, but only those that a library author declares to be "good," and thus, ready for release.
Since we're talking about devel vs. stable and what the meaning of 'trunk' really is, I found Linus Torvalds' Google tech talk on git (which is the source control system for the Linux kernel) to be *very* interesting (fairly entertaining as well). http://www.youtube.com/watch?v=4XpnKHJAok8 He places a very high value on the ability to * branch at any time * merge easily * commit/branch/merge locally (not in the 'central' repository) Interesting the emphasis on git's being distributed... there is no 'central repository'. -t
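[Editor's note: for readers who haven't seen the talk, the three abilities Troy lists can be sketched in a few git commands. This is an illustration, not taken from the talk; it assumes git is installed, and the scratch directory, file, branch name, and identity are all made up.]

```shell
# Local-branch workflow: half-finished work is versioned on a private
# branch while the trunk stays stable for everyone else.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com   # hypothetical identity
git config user.name  dev

echo 'stable code' > lib.hpp
git add lib.hpp
git commit -qm 'stable baseline'
trunk=$(git symbolic-ref --short HEAD)  # 'master' or 'main', depending on git version

# Branch at any time: "broken" work-in-progress commits live here...
git checkout -qb port-new-compiler
echo 'work in progress' > lib.hpp
git commit -qam 'wip: compiler port, not ready'

# ...while the trunk is untouched for anyone syncing from it.
git checkout -q "$trunk"
cat lib.hpp                             # prints: stable code

# Merge easily, only when the author declares the work good.
git merge -q --no-edit port-new-compiler
cat lib.hpp                             # prints: work in progress
```

The point being debated above maps onto this directly: the in-progress port is committed (and therefore testable) the whole time, yet nothing reaches the trunk until the author merges.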

On Wed, Jun 06, 2007 at 12:33:16PM -0400, troy d straszheim wrote:
in the previous thread, On Wed, Jun 06, 2007 at 08:49:14AM -0400, David Abrahams wrote:
[snip]
It depends where you're committing things. One of the best reasons for branching in a traditional version control setup is to give authors a place to check in their partially-finished (i.e. "broken") work. That _improves_ results in numerous ways. Obviously, there has to be some kind of check in the system for bad commits, but only those that a library author declares to be "good," and thus, ready for release.
Since we're talking about devel vs. stable and what the meaning of 'trunk' really is, I found Linus Torvalds' Google tech talk on git (which is the source control system for the Linux kernel) to be *very* interesting (fairly entertaining as well).
http://www.youtube.com/watch?v=4XpnKHJAok8
He places a very high value on the ability to
* branch at any time * merge easily * commit/branch/merge locally (not in the 'central' repository)
Interesting the emphasis on git's being distributed... there is no 'central repository'.
Yes, git is really powerful. It took some time to enter my fingers, but I am now wishing to have something to be heavily branched/merged.. :) I find it perfect for kernel development, but in many other contexts you end up using it as a super-doped cvs/svn (probably it is just me, burned by cvs/svn..). Moreover, it is not clear (to me) how it behaves under Windows, especially with all those SHA1 digests and the crlf differences between the Unix/Windows worlds.. cheers domenico -----[ Domenico Andreoli, aka cavok --[ http://www.dandreoli.com/gpgkey.asc ---[ 3A0F 2F80 F79C 678A 8936 4FEE 0677 9033 A20E BC50

2007/6/6, Domenico Andreoli <cavokz@gmail.com>: <snip/>
Moreover, it is not clear (to me) how it behaves under Windows, especially with all those SHA1 digests and the crlf differences between the Unix/Windows worlds..
Linus said that it relied heavily on POSIX file systems in a way that wasn't easily ported. There would at least be efficiency degradation (if it matters). He also spoke of 22000 files during each merge. It seemed like he handled the complete kernel as one unit. As with KDE, I didn't get the handling of sublibraries. Are they not released (tagged) independently of the whole? I've always thought that system releases should be built on smaller releases. Is that not the case anymore? I could have been misunderstanding something. /$

troy d straszheim wrote:
Since we're talking about devel vs. stable and what the meaning of 'trunk' really is, I found Linus Torvalds' Google tech talk on git (which is the source control system for the Linux kernel) to be *very* interesting (fairly entertaining as well).
That talk actually made me aware of Monotone, which uses Boost!

on Wed Jun 06 2007, troy d straszheim <troy-AT-resophonic.com> wrote:
in the previous thread, On Wed, Jun 06, 2007 at 08:49:14AM -0400, David Abrahams wrote:
[snip]
It depends where you're committing things. One of the best reasons for branching in a traditional version control setup is to give authors a place to check in their partially-finished (i.e. "broken") work. That _improves_ results in numerous ways. Obviously, there has to be some kind of check in the system for bad commits, but only those that a library author declares to be "good," and thus, ready for release.
Since we're talking about devel vs. stable and what the meaning of 'trunk' really is, I found Linus Torvalds' Google tech talk on git (which is the source control system for the Linux kernel) to be *very* interesting (fairly entertaining as well).
I actually took the time to watch this talk. It is, as you imply, extremely enlightening.
He places a very high value on the ability to
* branch at any time * merge easily
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files. He notes that some people use GIT to solve SVN's merging deficiencies, which I find interesting.
* commit/branch/merge locally (not in the 'central' repository)
Interesting the emphasis on git's being distributed... there is no 'central repository'.
This part I have some trouble buying. In Linus' world, *his* repository is central... well, at least, it's the master from which releases are spun. In a project where we don't have a single arbiter for what goes into a release, I'm not sure we can have a master. Also, although he claims never to do backups, it's clear from Linus' talk that he has a complicated system with layers of firewalls, etc., protecting his data... which means that in a project like ours, individuals can't "play master" with the same level of reliability that Linus does. ...but I might be missing something :) -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

I am a big fan of DVCS systems, so I felt the need to chime in. See below. On 07/06/2007, at 16:21, David Abrahams wrote:
on Wed Jun 06 2007, troy d straszheim <troy-AT-resophonic.com> wrote:
[...]
* commit/branch/merge locally (not in the 'central' repository)
Interesting the emphasis on git's being distributed... there is no 'central repository'.
This part I have some trouble buying. In Linus' world, *his* repository is central... well, at least, it's the master from which releases are spun. In a project where we don't have a single arbiter for what goes into a release, I'm not sure we can have a master.
It is true that, by definition, there is no central repository. But by convenience, and for "big" projects, there will always be a more-or-less central repository. I have yet to see a project using a DVCS that does not declare a specific server to be the central one. (But, for example, I maintain some personal files in a DVCS system and I have no "central" repository; I just sync all the ones I have from time to time to get the updates.) But that's a good thing. This "central" repository has the blessing of the project maintainers, and it is supposed to contain decent code because only "official developers" are allowed to push to it after reviewing what they are pushing. However, everybody is free to start his own public repository if he wants to, to publish his changes. And this is easier than ever: no need to deal with patches, tarballs, or any other way to distribute changes for a project. Just pull from there and you get this or that experimental or non-official work. And what many people fail to realize: a DVCS may not bring _you_ (the "official" developers) any advantage over a centralized one. But it does really bring advantages to outsiders. They can just clone the central repository, make any changes to it without having to make them public, and painlessly keep it in sync with the master copy at the central repository. (I'm currently doing that to maintain some changes I'm working on for Linux/PS3.) Imagine, for example, the developer of a new library for Boost. He could integrate his library straight into the repository (a local copy, that is) from the very beginning and develop it in place. Even when doing so, he could easily sync with the latest changes on the server *without losing any history*. And when somebody had to review it, they could just pull from his modified tree and get a modified Boost distribution with the extra library built in.
Later on, after this new library had been reviewed, the maintainers of the central repository would just have to pull from this developer's repository to get the new library *and* all of its development history. And at last, I'd suggest looking at Monotone too (http://www.monotone.ca/); its tutorial is enlightening -- a quick read convinced me that DVCSs are just The Way to go. (Or if you want to play some more with Git, use the Cogito interface, which is simpler and more user-friendly, yet remains compatible with Git at the server level.) Kind regards, -- Julio M. Merino Vidal <jmmv84@gmail.com>
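The clone-and-resync workflow described above can be sketched with git commands. This is a self-contained illustration using local paths in place of real server URLs; the repository and file names (central, mywork, widget.hpp) are invented for the demo:

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for the project's blessed "central" repository.
git init -q central
( cd central \
  && echo 'struct widget {};' > widget.hpp \
  && git add widget.hpp \
  && git -c user.name=Upstream -c user.email=up@example.org commit -qm 'initial import' )

# An outsider clones it and commits privately, with the full history local.
git clone -q central mywork
( cd mywork \
  && echo '// experimental tweak' >> widget.hpp \
  && git -c user.name=Outsider -c user.email=out@example.org commit -qam 'local experiment' )

# Upstream moves on in the meantime...
( cd central \
  && echo 'docs' > README \
  && git add README \
  && git -c user.name=Upstream -c user.email=up@example.org commit -qm 'upstream change' )

# ...and the outsider painlessly resyncs, keeping all local history.
( cd mywork \
  && git -c pull.rebase=false -c user.name=Outsider -c user.email=out@example.org pull -q --no-edit )
```

After the pull, the outsider's copy contains both the local experiment and the upstream change, plus a merge commit recording how they came together.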

I have just watched it... It really opened my eyes! Git is still a young project, but I bet it will be a _very_ good choice in two or so years. It seems easy to convert a Subversion repository to Git, so we can switch again later :) Regards Matias

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Julio M. Merino Vidal Sent: Thursday, June 07, 2007 4:49 PM Subject: Re: [boost] torvalds on branching/merging (was Boost Development Environment proposal)
On 07/06/2007, at 16:21, David Abrahams wrote:
on Wed Jun 06 2007, troy d straszheim
<troy-AT-resophonic.com> wrote: [...]
* commit/branch/merge locally (not in the 'central' repository)
Interesting the emphasis on git's being distributed... there is no 'central repository'.
This part I have some trouble buying. In Linus' world, *his* repository is central... well, at least, it's the master from which releases are spun. In a project where we don't have a single arbiter for what goes into a release, I'm not sure we can have a master.
And here is what many people fail to realize: a DVCS may not bring _you_ (the "official" developers) any advantage over a centralized one, but it really does bring advantages to outsiders. They can clone the central repository, make any changes they like without having to publish them, and painlessly keep their copy in sync with the master at the central repository. (I'm currently doing that to maintain some changes I'm working on for Linux/PS3.)
I think those who prefer to use a DVCS can just use SVK on top of subversion (http://svk.bestpractical.com/). Thus you can have the best of both worlds. I also assume that you need a BDFL or a cabal, for a completely distributed repository to work. IMHO, such a project structure is not desirable and maybe dangerous for ongoing support (this naturally depends on the actual BDFL or cabal). I just can't imagine, say, Debian getting anything useful done by relying _exclusively_ on a DVCS. cheers, aa -- Andreas Ames | Programmer | Comergo GmbH | ames AT avaya DOT com Sitz der Gesellschaft: Stuttgart Registergericht: Amtsgericht Stuttgart - HRB 22107 Geschäftsführer: Andreas von Meyer zu Knonow, Udo Bühler, Thomas Kreikemeier

On Thu, Jun 07, 2007 at 04:49:01PM +0200, Julio M. Merino Vidal wrote:
I am a big fan of DVCS systems, so I felt the need to chime in. See below.
Hey Julio, Thanks a lot for chiming in. I found your comments very interesting (I'll comment directly in a minute, though I don't have much to add). I do have some questions about the talk... I went back through it and came across a section at ~45:00 where Linus mentions 'superprojects'. I'm trying to determine if this parallels the way I have been using subversion+externals. He is talking about performance and points out that, if your repository is very large, initial clone operations will be slow, since you have to download the entire history of the repository. He goes on to say that if you have multiple components, you should put them in separate repositories. You then have a 'superproject' that contains 'pointers' to the other projects. You keep separate projects separate. If you have a library, for instance, that is used by several components, you would put it in a separate repository. He mentions that the UI is still rough in this area. If you know about this stuff, could you discuss it a bit? Specifically, I'm wondering: - What does the syntax look like (just for the sake of concreteness)? - If I branch my superproject, what happens to my subprojects? Anything? - If I were working on a superproject and discovered that I needed to branch four components simultaneously (e.g. they were somehow interdependent and I wanted to untangle them), how would I do it? - How widespread is the use of superprojects in the kernel... how many projects does the top-level superproject contain? Thanks in advance, -t
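As a rough answer to the syntax question, here is what git's superproject support (the `submodule` command) looks like; the repository names (libfoo, super) are invented, and the demo is self-contained with local paths standing in for server URLs. The key point is that the superproject stores only a pointer -- a commit id -- to each subproject, so branching the superproject branches those pointers, not the subprojects' contents:

```shell
set -e
top="$(mktemp -d)"
cd "$top"

# A reusable component lives in its own repository...
git init -q libfoo
( cd libfoo \
  && echo 'int foo();' > foo.hpp \
  && git add foo.hpp \
  && git -c user.name=A -c user.email=a@example.org commit -qm 'libfoo: initial' )

# ...and the superproject records a pointer to a specific commit of it.
git init -q super
cd super
echo 'int main() {}' > main.cpp
git add main.cpp
git -c user.name=A -c user.email=a@example.org commit -qm 'superproject: initial'

# protocol.file.allow is only needed because the "remote" is a local path.
git -c protocol.file.allow=always submodule add "$top/libfoo" libfoo
git -c user.name=A -c user.email=a@example.org commit -qm 'add libfoo as a submodule'

# The superproject's tree stores the submodule as a commit id
# (mode 160000), not as the subproject's files:
git ls-tree HEAD libfoo
```

Branching four interdependent components simultaneously would mean branching each subproject's repository and committing the updated pointers in the superproject -- which matches Linus's remark that the UI here is still rough.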

On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
He notes that some people use GIT to solve SVN's merging deficiencies, which I find interesting.
Indeed it does. I am stuck with CVSNT until SVN 1.5 comes out with merge tracking added, for precisely this reason. Why SVN and not GIT? Because I (have to) use Windows for my day job, and GIT support under Windows is in its infancy. Native support under Windows is just about non-existent (mingw and cygwin are the only ways at the moment, and they require extra bits to support the shell/perl scripts in GIT).
This part I have some trouble buying. In Linus' world, *his* repository is central... well, at least, it's the master from which releases are spun. In a project where we don't have a single arbiter for what goes into a release, I'm not sure we can have a master.
I think the point with GIT is that *any* GIT repository can be referred to as a master because there is no real difference between the one that started it and the ones cloned from it. Development can occur in either repository and either can act as "client" (slave, or whatever) to the other.
Also, although he claims never to do backups, it's clear from Linus' talk that he has a complicated system with layers of firewalls, etc., protecting his data... which means that in a project like ours, individuals can't "play master" with the same level of reliability that Linus does.
The rest of the world is Linus' backup. pihl -- Change name before @ on From: to phil for direct email.

2007/6/7, Phil Richards <news@derived-software.ltd.uk>:
On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
I got the same impression as David. This was highlighted as something special to GIT. /$

on Thu Jun 07 2007, "Henrik Sundberg" <storangen-AT-gmail.com> wrote:
2007/6/7, Phil Richards <news@derived-software.ltd.uk>:
On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
I got the same impression as David. This was highlighted as something special to GIT. /$
It would be good to get a definitive answer about that. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

2007/6/7, David Abrahams <dave@boost-consulting.com>:
on Thu Jun 07 2007, "Henrik Sundberg" <storangen-AT-gmail.com> wrote:
2007/6/7, Phil Richards <news@derived-software.ltd.uk>:
On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
I got the same impression as David. This was highlighted as something special to GIT. /$
It would be good to get a definitive answer about that.
The GitFaq for "Why does git not track renames?" says: http://git.or.cz/gitwiki/GitFaq#head-f7dc61b87eab4db58fe90ce48cc1d47fd50e6be... On a second note, tracking renames is really just a special case of tracking how content moves in the tree. In some cases, you may instead be interested in querying when a function was added or moved to a different file. By only relying on the ability to recreate this information when needed, Git aims to provide a more flexible way to track how your tree is changing. /$
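The "recreate this information when needed" approach the FAQ describes can be checked directly: no rename is ever recorded at commit time, yet `git log --follow` and `git diff -M` reconstruct the rename from content similarity on demand. A self-contained sketch (file and repository names invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo
cd demo

printf 'int f() { return 1; }\n' > old_name.cpp
git add old_name.cpp
git -c user.name=A -c user.email=a@example.org commit -qm 'add f()'

git mv old_name.cpp new_name.cpp
git -c user.name=A -c user.email=a@example.org commit -qm 'rename the file'

# Nothing about the rename is stored in history; both commands below
# reconstruct it by comparing content between the two states:
git log --follow --oneline -- new_name.cpp   # lists both commits
git diff -M --summary HEAD~1 HEAD            # reports the rename
```

Without `--follow`, the log for new_name.cpp would stop at the rename commit; with it, git re-derives the file's earlier identity.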

On Fri, 08 Jun 2007 20:49:49 +0200, Henrik Sundberg wrote:
2007/6/7, David Abrahams <dave@boost-consulting.com>:
on Thu Jun 07 2007, "Henrik Sundberg" <storangen-AT-gmail.com> wrote:
2007/6/7, Phil Richards <news@derived-software.ltd.uk>:
On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
I got the same impression as David. This was highlighted as something special to GIT. /$ It would be good to get a definitive answer about that.
The GitFaq for "Why does git not track renames?" says: [...]
Yes, but this is substantially different from being able to *track* fragments being moved. Of course, if we are talking about *large* fragments, then, yes, it works. If we are talking about a single 10 line function being moved between two 1000 line long files, then, no, it does not track it in any real sense. I would classify "deals correctly with fragments of code moving across files" as being automatically able to apply a change made in one branch to that 10 line function after it has been moved to another file in a different branch. I'm pretty sure that no state-based analysis of files could do that... (which is all that GIT does, can do, or wants to do). What GIT will allow you to do is do a search for fragments of files, and identify where they appear... it's not quite the same thing. pihl -- Change name before @ on From: to phil for direct email.
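The "search for fragments of files" mentioned above is git's so-called pickaxe (`git log -S`): it lists every commit that changed the number of occurrences of a string, so a moved function shows up both at the commit that added it and at the commit that moved it -- which, as Phil says, identifies where the fragment appears but does not merge changes to it across the move. A self-contained sketch (all names invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo
cd demo
ci() { git add -A && git -c user.name=A -c user.email=a@example.org commit -qm "$1"; }

printf 'int answer() { return 42; }\n' > a.cpp
printf '// utilities\n' > b.cpp
ci 'initial'

# Move the function from a.cpp into b.cpp.
: > a.cpp
printf '// utilities\nint answer() { return 42; }\n' > b.cpp
ci 'move answer() into b.cpp'

# The pickaxe finds both the commit that introduced the fragment and
# the commit that moved it between files:
git log -S'return 42' --oneline --all
```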

My MacPorts describes GIT as "the stupid content tracker"... Has anybody used GIT outside of Linux kernel development? Should we not focus our energy elsewhere, on something more pragmatic, such as porting Boost to D? ;-) I was pretty impressed by GIT as well when testing it out, and even more so by testing 'darcs' out. But one always seems to need a central repository anyway, which kind of contradicts - and perhaps even counteracts - the distributed nature of these systems. As David Abrahams points out, these distributed efforts usually have a - officially or not - central node, which mirrors the quasi-shared development of those projects... /David On Jun 7, 2007, at 4:02 PM, Henrik Sundberg wrote:
2007/6/7, Phil Richards <news@derived-software.ltd.uk>:
On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
I got the same impression as David. This was highlighted as something special to GIT. /$

On 07/06/2007, at 21:46, Phil Richards wrote:
On Thu, 07 Jun 2007 10:21:10 -0400, David Abrahams wrote:
Yeah; I get the impression that GIT even deals correctly with fragments of code moving across files.
I believe that that impression is incorrect. Because GIT tracks file state only (no explicit rename or copy tracking), it uses a similarity comparison between states to try to identify when a rename actually occurred, so it can track the "history" of content. If a fragment is moved, the similarity check will not identify that fragment. The two files will end up being viewed as completely independent by all parts of GIT, including the merge algorithms.
I would like to point out that recent (FSVO recent) versions of Monotone (those with "rosters" support) have complete support to track everything, which includes renames, moves and changes to directories. -- Julio M. Merino Vidal <jmmv84@gmail.com>

on Thu Jun 07 2007, Phil Richards <news-AT-derived-software.ltd.uk> wrote:
This part I have some trouble buying. In Linus' world, *his* repository is central... well, at least, it's the master from which releases are spun. In a project where we don't have a single arbiter for what goes into a release, I'm not sure we can have a master.
I think the point with GIT is that *any* GIT repository can be referred to as a master because there is no real difference between the one that started it and the ones cloned from it. Development can occur in either repository and either can act as "client" (slave, or whatever) to the other.
Sure. My point is that if you're going to release a body of code, you have to at some point agree where the master repo for that code is. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

on Thu Jun 07 2007, Phil Richards <news-AT-derived-software.ltd.uk> wrote:
Also, although he claims never to do backups, it's clear from Linus' talk that he has a complicated system with layers of firewalls, etc., protecting his data... which means that in a project like ours, individuals can't "play master" with the same level of reliability that Linus does.
The rest of the world is Linus' backup.
So he claims. And yet, he says he keeps his email so well-hidden that he can't touch it when he travels. So clearly he's not replicating all his data everywhere. It's fine not to back up if you're sure that 50 other people you trust -- or even 5 -- are constantly replicating your data in real time. Most of us won't have that assurance. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
on Thu Jun 07 2007, Phil Richards <news-AT-derived-software.ltd.uk> wrote:
Also, although he claims never to do backups, it's clear from Linus' talk that he has a complicated system with layers of firewalls, etc., protecting his data... which means that in a project like ours, individuals can't "play master" with the same level of reliability that Linus does.
The rest of the world is Linus' backup.
So he claims. And yet, he says he keeps his email so well-hidden that he can't touch it when he travels. So clearly he's not replicating all his data everywhere. It's fine not to back up if you're sure that 50 other people you trust -- or even 5 -- are constantly replicating your data in real time. Most of us won't have that assurance.
I can understand how GIT works fine if you're Linus and everyone pulls from you all the time, so you have backups. But what about the guy off playing with some feature that is in its infancy? Surely no one cares about that yet, so no one is pulling it. Does GIT automatically push other people's repositories onto your machine even if you don't care about them, or does your work just not get backed up until someone notices it? Also, to have access to the history of a project while you are disconnected, all the history needs to be on your machine, right? That seems like it would have to be unnecessarily large. I'm just not convinced yet. Thanks, Michael Marcin

On Thu, Jun 07, 2007 at 10:02:05PM -0500, Michael Marcin wrote:
David Abrahams wrote:
on Thu Jun 07 2007, Phil Richards <news-AT-derived-software.ltd.uk> wrote:
Also, although he claims never to do backups, it's clear from Linus' talk that he has a complicated system with layers of firewalls, etc., protecting his data... which means that in a project like ours, individuals can't "play master" with the same level of reliability that Linus does.
The rest of the world is Linus' backup.
So he claims. And yet, he says he keeps his email so well-hidden that he can't touch it when he travels. So clearly he's not replicating all his data everywhere. It's fine not to back up if you're sure that 50 other people you trust -- or even 5 -- are constantly replicating your data in real time. Most of us won't have that assurance.
I can understand how GIT works fine if you're Linus and everyone pulls from you all the time, so you have backups. But what about the guy off playing with some feature that is in its infancy? Surely no one cares about that yet, so no one is pulling it. Does GIT automatically push other people's repositories onto your machine even if you don't care about them, or does your work just not get backed up until someone notices it?
You can always set up a fair number of backups and push your work to them. Some of them might be the publicly visible repositories other people pull your stuff from. Some branches live here and some there, perhaps spread across the different machines you are working on. This is a power which needs to be balanced with some control: your discipline. -----[ Domenico Andreoli, aka cavok --[ http://www.dandreoli.com/gpgkey.asc ---[ 3A0F 2F80 F79C 678A 8936 4FEE 0677 9033 A20E BC50
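The "push your work to several backups" idea amounts to registering multiple remotes and pushing to each; a self-contained sketch, with local bare repositories (names invented) standing in for remote backup machines:

```shell
set -e
cd "$(mktemp -d)"

# Two bare repositories playing the role of backup servers.
git init -q --bare backup1.git
git init -q --bare backup2.git

# The working repository with some unpublished work in it.
git init -q work
cd work
echo 'some work in progress' > notes.txt
git add notes.txt
git -c user.name=A -c user.email=a@example.org commit -qm 'work in progress'

# Register each backup as a remote and push the current branch to both.
git remote add backup1 ../backup1.git
git remote add backup2 ../backup2.git
branch="$(git symbolic-ref --short HEAD)"
git push -q backup1 "$branch"
git push -q backup2 "$branch"
```

Each push replicates the full history, so losing the working machine loses nothing that was pushed.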

troy d straszheim wrote:
in the previous thread, On Wed, Jun 06, 2007 at 08:49:14AM -0400, David Abrahams wrote:
[snip]
It depends where you're committing things. One of the best reasons for branching in a traditional version control setup is to give authors a place to check in their partially-finished (i.e. "broken") work. That _improves_ results in numerous ways. Obviously, there has to be some kind of check in the system for bad commits, but only those that a library author declares to be "good," and thus, ready for release.
Since we're talking about devel vs. stable and what the meaning of 'trunk' really is, I found Linus Torvalds' google tech talk on git (which is the source control system for the linux kernel) to be *very* interesting (fairly entertaining as well).
http://www.youtube.com/watch?v=4XpnKHJAok8
He places a very high value on the ability to
* branch at any time * merge easily * commit/branch/merge locally (not in the 'central' repository)
Git sounds very interesting, but with boost in subversion, it is relatively simple for anybody interested, as a group or as individuals, to do many of these things now. Check out SVK, which extends the functionality of subversion on the client side with local repositories/depots, adding many of the features Torvalds talks about. I am not claiming anything about its quality or feasibility for others, as I am not a user, but it seems like a tool I would use rather than basic CVS/SVN. http://svk.bestpractical.com/view/SVKForSubversion http://perlcabal.org/~audreyt/svk-overview.png http://svkbook.elixus.org/nightly/en/index.html
Interesting the emphasis on git's being distributed... there is no 'central repository'.
Well, as long as everybody looks at one of the repositories as the "official" one, it really does not make that much of a difference, does it? You can have a distribution of repositories forming implicit branches, even with the more tool-inherent concept of a centralized repository you find in the SVN/SVK model. -- Bjørn

On Fri, Jun 08, 2007 at 01:07:35AM +0200, Bjørn Roald wrote:
Git sounds very interesting, but with boost in subversion, it is relatively simple for anybody interested, as a group or as individuals, to do many of these things now. Check out SVK, which extends the functionality of subversion on the client side with local repositories/depots, adding many of the features Torvalds talks about. I am not claiming anything about its quality or feasibility for others, as I am not a user, but it seems like a tool I would use rather than basic CVS/SVN.
boh.. svk, i tried to use it without much success... it looks like the meta-cvs of subversion... better to forget. git is a (scary) charm not suited for boost development. i hope merge tracking will soon be released _into_ subversion. monotone, darcs, aegis, perforce.. i didn't know there were so many VCSs around... when did they all come out? :) cheers domenico -----[ Domenico Andreoli, aka cavok --[ http://www.dandreoli.com/gpgkey.asc ---[ 3A0F 2F80 F79C 678A 8936 4FEE 0677 9033 A20E BC50

On Fri, 2007-06-08 at 03:08 +0200, Domenico Andreoli wrote:
git is a (scary) charm not suited for boost development. i hope merge tracking will soon be released _into_ subversion.
It's meant to be there by late summer. They are already beta-testing, as far as I understand.
monotone, darc, aegis, perforce.. i didn't know there were so many VCS around... when did they come out? :)
When a problem remains unsolved, solutions prosper! Sohail

Domenico Andreoli wrote: [snip]
git is a (scary) charm not suited for boost development. i hope merge tracking will soon be released _into_ subversion.
Did someone mention merge tracking? I'll say Perforce (not a sales pitch, I'm just a satisfied user).
monotone, darcs, aegis, perforce.. i didn't know there were so many VCSs around... when did they all come out? :)
Well, this was what triggered my post in the first place. Perforce came out some ten+ years ago :-) Enough OT for now / Johan

On Fri, Jun 08, 2007 at 03:08:43AM +0200, Domenico Andreoli wrote:
On Fri, Jun 08, 2007 at 01:07:35AM +0200, Bj?rn Roald wrote:
Git sounds very interesting, but with boost in subversion, it is relatively simple for anybody interested, as a group or as individuals, to do many of these things now. Check out SVK, which extends the functionality of subversion on the client side with local repositories/depots, adding many of the features Torvalds talks about. I am not claiming anything about its quality or feasibility for others, as I am not a user, but it seems like a tool I would use rather than basic CVS/SVN.
boh.. svk, i tried to use it without much success... it looks like the meta-cvs of subversion... better to forget.
I had the same experience with svk, I thoroughly agree. -t

Bjørn Roald wrote:
He places a very high value on the ability to
* branch at any time * merge easily * commit/branch/merge locally (not in the 'central' repository)
Git sounds very interesting, but with boost in subversion, it is relatively simple for anybody interested, as a group or as individuals, to do many of these things now. Check out SVK, which extends the functionality of subversion on the client side with local repositories/depots, adding many of the features Torvalds talks about. I am not claiming anything about its quality or feasibility for others, as I am not a user, but it seems like a tool I would use rather than basic CVS/SVN.
I'm using SVK regularly, in particular to maintain patches to open-source projects while they are not yet approved for commit. It works just fine for me. Note that if all developers can create personal branches on the SVN server, you don't even need SVK much, especially as the upcoming SVN 1.5 will have merge tracking. - Volodya

on Mon Jun 04 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
If trunk is stable, how do I test my development?
If trunk is not stable, what motivation do you have for testing your code?
I don't care about trunk in general at all. I don't believe we need a notion of boost trunk whatsoever.
I've been asking myself why we're still talking about a "trunk," too. Can somebody explain why, and what it's supposed to mean? -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

On Wed, Jun 06, 2007 at 08:40:52AM -0400, David Abrahams wrote:
I've been asking myself why we're still talking about a "trunk," too. Can somebody explain why, and what it's supposed to mean?
Heh, nobody bit on this one. I'm feeling brave, I'll give it a try.

'trunk', as we CVS users typically understand it, is data. It's where you commit if you're a developer. It's a single branch in a single repository. If you want the latest stuff, you check out the 'trunk'. But this sense of 'trunk' also implies the typical CVS development process: everyone synchronizes their work to that one branch, and when that is stable, a branch is made for release; great effort is made to keep all development on one copy of the source tree, as branching/merging is typically difficult and expensive; etc. Since CVS is made to support development in a particular way, that way is fundamental to our understanding of 'trunk'.

So to talk about the 'trunk' in a CVS-less environment is to either be vague or to imply that the development process is CVS-like. You're talking about a location, and it is taken as given that the tools will dictate workflow (it's the trunk, of course, everybody knows what the trunk is for, there's no other way to do things)... but in the absence of CVS that isn't the case. You're leaving the workflow undefined.

I thought about this question as I rewatched the talk this morning: Linus is extolling the virtues of being distributed and outlines a scenario, the release process, wherein the development group is happily developing along on their branch. The verification group is pulling down code (that is, merging code into their branch) from the development group's branch as they like, tweaking a bit, doing their thing. The testing group, similarly, pulls down (merges) code from the verification group, tests, and gets releases out the door. As he describes it, all three groups are working, all the time. Upstream groups are for the most part blissfully unaware of what is happening later in the pipeline, except, presumably, to the extent that they get interrupted to assist with particularly nasty bugs or whatever.
(You could imagine a need to merge code backwards up the pipeline if the code that the development group was producing diverged enough from the testing group's code that merging stopped working.) There is a constant flow of code through this pipeline; predominantly, but not exclusively, in one direction.

So, where's the trunk? Do all of these branches represent one composite trunk? The testing group may have patches that the development group never knows about, as testing is always merging changes into what they've already released. So you could say that there is no 'trunk' in the scenario above... Perhaps there are many, all along this pipeline: a composite 'trunk', many locations in many repositories. But that's not the whole story, since those repositories are useless without the pipeline behavior that makes them 'trunk'. Think separation of data structures and algorithms. Trunk is a process *and* the data that it operates on. There are many varieties, and we tend to imply the CVS kind.

I notice some parallel between the devel-verification-testing pipeline that Torvalds describes and the proposed stable-devel boost pipeline. The abolishment of 'trunk' in favor of these other terms might make good sense; at least the terminology won't excite incorrect assumptions about the process.

There. I gave it a shot. -t

on Fri Jun 08 2007, troy d straszheim <troy-AT-resophonic.com> wrote:
On Wed, Jun 06, 2007 at 08:40:52AM -0400, David Abrahams wrote:
I've been asking myself why we're still talking about a "trunk," too. Can somebody explain why, and what it's supposed to mean?
Heh, nobody bit on this one. I'm feeling brave, I'll give it a try.
<snip>
There. I gave it a shot.
So was this your answer?
So to talk about the 'trunk' in a CVS-less environment is to either be vague or to imply that the development process is CVS like.
-- Dave Abrahams Boost Consulting http://www.boost-consulting.com

on Fri Jun 08 2007, troy d straszheim <troy-AT-resophonic.com> wrote:
On Wed, Jun 06, 2007 at 08:40:52AM -0400, David Abrahams wrote:
I've been asking myself why we're still talking about a "trunk," too. Can somebody explain why, and what it's supposed to mean?
...
So to talk about the 'trunk' in a CVS-less environment is to either be vague or to imply that the development process is CVS like.
I'll try to give a hopefully non-vague answer reflecting my idea of 'trunky' development. The trunk is where integration testing happens. If the testing resources can only cover one branch, the trunk is that branch. Development occurs on the trunk via incremental refactoring. Ideally, changes are committed only after a test cycle, so that the reason for a regression can be isolated with reasonable accuracy. The practice of a developer working on a branch (or on the local copy) for four months, then merging a major patch and breaking the world is discouraged. The practice of a developer working on a branch for four months and then not merging at all because the real world interfered in the meantime is discouraged. Non-incremental development is discouraged. Does this make sense? :-)

Peter Dimov wrote: [...]
I'll try to give a hopefully non-vague answer reflecting my idea of 'trunky' development.
The trunk is where integration testing happens. If the testing resources can only cover one branch, the trunk is that branch.
Doesn't it imply no more HEAD/branch dichotomy, or at the very least no tests on HEAD?
Development occurs on the trunk via incremental refactoring. Ideally, changes are committed only after a test cycle, so that the reason for a regression can be isolated with reasonable accuracy.
Still, developers should have private areas/branches/whatever to which they commit their intermediate stages, don't you think? That is, committing changes as you say above may actually be a case of merging.
The practice of a developer working on a branch (or on the local copy) for four months, then merging a major patch and breaking the world is discouraged. The practice of a developer working on a branch for four months and then not merging at all because the real world interfered in the meantime is discouraged. Non-incremental development is discouraged.
Does this make sense? :-)
It certainly does, but isn't that similar to what I wrote here: http://svn.boost.org/trac/boost/wiki/AlternateProcessProposal possibly with extremely short stages? Cheers, Nicola Musatti

Gennadiy Rozental said: (by the date of Mon, 4 Jun 2007 14:23:02 -0400)
What if you depend on serialization or GUI lib or XML parser.
Hold on. What Boost.GUI library are you talking about? A quick google reveals only some preparations for that (?) http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?BoostGUI Is there an ongoing effort in this direction? Just curious. -- Janek Kozicki

"Janek Kozicki" <janek_listy@wp.pl> wrote in message news:20070604224858.6d059484@szpak...
Gennadiy Rozental said: (by the date of Mon, 4 Jun 2007 14:23:02 -0400)
What if you depend on serialization or GUI lib or XML parser.
Hold on. What Boost.GUI library are you talking about?
Quick google reveals only some preparations for that (?)
http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?BoostGUI
is there an ongoing effort in this direction? Just curious.
Just an example of a potentially complicated library ;) Gennadiy

On Jun 4, 2007, at 2:01 PM, Peter Dimov wrote:
As a Boost user, I simply don't use Boost components whose HEAD versions are unstable.
Yep. One tends to learn this very, very quickly as a user of HEAD.
As a Boost developer, if a dependency takes too much time to stabilize, I sever ties with it and reimplement the parts I need. This is rare since I have low tolerance for dependencies anyway. :-)
Ha!
I understand that this mindset may be unusual. Still, I find the idea that the trunk is assumed to be unstable a bit odd. The trunk should be stable and everyone should work to keep it that way.
Yes. The problem is that the trunk becomes the Wild West when there is a release branch active, and it takes us *forever* to get it back into a release-ready state. It's a death spiral of a development process. Splitting into devel/stable is one way to fix it, because stable is based on *stable* code (i.e., what's on the 1.34.x branch), and devel is based on head (which is a bit of a mess at the moment). If "devel" became stable, would we need "stable"? - Doug

On Mon, Jun 04, 2007 at 07:57:00PM +0300, Peter Dimov wrote:
troy d. straszheim wrote:
(in directory BOOST_ROOT, say)
iostreams/
    include/
        boost/            # contains only dir 'iostreams'
            iostreams/
                *.hpp     # notice each project has its own include dir
    src/
        ...
    test/
        ...
variant/
    include/
        boost/            # contains only dir 'variant'
            variant/
                *.hpp
    test/
        ...
This works and is a good solution from a testing standpoint... but I don't support it for purely selfish reasons; it breaks my "CVS HEAD" use case. :-) I'd still like to have a 'trunk' from which I can 'svn update'.
I think you might misunderstand how this works. It does what you want; I'd taken your requirement as obvious. My workspace (CVS HEAD) contains some svn:externals that look like this:

iostreams https://svn.boost.org/svn/projects/iostreams/trunk
variant https://svn.boost.org/svn/projects/variant/trunk
date_time https://svn.boost.org/svn/projects/date_time/trunk
(etc)

which means the code layout looks like the one above. When I svn update, each of those subdirectories is updated recursively. I can make one commit across multiple projects. -t
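For reference, the workspace troy describes would be produced by an svn:externals property on the workspace root. A sketch of what the property value might look like, reusing the URLs from his post (the file name externals.txt and exact property layout are illustrative assumptions, not something shown in the thread):

```
# externals.txt -- value of the svn:externals property on the workspace root,
# set with:  svn propset svn:externals -F externals.txt .
iostreams https://svn.boost.org/svn/projects/iostreams/trunk
variant   https://svn.boost.org/svn/projects/variant/trunk
date_time https://svn.boost.org/svn/projects/date_time/trunk
```

After committing the property and running svn update, each listed subdirectory is checked out from its own project trunk, giving the per-project layout shown earlier.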

troy d. straszheim wrote:
My workspace (CVS HEAD) contains some svn:externals that look like this:
iostreams https://svn.boost.org/svn/projects/iostreams/trunk
variant https://svn.boost.org/svn/projects/variant/trunk
date_time https://svn.boost.org/svn/projects/date_time/trunk
(etc)
which means the code layout looks like the one above. When I svn update, each of those subdirectories is updated recursively. I can make one commit across multiple projects.
We use one svn:external per project that points to a shared directory in my company's repository, and I find it a constant source of trouble. People forget that they must branch the external separately and make erroneous checkins. When they do branch the external, it generally gets a different name than the other branch, since it must account for the name of the project that is branching it.

Our server address is different for remote access and on-site access. The external must be updated locally to point to the external address, and if it gets checked in accidentally it breaks things. It may just be my client (TortoiseSVN), but if I show the log on the folder with the external, its contents are considered changes to an unrelated path and are hidden when that option is enabled.

Perhaps others have had better experiences. When I read the svn book and found out about externals, I reorganized our whole company's repository to make heavy use of them. Soon we realized the maintenance headache they bring and we changed it again. Now if I could just get rid of that last svn external, my life would be much easier.

As an aside, if it doesn't exist yet, an empty directory should be added in a readily accessible location that can be svn switched into. It helps usability a lot at our company.

Thanks, Michael Marcin

Michael Marcin wrote:
We use one svn:external per project that points to a shared directory in my company's repository. I find it a constant source of trouble. People forget that they must branch the external separately and make erroneous checkins.

In the proposed scheme, no one would ever branch the place where the externals are, only the imported subtrees.

When they branch the external, it generally gets a different name than the other branch, since it must account for the name of the project that is branching it.
This shouldn't matter either, because the externals should only refer to the stable versions of the libraries, and development branches shouldn't be referred to anyway.
Our server address is different for remote access and on-site access. The external must be updated locally to point to the external address, and if it gets checked in accidentally it breaks things.

I don't think Boost would have more than one address. If it does, this will be a real problem.

It may just be my client (TortoiseSVN), but if I show the log on the folder with the external, its contents are considered changes to an unrelated path and hidden when that option is enabled.
Don't know anything about this. Sebastian Redl
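Sebastian's point that externals should refer only to stable versions can be implemented by pinning each external to a fixed revision with svn:externals' optional -r argument (or, equivalently, by pointing at tags). A sketch reusing troy's URLs; the revision numbers here are invented for illustration:

```
# svn:externals pinned to specific revisions of each library's stable line
# (revision numbers are hypothetical examples)
iostreams -r 3841 https://svn.boost.org/svn/projects/iostreams/trunk
variant   -r 3790 https://svn.boost.org/svn/projects/variant/trunk
```

A pinned external never moves underneath you on svn update, so branching the importing tree cannot accidentally track someone else's development.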

Hallelujah!!! Here are my observations; mostly minor quibbles.

"Objectives"

"A developer may at any time request non-invasive tests of a library's development branch on any or all platforms against any stable branch of the other libraries. The non-invasive requirement ensures that problems in the development branch of the library being tested do not destabilize the stable branch being tested against. The on-demand requirement ensures that developers receive timely test results. Running tests against stable branches of other libraries ensures the tests are run in a stable environment."

This section doesn't seem to belong in "Objectives", as it presupposes the solution described subsequently in "Policies". I think it should be moved down to that section.

"Policies"

"Integration of a development branch into a stable branch only occurs after the integration is proven to be stable."

When I first read the document it wasn't clear to me whether it referred to one development branch, or one development branch for each library. I presume we mean the latter (plus one each for bjam, Quickbook and ?, see below).

"Reduction in tool fragility is a general goal, because tool fragility"

Tools for building and testing should be part of the same system. Currently Boost Test is part of this system, so that would be fine. Bjam.v2 should also be subject to the same procedure. Of course this would mean that tools like bjam and Quickbook should have their own test suites, which can be run on a separate branch until they are determined to be "release stable". In other words, tools for building boost should be subject to the same standards and procedures that boost libraries are.

"Scenarios"

"A library has been modified and tested locally on a development branch, and needs to be tested on other platforms. These tests must be against one or more of the stable branches.

Variation: the library is a dependency of other libraries, so they must also be tested to ensure they have not been broken by changes to the base library."

I don't think this second sentence is correct. If a library is tested against the stable branch and errors are found, they are either in the library tested or in another library already in the stable branch. If it's the former, the test fails and the library cannot be merged into the stable branch. If it's the latter, the stable branch has a bug and it should be handled according to the procedure for that case. The developer of the dependent library has the choice to either work around the problem or wait for the bug in the stable branch to be fixed.

"A request for the latest snapshot of one of the stable branches."

I don't understand what this means.

"[A] bug is found in one of the stable branches. The fix must be done in such a way that there is no possibility of the stable branch becoming unstable."

This should be handled as any other case. The bug is already there and has been released. It's probably been there for a while, so it's not cropping up everywhere. That is, it's not any more urgent than any other enhancement or bug. So I would say that the correct procedure for addressing a bug in the stable branch is:

a) open up the development branch.
b) make up a test which traps the bug and add the test to the library's test suite.
c) continue as with any other library enhancement.

Thus, though we can't guarantee perfection, this will guarantee continuous improvement.

"A new release of a compiler breaks existing libraries."

"A new release of a non-critical compiler breaks existing libraries."

"A library wishes to support a compiler that is not on the "release-critical"

I think the whole concept of classification of compilers is sort of a red herring, and adds work and complication in an attempt to address an unaddressable situation. Suppose we have two libraries:

a) boost.lambda - it is unrealistic and unproductive to use this library with an older compiler. This is not a problem, as no reasonable person is going to expect or attempt to do this.

b) boost.serialization - this library supports Borland compilers back to 5.51. Some users find this useful, and it doesn't cost anything to leave it in.

So I don't see how a boost-wide concept of "release critical" compiler can be defined. Boost already has the policy that code should compile on a standards-conforming compiler, and that it shouldn't be rejected if some nonconforming compiler (basically all of them) can't compile something in the library. So I would suggest that each "stable" version have a matrix which, for every combination of library/compiler, shows support or non-support. Actually, we already have this in the test matrix. So when a new release of compiler x comes out, there is nothing urgent to do - nothing is "broken"; it's just that that compiler supports boost only up to "version xxx" (more on that later) until library developers get around to catching up.

"A library is to be separately released. This separate release is to include both the library and its dependencies"

The desire or perceived necessity for a library to be separately releasable should be taken as a sign that the library developer has found the system lacking. That is, if the procedures realize the stated objectives, there should be no incentive to make a separate release. I envision that one might spend some time on the development branch, testing at home until things are as perfect as I can make them. Then I request a test to check against the compilers I don't have. A couple of iterations on that, and now I ask to be merged into the stable branch. Things should pass. (Since we've already tested my changes, we're just re-running the tests on all the other libraries, so the only surprises should come in libraries which depend on mine, if and only if I broke an interface.) At this point the "stable" branch is a "guaranteed improvement", so there is no point in NOT releasing it. So I would expect a new "release" every month or so. So why would a developer go through the hassle of a separate release just to get changes to users an average of two weeks sooner?

Miscellaneous

The term "stable" bugs me. It suggests "good enough", "not getting worse"; in general it's pessimistic. I would prefer "release" or "release candidate", which I think is more accurate anyway.

Robert Ramey

Beman Dawes wrote:
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
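The per-library/compiler support matrix Robert describes above amounts to a simple lookup table. A minimal sketch of the idea; all entries, names, and the `supports` helper are invented for illustration (the real data would come from the regression test matrix):

```python
# Hypothetical support matrix: (library, compiler) -> highest boost version
# supported, or None if the combination was never supported.
# Entries are made up for illustration only.
support = {
    ("serialization", "borland-5.51"): "1.34",
    ("serialization", "gcc-4.1"): "1.34",
    ("lambda", "borland-5.51"): None,  # unrealistic combination, never supported
}

def supports(lib, compiler):
    """Return the latest boost version for which lib supported compiler,
    or None if the pair is unsupported or unknown."""
    return support.get((lib, compiler))

print(supports("serialization", "borland-5.51"))  # -> 1.34
print(supports("lambda", "borland-5.51"))         # -> None
```

Under this view a new compiler release changes nothing urgently: existing matrix entries simply stop at the last boost version tested against it.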

"Beman Dawes" <bdawes@acm.org> wrote in message news:f3uemn$sa2$1@sea.gmane.org...
There is a fresh draft of a Boost Development Environment proposal up at http://mysite.verizon.net/beman/development_environment.html
Hi, Beman. The proposal is a move in the right direction. There are some "issues" with it IMO:

1. Overly formal notion of "stable" branch

From what I understand, the proposal goes to great lengths to make sure there is a stable branch. Along the way, several "procedures" are introduced, which require some discipline from developers, new scripts to be developed, and manual development branch test requests to be submitted.

2. "Half baked" notion of dependency

In places the proposal refers to library dependencies. It's unclear how they are supported, checked and enforced. For example, if the author of library A is ready to move his dev branch to stable, how can we make sure this change won't break library B that depends on it? And if it does break B, who is at fault?

3. There is no support for partial/independent release (at least in the policies)

4. No ability to test against a particular branch of dependent library A.

I believe most (all?) of these problems can be resolved with the solution I proposed shortly before BoostCon: http://article.gmane.org/gmane.comp.lib.boost.devel/158491 I see Troy pitching a similar idea for the directory structure in another post in this thread, so I won't repeat it. Let me just say that I implemented this approach a while ago in the make system I use at work, and it was worth all the effort. My solution formalizes the notion of independent components with their independent versions and the dependencies between them. The only tough spot is combining the "stable" umbrella release. The algorithm for it might require some fine tuning, but it's clear how it should work. Essentially, for every library A that changed from version N1 to version N2, you find a version N3, N1 <= N3 <= N2, such that all libraries that depend on A were released against that version. I can come up with a formal algorithm description if there is interest.

A couple of answers to possible questions:

* To request testing against particular releases of dependencies, you just change your Jamfile to point to them.
* To release your own library, the developer doesn't need to do anything; a single svn copy command will do. The Jamfile will include the versions of all the dependencies. (Optionally, svn externals can be used to simplify pulling a particular subset of boost even more.)
* No testing is required during release. Once an umbrella boost release is combined, it goes straight to packaging.

The way I see it, we have almost all we need to implement this. If svn externals work (and some people tell me they don't), we may not need to make any changes in the make system (by creating a reflection of the split-by-library tree into a combined tree). Gennadiy
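Gennadiy's umbrella-combination rule can be sketched in a few lines. This is only my reading of his description, with made-up integer version numbers; the function name and the data layout (a map from each dependent library to the set of versions of A it was released against) are assumptions for illustration:

```python
def pick_umbrella_version(n1, n2, dependent_releases):
    """Find the newest version N3 of library A with N1 <= N3 <= N2 such that
    every library depending on A was released against N3.

    dependent_releases maps each dependent library's name to the set of
    versions of A that library has been released against.
    Returns None if no candidate satisfies all dependents.
    """
    for n3 in range(n2, n1 - 1, -1):  # prefer the newest acceptable candidate
        if all(n3 in released for released in dependent_releases.values()):
            return n3
    return None

# Hypothetical example: A changed from version 3 to 6; B and C depend on A.
deps = {"B": {3, 4}, "C": {4, 5}}
print(pick_umbrella_version(3, 6, deps))  # -> 4, the only version both accept
```

Preferring the newest qualifying version is my assumption; the fine tuning Gennadiy mentions would presumably decide such tie-breaks.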
participants (39)
- Ames, Andreas (Andreas)
- Anthony Williams
- Beman Dawes
- Benjamin Kosnik
- Bjørn Roald
- Darren Garvey
- David Abrahams
- David Bergman
- Domenico Andreoli
- Douglas Gregor
- Edward Diener
- Emil Dotchevski
- Eric Niebler
- Gennadiy Rozental
- Henrik Sundberg
- Janek Kozicki
- Joel de Guzman
- Johan Nilsson
- John Phillips
- Julio M. Merino Vidal
- K. Noel Belcourt
- Martin Bonner
- Martin Wille
- Mathias Gaunard
- Matias Capeletto
- Michael Marcin
- Nicola Musatti
- Peter Dimov
- Phil Richards
- Rene Rivera
- Robert Ramey
- Sebastian Redl
- Sohail Somani
- Stefan Seefeld
- Stefano Delli Ponti
- Thomas Witt
- troy d straszheim
- troy d. straszheim
- Vladimir Prus