
Hi,

with 1.33.0 now five months old it is time to get the 1.34.0 release going. Well, to be honest, it's probably more than time. In order to improve our release process we are planning to adopt a staged release process that is modelled partly after the gcc release process. By and large it can be seen as a formalization of things already done in past releases. A description of the different stages is given at the end of this mail.

But let's get the most important issue out of the way first: the schedule. I think that we have enough features in CVS already to make 1.34 worthwhile. For that reason the main focus for 1.34 will be a release within a reasonable time, and I am going to aim for a feature freeze (Stage 3) two weeks from now. I ask anybody who cannot make that deadline for one reason or another to contact me directly so we can try to figure something out. Furthermore, for this release the feature freeze will also be the cut-off date for the addition of new libraries. I am planning to flesh out the rest of the schedule in a few days, once I have an idea of how we are doing with the feature freeze.

Thanks

Thomas

Boost Release Procedure (by Doug Gregor)
========================================

Boost development for each release will be divided into four stages, influenced by the GCC development model. The stages move from a very loose, open development model, where major changes can be made, toward a much more tightly-controlled model as the release date nears. Release managers are appointed for the duration of the release process for a particular version of Boost, and several Boost versions may be in progress at any point in time. However, actual release dates will be staggered to allow testing resources to be allocated to the most imminent release.

Stage 1 - Open Development
--------------------------

Major changes to libraries or infrastructure may be performed, including large updates or additions to core libraries (e.g., type traits) and tools (e.g., Boost.Build). New libraries and new features can be added freely.

Stage 2 - Intracomponent Open Development
-----------------------------------------

This stage restricts Stage 1 slightly by banning far-reaching changes to Boost. Major changes to libraries or infrastructure, or the addition of new features and libraries, can still be made in this stage. However, the changes must be limited in scope and may not have far-reaching effects. For instance, the build or regression testing systems in Boost cannot be changed at this point, and Boost libraries on which other major components of Boost depend (such as MPL, Type Traits, and Config) may not have large interface changes or be fundamentally rewritten. The Release Manager has final say regarding the classification of changes as "far-reaching" or not; if you are unsure, please ask.

Stage 3 - Feature freeze
------------------------

This stage is a bug-fixing stage. Bug fixes may be freely applied to Boost, even if they require substantial changes. Interfaces may not be changed (unless required to fix bugs), and new features should be added only sparingly and with Release Manager approval. New libraries may be introduced at this stage with Release Manager approval, but the introduction of new libraries is tentative: if a new library fails a significant number of its regression tests, it will be removed from the release.

Stage 4 - Release preparation
-----------------------------

Only bug fixes and documentation changes may be applied, and the changes should be as minimal and as safe as possible. If a change is large, or has the potential to cause additional failures, consult the Release Manager before applying the change to the repository. Changes that cause failures will be reverted after 48 hours if the failures are not addressed to the Release Manager's satisfaction. The beginning of Stage 4 will be marked by the creation of a new CVS branch for the release. At this point, the old CVS branch falls back to Stage 1 (for major releases, e.g., 1.XX.0) or Stage 3 (for minor releases, e.g., 1.XX.Y).

Final Freeze
------------

7 days prior to a release, the Release Manager will call for a final freeze on the CVS branch corresponding to the release. At this point, no changes may be made to the branch without explicit Release Manager approval.

Post-Release
------------

After a release is made, the changes made on the release branch should be merged back into its parent branch. The release branch reverts to Stage 4; should another minor release be scheduled on that branch, the Release Manager for that minor version may revert to Stage 3 or remain in Stage 4 until the point release is imminent.

Thomas Witt witt@acm.org

Thomas Witt wrote:
For that reason I am going to aim for a feature freeze (Stage 3) in two weeks from now.
Does that imply we are in 'stage 2' now, or are we jumping straight from 1 -> 3?
Stage 2 - Intracomponent Open Development ... However, the changes must be limited in scope and may not have far-reaching effects. For instance, the build or regression testing systems in Boost cannot be changed at this point ...
I ask, because there was recent discussion on the boost.build lists about moving to v2 for the next release, and if we are already in stage 2 it would seem too late to make such a change. At the least, I think the build maintainers and regression testers need to make a decision on this fast if we are heading back into release mode: will Boost 1.34 be based on boost build v1 or boost build v2?

Second contentious issue: deprecations. We have already used a deprecation mechanism for warning about and later removing libraries. There has been some recent discussion about stopping support for old compilers. This is a topic that is not going to go away until resolved, and I think we were getting close to some agreement (if not on which platforms should go!). I suggest we formally deprecate VC6 and GCC prior to 3, and have a bun-fight over Borland support. This would amount to adding a deprecation notice to the release notes and a #pragma warning to the compiler config.hpp, but keeping full support through this release as before. We can argue about what the consequence of deprecation should be for the next release. -- AlisdairM
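For the curious, a deprecation notice of the kind described could look roughly like the following config-header fragment. This is a hypothetical sketch for illustration only, not the actual Boost.Config code; it assumes VC6 reports _MSC_VER as 1200 and that pre-3 GCC exposes __GNUC__ < 3.

```cpp
// Hypothetical deprecation notice for a compiler config header.
// Illustration only -- not the actual Boost.Config mechanism.
#if defined(_MSC_VER) && !defined(__GNUC__) && _MSC_VER < 1300
   // MSVC has no #warning directive; #pragma message is the closest equivalent.
#  pragma message("NOTE: Visual C++ 6 is deprecated and may be unsupported in a future Boost release")
#elif defined(__GNUC__) && __GNUC__ < 3
#  warning "GCC releases prior to 3.0 are deprecated and may be unsupported in a future Boost release"
#endif
```

The notice costs nothing at runtime; it only surfaces once per translation unit at compile time, which is the behaviour AlisdairM's "full support through this release" proposal implies.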

So, I guess there is no chance of asio making the deadline? (Presuming it is accepted, of course) Dave M

Dave Moore wrote:
So, I guess there is no chance of asio making the deadline? (Presuming it is accepted, of course)
IF asio is accepted, every effort should be made to get it into the next release. If there is any hope of getting a networking library into C++0x, we need the usage experience now. (All IMHO, of course.) -- Eric Niebler Boost Consulting www.boost-consulting.com

On Wed, 25 Jan 2006 08:08:58 -0800, Eric Niebler wrote
Dave Moore wrote:
So, I guess there is no chance of asio making the deadline? (Presuming it is accepted, of course)
I'm sorry for the delay in the review results -- I hope to finish them this week.
IF asio is accepted, every effort should be made to get it into the next release. If there is any hope of getting a networking library into C++0x, we need the usage experience now. (All IMHO, of course.)
While I agree with the sentiment I find it unlikely that this could happen. If accepted, there will surely be some set of changes before asio is included. That process could take a substantial amount of time. Even if there were no changes it would probably take a month to get asio in... Jeff

Jeff Garland wrote:
On Wed, 25 Jan 2006 08:08:58 -0800, Eric Niebler wrote
Dave Moore wrote:
So, I guess there is no chance of asio making the deadline? (Presuming it is accepted, of course)
I'm sorry for the delay in the review results -- I hope to finish them this week.
IF asio is accepted, every effort should be made to get it into the next release. If there is any hope of getting a networking library into C++0x, we need the usage experience now. (All IMHO, of course.)
While I agree with the sentiment I find it unlikely that this could happen. If accepted, there will surely be some set of changes before asio is included. That process could take a substantial amount of time. Even if there were no changes it would probably take a month to get asio in...
I agree. To put it bluntly, I don't see any way asio can make it. But there is good news as well. Part of being very restrictive about what goes into 1.34 is the goal of having more frequent releases. The idea is that being strict will shorten the release cycle, and that will reduce the pressure for any specific feature to make a specific release. So much for the theory. Thomas -- Thomas Witt witt@acm.org

On Wed, 25 Jan 2006 08:08:58 -0800, Eric Niebler wrote
Dave Moore wrote:
So, I guess there is no chance of asio making the deadline? (Presuming it is accepted, of course)
I'm sorry for the delay in the review results -- I hope to finish them this week.
IF asio is accepted, every effort should be made to get it into the next release. If there is any hope of getting a networking library into C++0x, we need the usage experience now. (All IMHO, of course.)
While I agree with the sentiment I find it unlikely that this could happen. If accepted, there will surely be some set of changes before asio is included. That process could take a substantial amount of time. Even if there were no changes it would probably take a month to get asio in...
If recent releases are anything to go by, we may desire a quick release, but practically I'm expecting it to take 3+ months to get 1.34 out the door. If asio is available in a month, I do think it can make it into 1.34, even if only as a 'preview'. It is such an important library. I'm not complaining about the recent Boost release cycles; there is a fantastic amount of hard work and dedication by everyone. It's just that Boost releases are so large and monolithic at present, perhaps too much for a single release manager co-ordinating a large number of volunteers.

While on the subject of releases, it has been mooted that perhaps Boost shouldn't be released as one monolithic package, and that the problem might be better tackled by a finer-grained approach with a two-tier release management structure. Individual 'modules' are released upwards (and available as versioned outputs) and a main Boost release is simply a collection of the latest inter-operable set of versioned modules. The aim is to spread the release management load but also to allow individual modules to support 'releases' more frequently than the top level. Regards Paul Baxter

On Jan 26, 2006, at 5:58 AM, Paul Baxter wrote:
While on the subject of releases, it has been mooted that perhaps Boost shouldn't be released as one monolithic package and that the problem might be better tackled by a finer grained approach and having a two-tier release management structure. Individual 'modules' are released upwards (and available as versioned outputs) and a main boost release is simply a collection of latest inter-operable set of versioned modules.
The problem with a fine-grained approach is that all of the grains have to fit together. We have to manage version dependencies between the components (e.g., to deal with breaks in backward compatibility), and deal with user problems that arise from mismatched component versions. We'd have to push our overly-taxed regression testing systems harder to cover various collections of modules.

I'm not completely against the finer-grained approach, but I've seen enough problems with it in other projects to cause some concern. Have you ever maintained a Linux system using Gentoo? It's nearly impossible to duplicate (and, thus, diagnose) errors from one system to the next, because nobody has exactly the same set of components. Often, compiling a new component will break because of some minor incompatibility with another component, forcing you to roll back something. Cygwin has the same issue, although the problems there usually result in weird instabilities due to misunderstood interactions between different versions of the components.

Where has the fine-grained approach worked well? How can Boost duplicate their model to successfully deploy a more modular Boost? How much effort would be involved, both in the conversion and in maintaining the resulting modular Boost? I could spew more questions, but my underlying question is much broader: while component-oriented systems work wonderfully on paper, I've seen far more failures than successes, and I'd like to see why we think Boost can get itself into the latter category. Doug

Douglas Gregor wrote:
I'm not completely against the finer-grained approach, but I've seen enough problems with it in other projects to cause some concern. Have you ever maintained a Linux system using Gentoo? It's nearly impossible to duplicate (and, thus, diagnose) errors from one system to the next, because nobody has exactly the same set of components. Often, compiling a new component will break because of some minor incompatibility with another component, forcing you to roll back something.
I agree completely. I've been using Gentoo for more than a year, updating it regularly. Most of the time it was fine, thanks to the great Gentoo team. But sometimes I had to resolve errors myself. Finally, when it failed to update completely because python stopped working, I decided to install FreeBSD. Two weeks later there was another fault and I replaced my Gentoo with FreeBSD. Gentoo is too flexible. -- Alexander Nasonov

On Jan 26, 2006, at 5:58 AM, Paul Baxter wrote:
While on the subject of releases, it has been mooted that perhaps Boost shouldn't be released as one monolithic package and that the problem might be better tackled by a finer grained approach and having a two-tier release management structure. Individual 'modules' are released upwards (and available as versioned outputs) and a main boost release is simply a collection of latest inter-operable set of versioned modules.
The problem with a fine-grained approach is that all of the grains have to fit together. We have to manage version dependencies between the components (e.g., to deal with breaks in backward compatibility), and deal with user problems that arise from mismatched component versions. We'd have to push our overly-taxed regression testing systems harder to cover various collections of modules.
I'm not completely against the finer-grained approach, but I've seen enough problems with it in other projects to cause some concern. Have you ever maintained a Linux system using Gentoo? It's nearly impossible to duplicate (and, thus, diagnose) errors from one system to the next, because nobody has exactly the same set of components.
I'm in complete agreement about Gentoo and Boost at too fine a granularity, but I had envisaged a process that involves looking at library dependencies, trying to capture a Boost 'core' package, and then building satellite packages around it. Many of the Boost libraries depend on a couple of key libraries and are then largely independent of the majority of the rest. You'll never find a completely satisfactory core set of libraries that works for everyone, but it also seems wrong to delay a 'core' release because one non-core library has a few problems with a couple of compilers. Why not release a new core, and then when the satellite library is ready it can play catch-up? A Boost 'full' release could occur at the point where all the satellite packages have caught up with the core version. This devolves responsibility to the satellite packages, lessening (in theory) the release manager's workload. Of course some 'satellite' packages are going to be dependent on others, so there still needs to be some level of coordination between them, but I'd much prefer the core to be available as a tested entity even if, say, a serialisation or an asio library still has some final gotchas to sort out.
Where has the fine-grained approach worked well? How can Boost duplicate their model to successfully deploy a more modular Boost?
I'm thinking linux kernel = core, userspace programs = satellites. Also perhaps larger userspace efforts like KDE that are already managed as multiple packages dependent on a core set of headers and functionality.
How much effort would be involved both in the conversion and in maintaining the resulting modular Boost? I could spew more questions, but my underlying question is much more broad: While component- oriented systems work wonderfully on paper, I've seen far more failures than successes, and I'd like to see why we think Boost can get itself into the latter category.
I certainly agree with this and accept the caution. However, I worry that Boost is becoming like the ACE framework: very well regarded, but with a steep learning curve and a bit of an 'all or nothing' proposition. That applies both to test/release and to usage. If all libraries depended on many others and dependencies really couldn't be factored into a core with some satellites, this approach gets blown out of the water, of course. Paul

AlisdairM wrote:
Thomas Witt wrote:
For that reason I am going to aim for a feature freeze (Stage 3) in two weeks from now.
Does that imply we are in 'stage 2' now, or are we jumping straight from 1 -> 3?
Well, for all practical purposes, assume we are transitioning 1->3 for this release in order to get started.
I ask, because there was recent discussion on the boost.build lists about moving to v2 for the next release, and if we are already in stage 2 it would seem too late to make such a change.
At the least, I think the build maintainers and regression testers need to make a decision on this fast if we are heading back into release mode: Will Boost 1.34 be based on boost build v1 or boost build v2?
If v2 is up and running in two weeks it can make it in. My personal opinion is that this is a recipe for disaster. I would like to push it to 1.35.
Second contentious issue: deprecations. We have already used a deprecation mechanism for warning about and later removing libraries. There has been some recent discussion about stopping support for old compilers. This is a topic that is not going to go away until resolved, and I think we were getting close to some agreement (if not on which platforms should go!)
I suggest we formally deprecate VC6, GCC prior to 3, and have a bun-fight over Borland support. This would ammount to adding the deprecation notice to the release notes, a #pragma warning to the compiler config.hpp, but full support through this release as before.
I would not be opposed to this. Thomas -- Thomas Witt witt@acm.org

Thomas Witt <witt@acm.org> writes:
AlisdairM wrote:
Thomas Witt wrote:
At the least, I think the build maintainers and regression testers need to make a decision on this fast if we are heading back into release mode: Will Boost 1.34 be based on boost build v1 or boost build v2?
If v2 is up and running in two weeks it can make it in. My personal opinion is that this is a recipe for disaster. I would like to push it to 1.35.
Have you seen Vladimir's plan for the transition? It's solid. I think if we can accomplish all the necessary testing of the system within the two-week timeframe, the risk of using v2 for the release will be very low. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Have you seen Vladimir's plan for the transition?
Only briefly.
It's solid.
Ok.
I think if we can accomplish all the necessary testing of the system within the two week timeframe, the risk of using v2 for the release will be very low.
If you guys can manage that, I am ok with it. Well, actually, only as long as we can still fall back to v1 in case major problems occur. Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
AlisdairM wrote:
Thomas Witt wrote:
For that reason I am going to aim for a feature freeze (Stage 3) in two weeks from now. Does that imply we are in 'stage 2' now, or are we jumping straight from 1 -> 3?
Well for all practical purposes assume we are transitioning 1->3 for this release in order to get started.
So we are still in stage 1? And I can make changes to bjam... So that I can make a separate release as is customary before a Boost release. And required if we are also going to switch to BBv2. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - Grafik/jabber.org

I guess now is the time to formally request that QNX6 (AKA QNX Neutrino) be adopted as a supported platform for version 1.34. Regards Jim Douglas

Jim Douglas wrote:
I guess now is the time to formally request that QNX6 (AKA QNX Neutrino) be adopted as a supported platform for version 1.34.
Am I to take it that a lack of objection implies acceptance? Just to expand -

* The relevant qcc-*-tools.jam files are already in HEAD for BBv1.
* I am working on a qcc.jam file for BBv2.
* I have been running nightly QNX regression tests since before the release of version 1.33.1 and I am in the process of setting up a dedicated test machine so that I can run both HEAD and development branch tests every 24 hours.
* Boost.config recognises QNX 6/Neutrino.
* Most of the code now has QNX6 conditional fixes where necessary.
* There remains a handful of test failures where we still have to identify the cause, but that can be achieved before the release of 1.34.
* I will prepare a QNX page and a qcc tools page for the website.
* I will prepare a list of known test failures for explicit-failures-markup.xml.
* Someone else needs to add the appropriate links from the relevant pages to my new pages.

I am acting as the main point of contact, but there are other interested parties contributing as well. Given that I do not have CVS access, I need to know who to contact for the website and test failure material. So is it welcome aboard, or don't ring us - we'll ring you? Jim

On Jan 28, 2006, at 7:57 AM, Jim Douglas wrote:
Jim Douglas wrote:
I guess now is the time to formally request that QNX6 (AKA QNX Neutrino) be adopted as a supported platform for version 1.34.
Am I to take it that a lack of objection implies acceptance?
Thomas Witt (our 1.34 release manager) has the final say. The only contention I see is that we have one group of Boosters calling for deprecation of GCC < 3 and another that wants to adopt QNX as a supported platform. Of course, QNX has a GCC 2.95.3 compiler as an option. Are you proposing to adopt QNX with GCC 2.95.3, GCC 3.3.x, or both? You've done a wonderful job making Boost portable to QNX, so I can't see any objection to including support for the GCC 3.3.x compiler; GCC 2.95.3 might be a harder sell. Doug

Douglas Gregor wrote:
The only contention I see is that we have one group of Boosters calling for deprecation of GCC < 3 and another that wants to adopt QNX as a supported platform. Of course, QNX has a GCC 2.95.3 compiler as an option.
Are you proposing to adopt QNX with GCC 2.95.3, GCC 3.3.x or both?
Support for 2.95.3 is a very low priority for me right now, and if it were dropped from Boost I don't think there would be many (if any) disappointed QNX programmers out there. My plan for Boost 1.34 would be to make a good job of gcc 3.3.5 support. I will continue to monitor the current debate over gcc 2.95.3, and maybe when my dedicated test machine is online I will try some runs with gcc 2.95.3 just to see how bad it really is :-) Jim

Jim Douglas wrote:
Jim Douglas wrote:
I guess now is the time to formally request that QNX6 (AKA QNX Neutrino) be adopted as a supported platform for version 1.34.
Am I to take it that a lack of objection implies acceptance?
Just to expand -
* The relevant qcc-*-tools.jam files are already in HEAD for BBv1.
* I am working on a qcc.jam file for BBv2.
* I have been running nightly QNX regression tests since before the release of version 1.33.1 and I am in the process of setting up a dedicated test machine so that I can run both HEAD and development branch tests every 24 hours.
* Boost.config recognises QNX 6/Neutrino.
* Most of the code now has QNX6 conditional fixes where necessary.
* There remains a handful of test failures where we still have to identify the cause, but that can be achieved before the release of 1.34.
* I will prepare a QNX page and a qcc tools page for the website.
* I will prepare a list of known test failures for explicit-failures-markup.xml.
* Someone else needs to add the appropriate links from the relevant pages to my new pages.
I am acting as the main point of contact but there are other interested parties contributing as well. Given that I do not have CVS access I need to know who to contact for the website and test failure material.
So is it welcome aboard, or don't ring us - we'll ring you?
Sorry for not responding earlier (I was googling QNX ;-)), but it certainly is welcome aboard. Your efforts in supporting Boost on QNX are very much appreciated. As far as gcc 2.95.3 support goes, see Doug's post. Thanks! Thomas
Jim
-- Thomas Witt witt@acm.org

"Thomas Witt" <witt@acm.org> wrote in message
with 1.33.0 now five months old it is time to get the 1.34.0 release going. Well to be honest it's probably more than time. In order to improve our release process we are planning to adopt a staged release process that is modelled partly after the gcc release process. By and large it can be seen as a formalization of things already done in past releases. A description of the different stages will be given at the end of this mail.
But let's get the most important issue out of the way first: the schedule. I think that we have enough features in CVS already to make 1.34 worthwhile. For that reason the main focus for 1.34 will be a release within reasonable time. For that reason I am going to aim for a feature freeze (Stage 3) in two weeks from now.
Is registering a library's types with typeof going to be considered a new feature? Is somebody planning to add typeof support to a library in 1.34? Regards, Arkadiy

Arkadiy Vertleyb wrote:
Is somebody planning to add the typeof support to a library in 1.34?
Yes, xpressive has typeof support in main CVS. boost/xpressive/xpressive_typeof.hpp. -- Eric Niebler Boost Consulting www.boost-consulting.com

Arkadiy Vertleyb wrote:
Is registering a library's types with typeof going to be considered a new feature? Is somebody planning to add typeof support to a library in 1.34?
Yep. Spirit should be doable. Of course I still think we should first agree on a consistent scheme for adding Typeof support to the libraries. We have talked enough about the pros and cons of the different possibilities IMO, and we have a winner among the possibilities considered. So if there is no one with more/different/better ideas, would it be possible to turn our top candidate (typeof support in subfolders next to the component headers) into some guidelines to be added to the Typeof docs? Would the work needed to make xpressive use this scheme be acceptable for Eric (or is there any volunteer for porting)? Regards, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote:
Arkadiy Vertleyb wrote:
Is registering a library's types with typeof going to be considered a new feature? Is somebody planning to add typeof support to a library in 1.34?
Yep. Spirit should be doable.
Of course I still think we should first agree on a consistent scheme for adding Typeof support to the libraries. We have talked enough about the pros and cons of the different possibilities IMO, and we have a winner among the possibilities considered. So if there is no one with more/different/better ideas, would it be possible to turn our top candidate (typeof support in subfolders next to the component headers) into some guidelines to be added to the Typeof docs?
I didn't have the impression this was a clear winner though, was it?
Would the work needed to make XPressive use this scheme be acceptable for Eric (or is there any volunteer for porting)?
I don't think we have Eric's opinion about the scheme... Eric, have you seen the discussion about consistent type registration? Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
Is registering a library's types with typeof going to be considered a new feature? Is somebody planning to add typeof support to a library in 1.34?
Yep. Spirit should be doable.
Of course I still think we should first agree on a consistent scheme for adding Typeof support to the libraries. We have talked enough about the pros and cons of the different possibilities IMO, and we have a winner among the possibilities considered. So if there is no one with more/different/better ideas, would it be possible to turn our top candidate (typeof support in subfolders next to the component headers) into some guidelines to be added to the Typeof docs?
I didn't have the impression this was a clear winner though, was it?
(BTW, we're referring to this [ http://tinyurl.com/c6qye ] thread.) Well, maybe not /that/ clear regarding all the details. Still -- considering that the "include directory suggestion" would require using #include with weird relative paths (which at least you and I felt uncomfortable with), that the winner (or not) is aesthetically quite appealing, that there is (at least some) existing code, and that all the other alternatives we considered had significant drawbacks, it's clear enough IMO.
Would the work needed to make XPressive use this scheme be acceptable for Eric (or is there any volunteer for porting)?
I don't think we have Eric's opinion about the scheme... Eric, have you seen the discussion about consistent type registration?
Right. That would be great (and excuse me in case my question reads as too provocative -- I meant to imply that -first of all- Eric's opinion on all that stuff is very welcome, of course ;-) )! Regards, Tobias

Arkadiy Vertleyb wrote:
I don't think we have Eric's opinion about the scheme... Eric, have you seen the discussion about consistent type registration?
I skimmed it and it seems over-engineered to me. IMO, the libraries that will want typeof support are those which have complicated intermediate types -- the expression template libraries. And for those libraries, you'll need to register all the types that could show up in the resulting expression. Those can all go in one file. Done. I don't see a compelling reason to make it more complicated. What is the argument for separate files? Compile times? Has anyone measured? -- Eric Niebler Boost Consulting www.boost-consulting.com
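To make the single-file approach Eric describes concrete, a registration header might look roughly like the fragment below. The macro names and the registration-group include are Boost.Typeof's documented interface; the my_lib names are invented for illustration and are not from any real library.

```cpp
// my_lib/typeof.hpp -- hypothetical single registration file collecting
// every type that can appear in one library's expression templates.
// The my_lib::* identifiers are made up; the macros are Boost.Typeof's.
#include <boost/typeof/typeof.hpp>
#include BOOST_TYPEOF_INCREMENT_REGISTRATION_GROUP()

BOOST_TYPEOF_REGISTER_TYPE(my_lib::tag::plus)        // a plain type
BOOST_TYPEOF_REGISTER_TEMPLATE(my_lib::terminal, 1)  // terminal<T>
BOOST_TYPEOF_REGISTER_TEMPLATE(my_lib::expr, 2)      // expr<Tag, Args>
```

A user who wants typeof support for the library then includes this one header and is done, which is exactly the "all in one file" simplicity being argued for.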

"Eric Niebler" <eric@boost-consulting.com> wrote
Arkadiy Vertleyb wrote:
I don't think we have Eric's opinion about the scheme... Eric, have you seen the discussion about consistent type registration?
I skimmed it and it seems over-engineered to me. IMO, the libraries that will want typeof support are those which have complicated intermediate types -- the expression template libraries. And for those libraries, you'll need to register all the types that could show up in the resulting expression. Those can all go in one file. Done. I don't see a compelling reason to make it more complicated.
I think it depends. The approach you are using relies on forward declarations, and those will not work well with default parameters. Consider the following:

template<class T, class Pred = std::less<T>, class A = std::allocator<T> >
class set;

BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 1) // error
BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 2) // error
BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 3) // OK

In this case you can only use the third form (as opposed to all three if the template were defined instead of forward-declared), and so a type such as set<int> will be encoded with five integral values instead of two, and set<set<int> > with 17 instead of 3. And I think, if BOOST_MPL_LIMIT_VECTOR_SIZE=50, mpl::vector<int, char> would be encoded with 51 integral values instead of 3. I think, if you have default template parameters and want typeof to take advantage of them, you have to include instead of forward declare, and in this case the approach with one registration file per header is the more natural (although not the only) option. The alternative would be to use the include guards defined in your headers to decide what to register. Regards, Arkadiy

Arkadiy Vertleyb wrote:
template<class T, class Pred = std::less<T>, class A = std::allocator<T> > class set;
BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 1) // error BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 2) // error BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 3) // OK
Would it be possible to allow Typeof registration of partially specialized templates? Ideally, this feature would allow us to a) register the templates partially specialized with their default template arguments (and solve the problem described above):

REGISTER_SPEC((typename T), (std::set< T )( std::less<T> )( std::allocator<T> > ))
REGISTER_SPEC((typename T)(typename Pred), (std::set< T )( Pred )( std::allocator<T> > ))
// ^^^ NOTE: not a proposal for an interface -- for illustration only

b) exploit redundancy to eliminate some of the integers needed to encode the type:

REGISTER_SPEC((typename T), (std::pair< T )( T ))
// for complex-to-encode types, we reduce the complexity by about 50%

and c) register templated composite types of expressions common to an expression template framework, e.g.

REGISTER_SPEC((class P1)(class P2)(typename A), (spirit::sequence< spirit::action<spirit::difference<P1 )( P2> )( A> )( P2> )
// for Spirit expressions of the following kind:
//
// (parser1 - parser2)[action] >> parser2

Regards, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
template<class T, class Pred = std::less<T>, class A = std::allocator<T>
class set;
BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 1) // error BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 2) // error BOOST_TYPEOF_REGISTER_TEMPLATE(std::set, 3) // OK
Would it be possible to allow Typeof registration of partially specialized templates?
It seems that it would be very hard to achieve any decent syntax :-( For example, how would we handle commas _inside_ the parameters?

REG_SPEC((class T), (A<)(vector<T*, allocator<T*> >))
//                              ^^^^^^^^^^

Breaking at every comma and placing every part into a sequence element just to pass in an unstructured string? Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org>
Would it be possible to allow Typeof registration of partially specialized templates?
It seems that it would be very hard to achieve any decent syntax :-(
Without thinking about the macro interface for a moment it would be technically possible, wouldn't it?
Breaking at every comma and placing every part into the sequence element just to pass in an unstructured string?
Right, I just escaped every comma with ")(" and put parentheses about the whole thing. Another (maybe more readable) approach could be to use a PP-array instead of a sequence, or a comma count parameter. Regards, Tobias

Tobias Schwinger <tschwinger@neoscientists.org> writes:
Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org>
Would it be possible to allow Typeof registration of partially specialized templates?
It seems that it would be very hard to achieve any decent syntax :-(
Without thinking about the macro interface for a moment it would be technically possible, wouldn't it?
Breaking at every comma and placing every part into the sequence element just to pass in an unstructured string?
Right, I just escaped every comma with ")(" and put parentheses about the whole thing. Another (maybe more readable) approach could be to use a PP-array instead of a sequence, or a comma count parameter.
I don't know if it helps, but it's sometimes possible to work around the comma issue by using a syntax with extra parens on the outside. Then you can form a function (pointer) type and take it from there. int (foo<bar, baz>) HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
I don't know if it helps, but it's sometimes possible to work around the comma issue by using a syntax with extra parens on the outside. Then you can form a function (pointer) type and take it from there.
int (foo<bar, baz>)
Nice! I believe that should do the trick (although I'm not quite sure yet). Thanks, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org>
Would it be possible to allow Typeof registration of partially
specialized
templates?
It seems that it would be very hard to achieve any decent syntax :-(
Without thinking about the macro interface for a moment it would be technically possible, wouldn't it?
Yes, I think it would.
Breaking at every comma and placing every part into the sequence element just to pass in an unstructured string?
Right, I just escaped every comma with ")(" and put parentheses about the whole thing. Another (maybe more readable) approach could be to use a PP-array instead of a sequence, or a comma count parameter.
Right now I am skeptical about the possibility to implement anything readable, but you are welcome to convince me otherwise ;-) Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org>
Would it be possible to allow Typeof registration of partially specialized templates?
It seems that it would be very hard to achieve any decent syntax :-(
Without thinking about the macro interface for a moment it would be technically possible, wouldn't it?
Yes, I think it would.
It's hot ;-).
Right now I am skeptical about the possibility to implement anything readable, but you are welcome to convince me otherwise ;-)
Well, after reading Dave's post I believe it might be possible to allow

REG_SPEC((typename T),(std::set<T,std::less<T>,std::allocator<T> >))

(with some extra work, that is -- by specializing for a function with a special return type). Here are some more straightforward versions which do not seem that hard to read to me (although they involve counting commas):

REG_SPEC((typename T),2,(std::set<T,std::less<T>,std::allocator<T> >))
//                    ^--- comma count

or

REG_SPEC((typename T),3,(std::set<T,std::less<T>,std::allocator<T> >))
//                    ^--- tuple arity

or

REG_SPEC((typename T),(3,(std::set<T,std::less<T>,std::allocator<T> >)) )
//                     \--- pp-array used as a string

Anything among them that works with your taste?

<by the way> Our list correspondence is often hard to read because our clients seem to disagree on where to break lines. I set the line width down to 80 characters (which I figure should be acceptable -- it used to be 82 for quotes plus code) but the problem seems to persist. Is there anything that you can do about it, perhaps? </by the way>

Regards, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org>
Would it be possible to allow Typeof registration of partially specialized templates?
It seems that it would be very hard to achieve any decent syntax :-(
Without thinking about the macro interface for a moment it would be technically possible, wouldn't it?
Yes, I think it would.
It's hot ;-).
Right now I am skeptical about the possibility to implement anything readable, but you are welcome to convince me otherwise ;-)
Well, after reading Dave's post I believe it might be possible to allow
REG_SPEC((typename T),(std::set<T,std::less<T>,std::allocator<T> >))
Considering my previous post... we could probably make it:

REG_SPEC((typename),(std::set<P0,std::less<P0>,std::allocator<P0> >))

where P0 stands for "the first parameter". We already use this technique in dependent template parameters. Then (typename) is free to use for other purposes. The above syntax is the most attractive, but I think there will be a problem specializing on a type that was calculated by means of template metaprogramming...
(with some extra work that is -- by specializing for a function with a
special
return type).
Here are some more straightforward versions which do not seem that hard to read to me (although they involve counting commas):
REG_SPEC((typename T),2,(std::set<T,std::less<T>,std::allocator<T> >)) // ^--- comma count
or
REG_SPEC((typename T),3,(std::set<T,std::less<T>,std::allocator<T> >)) // ^--- tuple arity
or
REG_SPEC((typename T),(3,(std::set<T,std::less<T>,std::allocator<T> >)) ) // \--- pp-array used as a string
Anything that works with your taste among it?
All of them are much better than using ")(" :-)
<by the way> Our list correspondence is often hard to read because our clients seem to disagree on where to break lines. I set the line width down to 80 characters (which I figure should be acceptable -- it used to be 82 for quotes plus code) but the problem seems to persist. Is there anything that you can do about it, perhaps? </by the way>
I also switched to 80 (was 50) -- let's see if it works... Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Well, after reading Dave's post I believe it might be possible to allow
REG_SPEC((typename T),(std::set<T,std::less<T>,std::allocator<T> >))
Considering my previous post... we could probably make it:
REG_SPEC((typename),(std::set<P0,std::less<P0>,std::allocator<P0> >))
Where P0 stands for "the first parameter". We already use this techinique in dependent template parameters. Then (typename) is free to use for other purposes.
The above syntax is most attractive, but I think there will be a problem specializing on a type that was calculated by the means of template metaprogramming...
That's not exactly what I meant. It's possible to

#define REG_SPEC(a,b) \
  ... struct my_traits< a_private_return_type b > { ... }

and then use "my_traits< a_private_return_type(T) >". However, we can't apply this technique globally, because not all types are valid function parameter types (this is what I meant with "extra work" in the sentence below).
(with some extra work that is -- by specializing for a function with a special return type).
[... code]
Anything that works with your taste among it?
All of them are much better than using ")(" :-)
So what? IIRC my post contained a disclaimer that I was trying to communicate functionality and not designing a pretty user interface :-).
[... line breaks in conversation]
I also switched to 80 (was 50) -- let's see if it works...
Obviously not so really. As another experiment I told my client not to break lines at all (if my posts remain readable it should be a solution). Regards, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
That's not exactly what I meant. It's possible to
#define REG_SPEC(a,b) \ ... struct my_traits< a_private_return_type b > { ... }
Do you mean: ... struct my_traits< a_private_return_type(b) > { ... } ?
And then use "my_traits< a_private_return_type(T) >".
However, we can't apply this technique globally because not all types are valid function parameter types (this is what I meant with "extra work" in the sentence below).
Which types are not valid? void?
I also switched to 80 (was 50) -- let's see if it works...
Obviously not so really. As another experiment I told my client not to break lines at all (if my posts remain readable it should be a solution).
I don't have such option. Trying maximum (132). Regards, Arkadiy

"Arkadiy Vertleyb" <vertleyb@hotmail.com> wrote
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
That's not exactly what I meant. It's possible to
#define REG_SPEC(a,b) \ ... struct my_traits< a_private_return_type b > { ... }
Do you mean: ... struct my_traits< a_private_return_type(b) > { ... } ?
Please disregard this -- I see what you mean now... Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
That's not exactly what I meant. It's possible to
#define REG_SPEC(a,b) \ ... struct my_traits< a_private_return_type b > { ... }
Do you mean: ... struct my_traits< a_private_return_type(b) > { ... } ?
No, the original should be OK because input for 'b' must be parenthesized to allow any number of commas. For the expansion it means ... struct my_traits< a_private_return_type (b-without-parentheses) > ... , though.
And then use "my_traits< a_private_return_type(T) >". However, we can't apply this technique globally because not all types are valid function parameter types
Which types are not valid? void?
Well, is this code legal?

#include <boost/mpl/assert.hpp>
#include <boost/type_traits/is_same.hpp>

template<typename R, typename T> struct func1
{
    typedef R type(T);
};

BOOST_MPL_ASSERT(( boost::is_same< func1<void,void>::type, void() > ));

MSVC thinks it is, but GCC and Comeau disagree -- I'm not sure which one is right (according to 8.3.5-9). Fact is, it means trouble in practice. Completely illegal would be cv-void.
[... experiments with line breaking options of news clients]
I don't have such option. Trying maximum (132).
Me not either - just entered 0 ;-) (Thunderbird). Regards, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
[discussing approach with the function type...]
So, if we decided to implement this approach, the following would need to be done:

1) change encode_type to modify the type before forwarding it to encode_type_impl, such as (pseudo-code):

if (void)
    typedef my_return_type(*)(my_void) type;
else if (const void)
    typedef my_return_type(*)(my_const_void) type;
...
else
    typedef my_return_type(*)(T) type;

(using a function pointer would probably be more portable)

2) change all specializations of encode_type_impl to specialize on my_return_type(*)(T) instead of T, and to restore the special types;

3) implement REGISTER_SPEC in terms of a parenthesized parameter.

Do I understand this correctly? As far as wrapping is concerned, Outlook Express forces me to enter a value between 30 and 132 :-( Regards, Arkadiy

"Arkadiy Vertleyb" <vertleyb@hotmail.com> wrote
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
[discussing approach with the function type...]
OTOH, I just thought of something really trivial... If one wants to pass arbitrary text to a macro, he can just wrap it into another macro, can't he?

#define DEF_SPEC(Name, Spec)\
    template<>\
    class encode<Name< Spec > >\
    {};

DEF_SPEC(set, P0)

#define SPEC P0, less<P0>, allocator<P0>
DEF_SPEC(set, SPEC)

results in:

template<> class encode<set< P0 > > {};
template<> class encode<set< P0, less<P0>, allocator<P0> > > {};

? Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
[discussing approach with the function type...]
So, if we decided to implement this approach, the following would need to be done:
1) change encode_type to modify the type before forwarding it to encode_type_impl, such as (pseudo-code):
if (void) typedef my_return_type(*)(my_void) type; else if (const void) typedef my_return_type(*)(my_const_void) type; ... else typedef my_return_type(*)(T) type;
(using function pointer would probably be more portable)
2) change all specializations of encode_type_impl to specialize on my_return_type(*)(T) instead of T, and to restore the special types;
3) implement REGISTER_SPEC in terms of parenthesized parameter.
Do I understand this correctly?
Yes, I think so. But the REGISTER_SPEC macro in itself is not enough, I'm afraid. Yes, it theoretically solves the "forward declaration problem", but it can be very unpractical; let's therefore take a look at mpl::vector:

template<typename T0, typename T1, ... // ^^ forward declaration

REGISTER_TYPE(boost::mpl::vector<>)
REGISTER_SPEC((typename), boost::mpl::vector, (boost::mpl::vector<P1, boost::mpl::na, boost::mpl::na ...
REGISTER_SPEC((typename)(typename), boost::mpl::vector, (boost::mpl::vector<P1, P2, boost::mpl::na ...
// ...

Not exactly user-friendly, is it? While REGISTER_SPEC is definitely a nice feature, it would be great to have another macro for the special case illustrated by the example above, which

- forward declares a template, and
- registers all possible default argument specializations.

The interface would ideally look similar to a forward declaration with default arguments. BTW, here our "comma problem" pops up again (for the default arguments), but "Dave's function trick" can be applied again (and this time we get away without rather intrusive changes to the Typeof implementation):

// given
template<typename T> struct strip_special_func { typedef T type; };
template<typename T> struct strip_special_func< ret_type(T) > { typedef T type; };

// we can define a partial specialization like this one
template<typename T> struct a_template
    < T, typename strip_special_func< my_type expr1 >::type
       , typename strip_special_func< my_type expr2 > { ...

OK. That's it for now.
As far as wrapping is concerned, Outlook Express forces me to enter a value between 30 and 132 :-(
Good to know that the extra memory (and start-up time) for Thunderbird isn't entirely wasted ;-). Regards, Tobias

Hmm... I had the impression that it's impossible to register default-parameter versions of forward-declared templates. But this impression was caused by my experience with STL registration, where all default parameters are provided in the class definitions, and I was obviously not able to change this. It looks possible, however, to specify them in the declarations (verified with vc71, gcc, and comeau online), so the following code compiles:

---------------------------------------------
#define BOOST_TYPEOF_COMPLIANT
#include <boost/typeof/typeof.hpp>

template<class T, class U = T*, class V = U*>
struct x;

#include BOOST_TYPEOF_INCREMENT_REGISTRATION_GROUP()

BOOST_TYPEOF_REGISTER_TEMPLATE(x, 1)
BOOST_TYPEOF_REGISTER_TEMPLATE(x, 2)
BOOST_TYPEOF_REGISTER_TEMPLATE(x, 3)

template<class T, class U, class V>
struct x {};

typedef BOOST_TYPEOF(x<int>()) type1;
typedef BOOST_TYPEOF((x<int, short>())) type2;
typedef BOOST_TYPEOF((x<int, short, char>())) type3;

int main() {}
---------------------------------------------

If this code is portable, I would say it definitely seems preferable for default-parameter registrations to anything we discussed so far. Regards, Arkadiy

Arkadiy Vertleyb wrote:
template<class T, class U = T*, class V = U*> struct x;
<snip>
template<class T, class U, class V> struct x {};
Bingo. Yes, this is legal, and as a general rule, I put all my default template parameters in forward declarations this way. -- Eric Niebler Boost Consulting www.boost-consulting.com

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
REGISTER_SPEC((typename T), (std::set< T )( std::less<T> )( std::allocator<T> > ))
REGISTER_SPEC((typename T)(typename Pred), (std::set< T )( Pred )( std::allocator<T> > ))
// ^^^ NOTE: not a proposal for an interface -- for illustration only
Also, please note that currently, when (class) or (typename) is passed to the macro, it's not just used as is. Instead, it is used to create an "object" that contains certain knowledge of how a type template parameter differs from an integral template parameter and a template template parameter. So it doesn't seem possible to just replace it with (typename T). We would probably have to pass them separately: (typename)(T). Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
REGISTER_SPEC((typename T), (std::set< T )( std::less<T> )( std::allocator<T> > ))
REGISTER_SPEC((typename T)(typename Pred), (std::set< T )( Pred )( std::allocator<T> > ))
// ^^^ NOTE: not a proposal for an interface -- for illustration only
Also, please note that currently, when (class) or (typename) is passed to the macro, it's not just used as is. Instead, it is used to create an "object" that contains certain knowledge of how a type template parameter differs from an integral template parameter and a template template parameter. So it doesn't seem possible to just replace it with (typename T). We would probably have to pass them separately: (typename)(T).
Right - I forgot about the non-type template parameters. ((typename,T0))((typename,T1)) would be another option. Maybe there's some trick to make (typename,T0)(typename,T1) work (which would be the best looking notation IMO). Regards, Tobias

"Thomas Witt" <witt@acm.org> wrote in message news:18B9A8D3-39E0-4838-AEAD-2B50DEE41880@acm.org...
Hi,
with 1.33.0 now five months old it is time to get the 1.34.0 release going. Well to be honest it's probably more than time. In order to improve our release process we are planning to adopt a staged release process that is modelled partly after the gcc release process. By and large it can be seen as a formalization of things already done in past releases. A description of the different stages will be given at the end of this mail.
Is there a summary list of major changes and library additions from 1.33.1 to 1.34.0? Thanks, Michael Goldshteyn

| -----Original Message----- | From: boost-bounces@lists.boost.org | [mailto:boost-bounces@lists.boost.org] On Behalf Of Michael Goldshteyn | Sent: 25 January 2006 15:15 | To: boost@lists.boost.org | Subject: Re: [boost] 1.34 Release Plan | | "Thomas Witt" <witt@acm.org> wrote in message | news:18B9A8D3-39E0-4838-AEAD-2B50DEE41880@acm.org... | > | > with 1.33.0 now five months old it is time to get the 1.34.0 release going. With the useful and significant major 1.34 revisions to the essential Boost Test, a central feature of Boost Quality, I trust that we can include a major update to its documentation, which is now well behind Boost standards for quality, clarity and reliability. As well as a major improvement in the usability of the documentation, it would also benefit from some serious editorial work, even if done by someone fairly mindless, and without the disbenefit of having written the package. Paul -- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204 mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html

| > with 1.33.0 now five months old it is time to get the 1.34.0 release going.
With the useful and significant major 1.34 revisions to the essential Boost Test, a central feature of Boost Quality, I trust that we can include a major update to its documentation, which is now well behind Boost standards for quality, clarity and reliability.
As well as a major improvement in the usability of the documentation, it would also benefit from some serious editorial work, even if done by someone fairly mindless, and without the disbenefit of having written the package.
Paul
I am working slowly (;() on the docs update. Any help would be greatly appreciated. Anyone interested in the job please contact me. Gennadiy

| -----Original Message----- | From: boost-bounces@lists.boost.org | [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental | Sent: 25 January 2006 19:00 | To: boost@lists.boost.org | Subject: Re: [boost] 1.34 Release Plan | > With the useful and significant major 1.34 revisions to the | essential | > Boost | > Test, a central feature of Boost Quality, I trust that we | can include a | > major update to its documentation which is is now well behind Boost | > standards for quality, clarity and reliability. | > | > As well as a major improvement in useability of the | documentation, it | > would | > also benefit from some serious editorial work, even if done | by someone | > fairly mindless, and without the disbenefit of having | written the package. | > | > Paul | | I am working slowly (;() on the docs update. Any help would | be greatly | appreciated. Anyone interested in the job please contact me. | | Gennadiy Suitably qualified, I will of course be pleased to put my money where my mouth is ;-) Paul -- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204 mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html

Michael Goldshteyn wrote:
"Thomas Witt" <witt@acm.org> wrote in message news:18B9A8D3-39E0-4838-AEAD-2B50DEE41880@acm.org...
Hi,
with 1.33.0 now five months old it is time to get the 1.34.0 release going. Well to be honest it's probably more than time. In order to improve our release process we are planning to adopt a staged release process that is modelled partly after the gcc release process. By and large it can be seen as a formalization of things already done in past releases. A description of the different stages will be given at the end of this mail.
Is there a summary list of major changes and library additions from 1.33.1 to 1.34.0?
Well finally index.htm in CVS will be the place to go to. But it seems incomplete currently. Thomas -- Thomas Witt witt@acm.org

I understand that as boost gets bigger it gets harder to make new releases. There is more to do, more to keep in sync, and more ripple effect from changes. I see this more structured and formal release procedure as an attempt to deal with this. It seems to aim for a closer coordination of development and release efforts to avoid difficulties associated with development of software libraries which have varying degrees of coupling. I sympathize with the effort. I would summarize the general idea as getting developers more "in sync" through different stages of development and release. I see this as getting more difficult as boost gets bigger and more diverse. The development of different libraries can't really be "scheduled". So I think this is the wrong approach and I think it will make things harder rather than easier. I would propose a different way of going about this. Here it is. For the moment, exclude those packages related to building and testing from consideration - I'll address those at the end.

a) developers build and test their libraries on their local machine against the latest release.

b) when they think they are ready, they open a branch on the main trunk and check in their "next release". So the cvs system will have a branch for every developer's "next release".

c) when checked in, the developer will queue up a test request on the new branch.

d) when the test results show that the "next library release" is "ready", the branch is merged with the main trunk. The complete regression test is then run. There "should" be no new errors, but there might be cases where the library developer has inadvertently changed some undocumented behavior, which might show up as an error. Also there might be cases where the other parts of boost detect an error which has not been detected in the library test suite. These should be only a few cases.

e) Once the tests at d) above pass, the boost library version is incremented and a new release is emitted.
f) At this point, working developers would update their local development copies of the CVS tree.

In this system:

a) Each library could be developed at its own pace and schedule. There would be no rush to "make the next release", with all the problems that that causes.

b) The latest release of Boost would be more up to date. That is, one wouldn't be in the situation of needing the next enhancement, knowing it's already in there, but not being able to use it because the next release isn't ready yet.

c) Testing resources would be used much more efficiently. Basically, more testing would be focused on areas that are likely to have problems and less on things that "almost for sure" should pass. Testing would be done only where and when changes have been made.

Code for building and testing are special cases and would have to be considered individually. Certainly Jamfile/toolset updates could work with the above system. Movement to V2 wouldn't fit here, and how to do this would have to be considered. Boost.Test enhancement would require some consideration as well. In general, code-breaking changes would have to be considered specially. Basically, I don't believe the current Boost development model is scalable, and I think the procedure has to change to recognise this. So in my view the current proposal goes in exactly the wrong direction. Note that this is starting to occur by necessity. Multi-index has a "beta" version compatible with 1.33 that one can download. I've been testing changes to serialization on my machine against 1.33. I haven't checked them into the HEAD. So now I know what problems are mine and what problems are associated with changes in compiler versions, stlport versions etc. My next step is to make a few more changes, run some more tests locally (basically a later version of stlport) and upload a package similar to Joaquin's.
This would make the changes (mostly bug fixes and documentation updates) available to those who need them now, and also give those who want to help me out a way to test my changes without waiting for the next release, when it will be too late to fix anything. Robert Ramey

Robert Ramey wrote:
I understand that as boost gets bigger it gets harder to make new releases. There is more to do, more to keep in sync, and more ripple effect from changes. I see this more structured and formal release procedure as an attempt to deal with this. It seems to aim for a closer coordination of development and release efforts to avoid difficulties associated with development of software libraries which have varying degrees of coupling.
That is certainly part of the idea.
Basically, I don't believe the current Boost development model is scalable and I think the procedure has to change to recognise this. So in my view the current proposal goes in exactly the wrong direction.
I can see your point. I would like to postpone this discussion until after 1.34 because I strongly believe that it is just too late for 1.34 to make major changes.
Note that this is starting to occur by necessity. Multi-index has a "beta" version compatible with 1.33 that one can download. I've been testing changes to serialization on my machine against 1.33. I haven't checked them into the HEAD. So now I know what problems are mine and what problems are associated with changes in compiler versions, stlport versions etc. My next step is to make a few more changes, run some more tests locally (basically a later version of stlport) and upload a package similar to Joaquin's. This would make the changes (mostly bug fixes and documentation updates) available to those who need them now, and also give those who want to help me out a way to test my changes without waiting for the next release, when it will be too late to fix anything.
I have doubts that we have the infrastructure in place that would be needed for this. This might be different once we switched to subversion. Thanks Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
I have doubts that we have the infrastructure in place that would be needed for this. This might be different once we switched to subversion.
I'm quite sure we don't have the infrastructure in place. My motivation is to start a discussion that might result in movement to such an infrastructure. The last cycle lasted from July (initial projected release date) to November (release of 1.33.1) and was quite arduous. Each one is harder than the last. My point is that the process has to be re-thought. Robert Ramey
Thanks
Thomas

At 22:30 2006-01-25, Robert Ramey wrote:
Thomas Witt wrote:
I have doubts that we have the infrastructure in place that would be needed for this. This might be different once we switched to subversion.
I'm quite sure we don't have the infrastructure in place. My motivation is to start a discussion that might result in movement to such an infrastructure. The last cycle lasted from July (initial projected release date)
I thought someone originally said release on April 15. btw, I notice we're planning on (again <sigh>) putting all the release stuff on a "tagged branch" then _manually_ changing all the regression test machines to test on the "release branch" with all the chaos that attends. I won't argue this time, I'll simply summarize. Leave the release stuff on the HEAD branch and tell developers who want to mess around with stuff that's NOT going to be in 1.34 to simply make their OWN branch and go work on it.
to November (release of 1.33.1) and was quite arduous. Each one is harder than the last. My point is that the process has to be re-thought.
Robert Ramey
Thanks
Thomas
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 22:30 2006-01-25, Robert Ramey wrote:
Thomas Witt wrote:
I have doubts that we have the infrastructure in place that would be needed for this. This might be different once we have switched to Subversion.
I'm quite sure we don't have the infrastructure in place. My motivation is to start a discussion that might result in movement toward such an infrastructure. The last cycle lasted from July (initial projected release date)
I thought someone originally said release on April 15. BTW, I notice we're planning on (again <sigh>) putting all the release stuff on a "tagged branch" and then _manually_ changing all the regression test machines to test on the "release branch", with all the chaos that attends.
How much effort is that _manual_ change? More than a couple <sigh>s worth?
I won't argue this time; I'll simply summarize: leave the release stuff on the HEAD branch and tell developers who want to mess around with stuff that's NOT going to be in 1.34 to simply make their OWN branch and go work on it.
I agree with Victor; keeping the trunk in a releasable state is the right thing to do. On the other hand, any time we do a point release, testers will have to be operating on a branch, so I don't see how this is going to help _them_ very much. -- Dave Abrahams Boost Consulting www.boost-consulting.com

At 09:07 2006-01-26, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 22:30 2006-01-25, Robert Ramey wrote:
Thomas Witt wrote:
I have doubts that we have the infrastructure in place that would be needed for this. This might be different once we have switched to Subversion.
I'm quite sure we don't have the infrastructure in place. My motivation is to start a discussion that might result in movement toward such an infrastructure. The last cycle lasted from July (initial projected release date)
I thought someone originally said release on April 15. BTW, I notice we're planning on (again <sigh>) putting all the release stuff on a "tagged branch" and then _manually_ changing all the regression test machines to test on the "release branch", with all the chaos that attends.
How much effort is that _manual_ change? More than a couple <sigh>s worth?
I won't argue this time; I'll simply summarize: leave the release stuff on the HEAD branch and tell developers who want to mess around with stuff that's NOT going to be in 1.34 to simply make their OWN branch and go work on it.
I agree with Victor; keeping the trunk in a releasable state is the right thing to do. On the other hand, any time we do a point release, testers will have to be operating on a branch, so I don't see how this is going to help _them_ very much.
nope, the testers will ONLY test on the HEAD... if someone thinks they need to fix something, they can branch and test to their heart's content. When they think it's "fixed" they merge it back.
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 09:07 2006-01-26, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
I thought someone originally said release on April 15. BTW, I notice we're planning on (again <sigh>) putting all the release stuff on a "tagged branch" and then _manually_ changing all the regression test machines to test on the "release branch", with all the chaos that attends.
How much effort is that _manual_ change? More than a couple <sigh>s worth?
??
I won't argue this time; I'll simply summarize: leave the release stuff on the HEAD branch and tell developers who want to mess around with stuff that's NOT going to be in 1.34 to simply make their OWN branch and go work on it.
I agree with Victor; keeping the trunk in a releasable state is the right thing to do. On the other hand, any time we do a point release, testers will have to be operating on a branch, so I don't see how this is going to help _them_ very much.
nope, the testers will ONLY test on the HEAD... if someone thinks they need to fix something, they can branch and test to their heart's content. When they think it's "fixed" they merge it back.
I'm sorry, I don't understand how this can work. Let's say we have version 1.54 on the HEAD. Then people check in some perfectly good changes for 1.55 to the HEAD. Then we discover we need to release 1.54.1. To make changes starting with the 1.54 state, we need a branch. How will we test 1.54.1 before releasing it? Are you suggesting 1.54.1 would go out without being tested as a complete unit? -- Dave Abrahams Boost Consulting www.boost-consulting.com

"Robert Ramey" <ramey@rrsd.com> writes:
I understand that as boost gets bigger it gets harder to make new releases. There is more to do, more to keep in sync, and more ripple effect from changes. I see this more structured and formal release procedure as an attempt to deal with this. It seems to aim for closer coordination of development and release efforts to avoid difficulties associated with development of software libraries which have varying degrees of coupling. I sympathize with the effort.
I would summarize the general idea as getting developers more "in sync" through different stages of development and release. I see this as getting more difficult as boost gets bigger and more diverse. The development of different libraries can't really be "scheduled".
So I think this is the wrong approach and I think it will make things harder rather than easier.
I would propose a different way of going about this. Here it is.
For the moment, exclude those packages related to building and testing from consideration - I'll address those at the end.
a) developers build and test their libraries on their local machine against the latest release.
b) when they think they are ready, they open a branch on the main trunk and check in their "next release". So the cvs system will have a branch for every developer's "next release"
I'm not opposed to the general direction you're going in (in fact I'll cautiously say I like it ;->), but there absolutely needs to be room in the plan for developers to use the CVS/SVN repository for checkins during the development of their "next release." Making large changes on a local machine without small intermediate checkins is a recipe for disaster. You may be comfortable with it, but it's generally not considered to be good practice and I wouldn't want to consign the other Boost developers to working that way. It has to be possible to make small changes, test, and check in repeatedly. One way to do it is to create a branch for the next release of a library, then create a branch of that branch for any intermediate checkins that are not intended to result in a "releasable state." But then, what is the trunk for? Instead of using a branch and a meta-branch, why not the trunk and a branch?
c) when checked in - the developer will queue up a test request on the new branch.
What happens with that test request? You don't expect the testers to test each branch separately against the last stable release of the rest of Boost, do you? This would be a lot simpler if developers would just keep the library's "next releasable state" on the trunk. Of course, that's what I try to do with my libraries, and I think most of the other developers do, too. The only reasons the trunk has seemed to get destabilized in the past are that some people seemed to be allergic to using branches (so they'd do intermediate development on the trunk), and some others did not view failures on the trunk as something to be avoided (see "test-driven development").
d) when the test results show that the "next library release" is "ready", the branch is merged with the main trunk. The complete regression test is then run. There "should" be no new errors, but there might be cases where the library developer has inadvertently changed some undocumented behavior, which might show up as an error. Also there might be cases where other parts of boost detect an error which has not been detected in the library test suite. These should be only a few cases.
e) Once the tests at d) above pass, the boost library version is incremented and a new release is emitted.
f) At this point, working developers would update their local development copies of the CVS tree.
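Robert's steps (a)-(f) amount to a branch-per-library workflow. As a minimal sketch in modern version-control terms (git stands in for CVS/SVN purely for illustration, since the thread predates any actual migration; the repository, branch, tag, and file names are all invented):

```python
import os, subprocess, tempfile

# Hypothetical demo of steps (a)-(f): the trunk holds the last stable
# release, each library's "next release" lives on its own branch, and
# the branch is merged back once its tests pass.
env = {**os.environ,
       "GIT_AUTHOR_NAME": "Demo", "GIT_AUTHOR_EMAIL": "demo@example.org",
       "GIT_COMMITTER_NAME": "Demo", "GIT_COMMITTER_EMAIL": "demo@example.org"}
repo = tempfile.mkdtemp()

def git(*args):
    subprocess.run(["git", "-C", repo, *args], check=True, env=env)

def write(text):
    with open(os.path.join(repo, "serialization.hpp"), "w") as f:
        f.write(text)

git("init", "-q", ".")
git("checkout", "-q", "-b", "trunk")            # (a) trunk = last stable release
write("serialization 1.33 (stable)\n")
git("add", "serialization.hpp")
git("commit", "-q", "-m", "1.33 stable state")

git("checkout", "-q", "-b", "serialization-next")  # (b) per-library branch
write("serialization 1.34 candidate\n")
git("commit", "-q", "-am", "next-release changes")

git("checkout", "-q", "trunk")                  # (c)-(d) branch tests pass ->
git("merge", "-q", "serialization-next")        #   merge, rerun full regression
git("tag", "boost-1.34-demo")                   # (e) tag the new release

print(open(os.path.join(repo, "serialization.hpp")).read().strip())
# → serialization 1.34 candidate
```

Step (f), developers updating their local trees, is just a checkout of the newly tagged trunk state.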
In this system:
a) Each library could be developed at its own pace and schedule. There would be no rush to "make the next release", with all the problems that causes.
Having thought about the way you're proposing to use branches, I don't see much difference from the situation we have today. I certainly don't see any difference in the (dis)incentives to "rush features to make the next release." IMO the greatest thing we can do to make that less of an issue is to shorten the release cycle, so that if you miss this release you know there will be another in a couple of months. That was something the moderators (those who were present) discussed at Mont Tremblant, and Thomas' upcoming release was part of that plan.
b) The latest release of Boost would be more up to date. That is, one wouldn't be in the situation of needing the next enhancement, knowing it's already in there, but not being able to use it because the next release isn't ready yet.
I don't see how your suggestion would make any difference in this respect either. The fact that, under your plan, a library author doesn't put his work *anywhere* in the repository until he's sure it's ready for release isn't going to keep the release any closer to the latest state of development. It just keeps the latest state of development on private machines. Of course, Boost is a community, so this state will migrate to wikis, the vault, and personal websites as library authors collaborate and try to get users to exercise some of their new ideas. If you check your latest releasable state into the trunk then at least it gets tested and you can find out what the problems are. Again, the best thing we can do to make sure that the latest release of Boost is "more up to date" with the latest enhancements from library authors is to shorten the release cycle.
c) Testing resources would be used much more efficiently. Basically more testing would be focused on areas that are likely to have problems and less testing is focused on things that "almost for sure" should pass. Testing would be done only where and when changes have been made.
How so? What about this plan would cause this effect? I don't see it.
Code for building and testing is a special case and would have to be considered individually. Certainly Jamfile/toolset updates could work with the above system. Movement to V2 wouldn't fit here; how to handle it would have to be considered separately. Boost.Test enhancements would require some consideration as well. In general, code-breaking changes would have to be considered specially.
Basically, I don't believe the current Boost development model is scalable, and I think the procedure has to change to recognise this.
That has been recognized, and we are changing it. Boost is getting to be a big ship; changing its direction has significant cost, so there needs to be an obvious and plausible cause-and-effect relationship between any changes to procedure and the results we want to achieve. I agree with all your goals here, but I don't see any way in which the specific suggestions can improve things.
So in my view the current proposal goes in exactly the wrong direction.
What proposal? Thomas is the release manager for 1.34. What he posted was his release plan and schedule; it's his prerogative. The release procedure takes a sensible approach from a Boost-wide point-of-view, by which I mean that it is specified at a level that can be managed by a release manager -- and it is the only approach that I know of that's been shown to work for large project releases with many independent but related development efforts. Before choosing a different one, I would want to know that it had been shown to work in large projects like Boost. It may be that we need a library-level procedure to be developed, so that developers understand how to use branches and the testing system to accomplish the goals we share.
Note that this is starting to occur by necessity. Multi-index has a "beta" version compatible with 1.33 that one can download. I've been testing changes to serialization on my machine against 1.33. I haven't checked them into the HEAD. So now I know which problems are mine and which are associated with changes in compiler versions, stlport versions etc. My next step is to make a few more changes, run some more tests locally (basically a later version of stlport) and upload a package similar to Joaquin's. This would make the changes (mostly bug fixes and documentation updates) available to those who need them now and also give those who want to help me out a way to test my changes without waiting for the next release, when it will be too late to fix anything.
You could have done this much more reliably (and, IMO, more easily) by doing the development on a branch off the latest release state and merging each change into the head as you became satisfied with it. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
I understand that as boost gets bigger it gets harder to make new releases. There is more to do, more to keep in sync, and more ripple effect from changes. I see this more structured and formal release procedure as an attempt to deal with this. It seems to aim for closer coordination of development and release efforts to avoid difficulties associated with development of software libraries which have varying degrees of coupling. I sympathize with the effort.
I would summarize the general idea as getting developers more "in sync" through different stages of development and release. I see this as getting more difficult as boost gets bigger and more diverse. The development of different libraries can't really be "scheduled".
So I think this is the wrong approach and I think it will make things harder rather than easier.
I would propose a different way of going about this. Here it is.
For the moment, exclude those packages related to building and testing from consideration - I'll address those at the end.
a) developers build and test their libraries on their local machine against the latest release.
b) when they think they are ready, they open a branch on the main trunk and check in their "next release". So the cvs system will have a branch for every developer's "next release"
I'm not opposed to the general direction you're going in (in fact I'll cautiously say I like it ;->), but there absolutely needs to be room in the plan for developers to use the CVS/SVN repository for checkins during the development of their "next release."
Hmmm - I thought that's what I said.
Making large changes on a local machine without small intermediate checkins is a recipe for disaster. You may be comfortable with it, but it's generally not considered to be good practice and I wouldn't want to consign the other Boost developers to working that way. It has to be possible to make small changes, test, and check in repeatedly.
At what point a developer checks in his changes to the branch is up to him. The important thing is that they be checked into a branch separate from the main trunk.
One way to do it is to create a branch for the next release of a library, then create a branch of that branch for any intermediate checkins that are not intended to result in a "releasable state." But then, what is the trunk for? Instead of using a branch and a meta-branch, why not the trunk and a branch?
I think you're agreeing with me here - but it's hard to tell.
c) when checked in - the developer will queue up a test request on the new branch.
What happens with that test request? You don't expect the testers to test each branch separately against the last stable release of the rest of Boost, do you?
Ahhh - here is the key. In fact that's exactly what I expect, and I think it will reduce the resources required for testing considerably. Let me explain. When a test is queued, only the tests for THAT library need be run at this point. In boost terms that means the Jamfile in the test directory for THAT library is run. And it's only run when the developer queues up a request. The test is run on the branch for library X - inheriting from the trunk the last "stable" release for modules not in library X.
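The scheme described here - run only the changed library's own test suite, and only when its author asks - could be sketched roughly as follows. The queue, the `run_tests` command, and the `--overlay-branch` flag are all invented for illustration; this is not how the actual regression system worked:

```python
from collections import deque

# Hypothetical on-demand test queue: a developer queues a request naming
# their library and its "next release" branch; the runner then exercises
# only that library's test directory, inheriting everything else from
# the last stable trunk state.
queue = deque()

def request_test(library, branch):
    queue.append((library, branch))

def next_test_command():
    """Build the (invented) command line for the next queued request."""
    library, branch = queue.popleft()
    # Only libs/<library>/test is run; the rest of the tree comes from
    # the stable trunk, with <branch> overlaid for this one library.
    return ["run_tests", f"libs/{library}/test", f"--overlay-branch={branch}"]

request_test("serialization", "serialization-next")
print(next_test_command())
# → ['run_tests', 'libs/serialization/test', '--overlay-branch=serialization-next']
```

The point of the sketch is only that tests are driven by explicit requests against a single library's branch, rather than by continuous full-tree runs.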
This would be a lot simpler if developers would just keep the library's "next releasable state" on the trunk. Of course, that's what I try to do with my libraries, and I think most of the other developers do, too. The only reasons the trunk has seemed to get destabilized in the past are that some people seemed to be allergic to using branches (so they'd do intermediate development on the trunk), and some others did not view failures on the trunk as something to be avoided (see "test-driven development").
Sounds like you're agreeing with me again - but again - I can't be sure.
Having thought about the way you're proposing to use branches, I don't see much difference from the situation we have today.
Configuration of cvs/svn wouldn't change at all. I don't think it's currently customary to create a branch for each library's next group of changes.
I certainly don't see any difference in the (dis)incentives to "rush features to make the next release." IMO the greatest thing we can do to make that less of an issue is to shorten the release cycle, so that if you miss this release you know there will be another in a couple of months.
My proposal would shorten the release cycle to every time there is a significant upgrade to any library. It wouldn't be driven by a schedule based on any target date. When a developer is "done" - that is, when all tests pass - the release is tagged. Occasionally, some screwup will slip through - then that release is just retagged as a "turkey" and not distributed. Hopefully that won't occur any more frequently than it already does.
b) The latest release of Boost would be more up to date. That is, one wouldn't be in the situation of needing the next enhancement, knowing it's already in there, but not being able to use it because the next release isn't ready yet.
I don't see how your suggestion would make any difference in this respect either. The fact that, under your plan, a library author doesn't put his work *anywhere* in the repository until he's sure it's ready for release isn't going to keep the release any closer to the latest state of development. It just keeps the latest state of development on private machines. Of course, Boost is a community, so this state will migrate to wikis, the vault, and personal websites as library authors collaborate and try to get users to exercise some of their new ideas. If you check your latest releasable state into the trunk then at least it gets tested and you can find out what the problems are.
Perhaps my mention of private machines was misleading. The thrust of the proposal is using a separate branch for development of each library and periodically merging them. From this standpoint, whether it's on a private machine or a branch isn't relevant. It has to be checked in (to its own branch) before it can be tested on all platforms. The point at which a developer decides to do this is in his hands - just as it is now.
Again, the best thing we can do to make sure that the latest release of Boost is "more up to date" with the latest enhancements from library authors is to shorten the release cycle.
This presupposes a release cycle defined by target dates. I'm proposing a whole different viewpoint: release every time a library is significantly enhanced and retested. This would mean that very little time would pass (a couple of days?) between the time that a branch is merged and all tests pass, and the time that it is available to users.
c) Testing resources would be used much more efficiently. Basically more testing would be focused on areas that are likely to have problems and less testing is focused on things that "almost for sure" should pass. Testing would be done only where and when changes have been made.
How so? What about this plan would cause this effect? I don't see it.
a) tests would only be run when requested
b) only on libraries which have made changes
Basically, I don't believe the current Boost development model is scalable, and I think the procedure has to change to recognise this.
That has been recognized, and we are changing it. Boost is getting to be a big ship; changing its direction has significant cost, so there needs to be an obvious and plausible cause-and-effect relationship between any changes to procedure and the results we want to achieve.
Agreed.
but I don't see any way in which the specific suggestions can improve things.
OK
So in my view the current proposal goes in exactly the wrong direction.
What proposal?
Thomas is the release manager for 1.34. What he posted was his release plan and schedule; it's his prerogative.
No argument here - I'm just making a suggestion. I see his release plan and schedule as an attempt to address the difficulties of the past by making the schedule more detailed and carefully controlled. It is my opinion that this does not address the source of the difficulties and in fact will only make the process even more difficult as more "asynchronous events" from more developers appear. We'll see.
The release procedure takes a sensible approach from a Boost-wide point-of-view, by which I mean that it is specified at a level that can be managed by a release manager -- and it is the only approach that I know of that's been shown to work for large project releases with many independent but related development efforts.
Before choosing a different one, I would want to know that it had been shown to work in large projects like Boost.
I don't think the changes I propose are as radical as they seem. develop on branch, test on branch, merge to trunk, retest, release.
Note that this is starting to occur by necessity. Multi-index has a "beta" version compatible with 1.33 that one can download.
I've been testing changes to serialization on my machine against 1.33. I haven't checked them into the HEAD. So now I know which problems are mine and which are associated with changes in compiler versions, stlport versions etc. My next step is to make a few more changes, run some more tests locally (basically a later version of stlport) and upload a package similar to Joaquin's. This would make the changes (mostly bug fixes and documentation updates) available to those who need them now and also give those who want to help me out a way to test my changes without waiting for the next release, when it will be too late to fix anything.
You could have done this much more reliably (and, IMO, more easily) by doing the development on a branch off the latest release state and merging each change into the head as you became satisfied with it.
Whether it's on a cvs branch or code on a private machine isn't really the issue. I could check it into a separate branch, but it wouldn't change anything here. The issue is testing against the last stable release branch. If I check it into a branch, it won't get tested on other platforms - so there isn't much benefit to checking it in until I've got it all tested here.

To summarize, my proposal differs from the current approach in that:
a) a branch is open for the next release of each library
b) tests are against the latest release platform
c) tests are run on specific libraries when and only when requested
d) releases occur after every significant enhancement. They are not tied to any specific target date.

There is only one problem that I've not really addressed here. It's a big problem, and it's fundamental to boost and other large systems. Were it not for this, I don't think the above would be controversial. Here it is. In the course of development, developers sometimes change an API, either on purpose or inadvertently. This breaks code which depends upon the library. For some "top level" libraries (e.g. serialization) this isn't a big problem, as hardly anything else in boost depends upon them. For "low level" ones (e.g. mpl, preprocessor, etc.) the rest of boost constitutes the "de facto test suite". Errors in one library show up as test failures in dependent libraries. So after merging, the current test setup has to be run again and results for ALL the boost libraries need to be checked by the developer. Currently this is impractical, and the test results display isn't set up for it. (Note this just happened to me today: a change in config showed up as a test failure in the serialization library. I'm sure that the author of the change in config never found out about this.) In theory this would suggest that the low-level libraries need more tests. In practice it's always going to occur in varying degrees.

If it occurs a lot, it suggests that there is too much coupling between libraries or that some libraries are changing "too fast" to be relied upon. So as a practical matter, a few "complete boost library tests" similar to the current one would have to be performed before the latest merge is tagged for release. But this would be done a lot less frequently than it is now. Robert Ramey
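The coupling problem described here - a change in a low-level library such as Config surfacing as a failure in Serialization - suggests choosing which libraries to retest by walking the reverse dependency graph. A small sketch of that selection (the dependency map below is invented and far from complete):

```python
# Given a (hypothetical) map of each library to the libraries it depends
# on, compute everything that must be retested when one library changes:
# the library itself plus all of its transitive dependents.
DEPENDS_ON = {
    "serialization": {"config", "mpl", "type_traits"},
    "multi_index": {"config", "mpl"},
    "mpl": {"config", "preprocessor"},
    "type_traits": {"config"},
    "config": set(),
    "preprocessor": set(),
}

def libraries_to_retest(changed):
    # Invert the graph: who depends directly on each library?
    dependents = {lib: set() for lib in DEPENDS_ON}
    for lib, deps in DEPENDS_ON.items():
        for d in deps:
            dependents[d].add(lib)
    # Walk the transitive dependents of the changed library.
    todo, result = [changed], {changed}
    while todo:
        for lib in dependents[todo.pop()]:
            if lib not in result:
                result.add(lib)
                todo.append(lib)
    return result

# A change to config forces retesting nearly everything; a change to a
# top-level library like serialization retests only itself.
print(sorted(libraries_to_retest("config")))
print(sorted(libraries_to_retest("serialization")))
```

This captures Robert's observation directly: low-level libraries carry a de facto test suite made of everyone above them, while top-level libraries can be retested in isolation.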

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
One way to do it is to create a branch for the next release of a library, then create a branch of that branch for any intermediate checkins that are not intended to result in a "releasable state." But then, what is the trunk for? Instead of using a branch and a meta-branch, why not the trunk and a branch?
I think you're agreeing with me here - but it's hard to tell.
I didn't think I was agreeing with you, but maybe I misunderstood you.
c) when checked in - the developer will queue up a test request on the new branch.
What happens with that test request? You don't expect the testers to test each branch separately against the last stable release of the rest of Boost, do you?
Ahhh - here is the key. In fact that's exactly what I expect, and I think it will reduce the resources required for testing considerably. Let me explain.
When a test is queued, only the tests for THAT library need be run at this point. In boost terms that means the Jamfile in the test directory for THAT library is run. And it's only run when the developer queues up a request. The test is run on the branch for library X - inheriting from the trunk the last "stable" release for modules not in library X.
That would be great to have. That said, it can only reduce the amount of testing if we stop running regular tests against the trunk. I don't think that would be wise. Also there's the issue that some authors may develop features that depend on unreleased features of other libraries that are slated for the next release. This gets a little complicated.
This would be a lot simpler if developers would just keep the library's "next releasable state" on the trunk. Of course, that's what I try to do with my libraries, and I think most of the other developers do, too. The only reasons the trunk has seemed to get destabilized in the past are that some people seemed to be allergic to using branches (so they'd do intermediate development on the trunk), and some others did not view failures on the trunk as something to be avoided (see "test-driven development").
Sounds like you're agreeing with me again - but again - I can't be sure.
Me neither. I thought you were suggesting that developers only ever check in new code on a branch when it was in a "next release" state, but it sounds like I was wrong about that.
Having thought about the way you're proposing to use branches, I don't see much difference from the situation we have today.
Configuration of cvs/svn wouldn't change at all. I don't think it's currently customary to create a branch for each library's next group of changes.
I think that's because many developers find branching to be burdensome. I don't do it when my changes are small and can be tested thoroughly on my local machine, but I probably should, because it would give me a chance to test on a few platforms before committing to the trunk.
I certainly don't see any difference in the (dis)incentives to "rush features to make the next release." IMO the greatest thing we can do to make that less of an issue is to shorten the release cycle, so that if you miss this release you know there will be another in a couple of months.
My proposal would shorten the release cycle to every time there is a significant upgrade to any library.
...whoa, no way. That would put release managers completely at the mercy of the library authors.
It wouldn't be driven by a schedule based on any target date. When a developer is "done" - that is, when all tests
Which tests? Of the trunk?
pass - the release
Of his particular library?
is tagged.
And then what?
Occasionally, some screwup will slip through - then that release is just retagged as a "turkey" and not distributed. Hopefully that won't occur any more frequently than it already does.
b) The latest release of Boost would be more up to date. That is, one wouldn't be in the situation of needing the next enhancement, knowing it's already in there, but not being able to use it because the next release isn't ready yet.
I don't see how your suggestion would make any difference in this respect either. The fact that, under your plan, a library author doesn't put his work *anywhere* in the repository until he's sure it's ready for release isn't going to keep the release any closer to the latest state of development. It just keeps the latest state of development on private machines. Of course, Boost is a community, so this state will migrate to wikis, the vault, and personal websites as library authors collaborate and try to get users to exercise some of their new ideas. If you check your latest releasable state into the trunk then at least it gets tested and you can find out what the problems are.
Perhaps my mention of private machines was misleading.
Perhaps.
The thrust of the proposal is using a separate branch for development of each library and periodically merging them.
Generally a good idea; that's what we did with the parameter library recently, for example. I don't know that it's a good idea to do everything on a branch, though. If I find a workaround for a Borland bug, doesn't it make sense to check it into the trunk right away? I think it would be a waste of resources to add a branch testing request for something like that.
From this standpoint, whether it's on a private machine or a branch isn't relevant. It has to be checked in (to its own branch) before it can be tested on all platforms. The point at which a developer decides to do this is in his hands - just as it is now.
Again, the best thing we can do to make sure that the latest release of Boost is "more up to date" with the latest enhancements from library authors is to shorten the release cycle.
This presupposes a release cycle defined by target dates.
?? A release cycle is the time between release dates, by definition.
I'm proposing a whole different viewpoint: release every time a library is significantly enhanced and retested. This would mean that very little time would pass (a couple of days?) between the time that a branch is merged and all tests pass, and the time that it is available to users.
Maybe you should try managing a few releases before you suggest that. There is currently a great deal of administrative overhead for the release manager (http://www.boost.org/more/release_mgr_checklist.html). If something could be done to reduce that overhead to almost nothing, it would be great, but until then I don't think we can afford to do it.
c) Testing resources would be used much more efficiently. Basically, more testing would be focused on areas that are likely to have problems and less on things that "almost for sure" should pass. Testing would be done only where and when changes have been made.
How so? What about this plan would cause this effect? I don't see it.
a) tests would only be run when requested b) only on libraries which have made changes
What about the trunk?
Basically, I don't believe the current Boost development model is scalable, and I think the procedure has to change to recognise this.
That has been recognized, and we are changing it. Boost is getting to be a big ship; changing its direction has significant cost, so there needs to be an obvious and plausible cause-and-effect relationship between any changes to procedure and the results we want to achieve.
Agreed.
but I don't see any way in which the specific suggestions can improve things.
OK
Now that you've explained, I see potential in your ideas, but they're not quite fully-baked yet.
I see his release plan and schedule as an attempt to address the difficulties of the past by making the schedule more detailed and carefully controlled.
It's not a break with the past. He's replicating the release plan used by Doug for the last two releases (and that plan is just a refinement of what we did before). It said as much in his posting. So we're trying to achieve a little consistency. Getting the process to be more systematic and regular is crucial to reducing the pain, IMO.
It is my opinion that this does not address the source of the difficulties and in fact will only make the process even more difficult as more "asynchronous events" from more developers appear. We'll see.
I don't see how it can make anything worse. The past two releases have been an improvement over those that came before.
The release procedure takes a sensible approach from a Boost-wide point-of-view, by which I mean that it is specified at a level that can be managed by a release manager -- and it is the only approach that I know of that's been shown to work for large project releases with many independent but related development efforts.
Before choosing a different one, I would want to know that it had been shown to work in large projects like Boost.
I don't think the changes I propose are as radical as they seem. develop on branch, test on branch, merge to trunk, retest, release.
You don't plan to have a point at which the trunk is frozen? Maybe once we switch to SVN that will be practical. It certainly isn't practical for CVS.
You could have done this much more reliably (and, IMO, more easily) by doing the development on a branch off the latest release state and merging each one into the head as you become satisfied with it.
Whether it's on a CVS branch or code on a private machine isn't really the issue. I could check it into a separate branch but it wouldn't change anything here. The issue is testing against the last stable release branch. If I check it into a branch - it won't get tested on other platforms
It can be so tested, by request.
so there isn't much benefit to checking it in until I've got it all tested here.
To summarize, my proposal differs from the current approach
a) a branch is open for the next release for each library b) tests are against the latest release platform
As mentioned, doesn't work when you depend on new changes in another library. You could, however, include that library's changes in your branch. That will be lots easier once we have SVN.
c) tests are run on specific libraries when and only when requested
Nice, but increases test load unless we drop trunk testing.
d) releases occur after every significant enhancement.
Too much overhead. Need (probably massive) infrastructure upgrade to make that feasible.
They are not tied to any specific target date.
Again, probably not feasible until we're using SVN.
There is only one problem that I've not really addressed here. It's a big problem and it's fundamental to boost and other large systems. Were it not for this I don't think the above would be controversial. Here it is.
In the course of development, developers sometimes change an API, either on purpose or inadvertently. This breaks code which depends upon the library. For some "top level" libraries (e.g. serialization) this isn't a big problem, as hardly anything else in boost depends upon it. For "low level" ones (e.g. mpl, preprocessor, etc.) the rest of boost constitutes the "de facto test suite". Errors in one library show up as test failures in dependent libraries. So after merging, the current test setup has to be run again and the results for ALL the boost libraries need to be checked by the developer. Currently this is impractical and the test results display isn't set up for this.
?? It seems to me that we have exactly this today. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Again, the best thing we can do to make sure that the latest release of Boost is "more up to date" with the latest enhancements from library authors is to shorten the release cycle.
This presupposes a release cycle defined by target dates.
?? A release cycle is the time between release dates, by definition.
Hmm - this is a source of confusion then. My view is that a release is a set of modules with some minimal level (zero?) of bugs, quirks or other anomalies that we are comfortable recommending that users use. Whether this "release" occurs on some pre-specified target date or when some new feature is added isn't relevant to my usage of the word "release". So the phrase "release date" wouldn't come up in my comments. "Up to date" is different. By "up to date" I mean that the "release" has all the features that we have added in and feel comfortable encouraging users to use. So my system is by definition always "up to date".
I'm proposing a whole different viewpoint. Release every time the library is significantly enhanced and retested. This would mean that very little time would pass (a couple of days?) between the time that a branch is merged and all tests pass, and the time that it is available to users.
Maybe you should try managing a few releases before you suggest that. There is currently a great deal of administrative overhead for the release manager (http://www.boost.org/more/release_mgr_checklist.html). If something could be done to reduce that overhead to almost nothing, it would be great, but until then I don't think we can afford to do it.
I didn't know about this and just now gave it a cursory examination. It is helpful in understanding what's involved. However, there are a couple of observations that occur to me.
a) It seems out of date - it's dated 04 Feb 2004 - two years ago.
b) It seems to suggest that most of the "tools" are being moved in the direction of C++, while in practice it seems that we're depending more on other stuff like python, quickbook and xml, etc.
c) "Pre-release activities" is really what I'd like to see replaced with continuous maintenance of the trunk in a "releasable" state. Only the occasional merge from branches to trunk, and the associated tests on the trunk of the whole library, would put the trunk in a temporarily non-releasable state.
d) "CVS Branch for Release" I would like to see disappear.
e) "CVS Release" would stay the same.
f) "Distribution" would hopefully be more automated. This looks like a huge pain to do by hand.
Now that you've explained, I see potential in your ideas, but they're not quite fully-baked yet.
I won't dispute that. It's not been my intention to propose a detailed replacement for the current procedure. I really just want to point out that the current difficulties are the result of the way we use CVS and the way we do our testing, and that things won't get much better until these things are re-examined.
I see his release plan and schedule as an attempt to address the difficulties of the past by making the schedule more detailed and carefully controlled.
It's not a break with the past. He's replicating the release plan used by Doug for the last two releases (and that plan is just a refinement of what we did before). It said as much in his posting. So we're trying to achieve a little consistency.
I know it's not a break with the past. I see it as a refinement of the traditional approach - that's why I think it's on the wrong track.
Getting the process to be more systematic and regular is crucial to reducing the pain, IMO.
It is my opinion that this does not address the source of the difficulties and in fact will only make the process even more difficult as more "asynchronous events" from more developers appear. We'll see.
I don't see how it can make anything worse. The past two releases have been an improvement over those that came before.
The "chaos" (substitute your own word here) is not in any way due to the lack of a systematic approach or to any lack of effort on the part of those who manage the release. The "chaos" occurs because of things outside the manager's control: new bugs found at the last minute, new bugs introduced at the last minute, new compilers dropped in, etc. The fact that boost is getting bigger means there are more of these asynchronous events, and working harder will help only for a little while longer.
The release procedure takes a sensible approach from a Boost-wide point-of-view, by which I mean that it is specified at a level that can be managed by a release manager -- and it is the only approach that I know of that's been shown to work for large project releases with many independent but related development efforts.
Before choosing a different one, I would want to know that it had been shown to work in large projects like Boost.
Well - there isn't anything quite like boost - that's why we're here. I would guess the closest analogy that comes to mind is a large website. Every time a new "branch" is coded it is tested and folded in without any formal "release".
I don't think the changes I propose are as radical as they seem. develop on branch, test on branch, merge to trunk, retest, release.
You don't plan to have a point at which the trunk is frozen? Maybe once we switch to SVN that will be practical. It certainly isn't practical for CVS.
My plan is that the trunk would be maintained so as to be in one of the following states:
a) "current release" - tagged with the release number. When a library is merged into the trunk, move to state b.
b) "newly merged branch - subject to test". When a test on the trunk fails, move to state c; when all tests pass, move back to state a.
c) "broken - one or more boost-wide tests fail". When the boost-wide tests pass, the state moves from c back to a.
I don't see anything in CVS which would prevent this. In fact it seems to me that this is the way these systems are intended to be used.
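The trunk life cycle described above can be sketched as a small state machine. This is a hypothetical Python illustration of the scheme, not part of any Boost tooling; the `Trunk` class and its method names are invented for the sketch, and the state names are taken from the description:

```python
# Hypothetical sketch of the proposed trunk life cycle.
# States and transitions follow the a/b/c description above.

class Trunk:
    RELEASED = "current release"            # state a: tagged with a release number
    UNDER_TEST = "newly merged - testing"   # state b: a branch was just merged
    BROKEN = "broken"                       # state c: one or more boost-wide tests fail

    def __init__(self):
        self.state = Trunk.RELEASED

    def merge_branch(self):
        # Merging is only allowed from the released state, which is what
        # forces library merges to happen one at a time.
        assert self.state == Trunk.RELEASED, "wait for the trunk to be releasable"
        self.state = Trunk.UNDER_TEST

    def record_test_result(self, all_passed):
        if all_passed:
            self.state = Trunk.RELEASED     # b -> a, or c -> a after fixes
        else:
            self.state = Trunk.BROKEN       # b -> c

trunk = Trunk()
trunk.merge_branch()
trunk.record_test_result(all_passed=False)
assert trunk.state == Trunk.BROKEN
trunk.record_test_result(all_passed=True)   # fixes applied, tests rerun
assert trunk.state == Trunk.RELEASED
```

The key invariant is in `merge_branch`: because a merge is only permitted while the trunk is in the "current release" state, merges are serialized and the last released state is always well defined.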
To summarize, my proposal differs from current approach
a) a branch is open for the next release for each library b) tests are against the latest release platform
As mentioned, doesn't work when you depend on new changes in another library. You could, however, include that library's changes in your branch. That will be lots easier once we have SVN.
In my scenario the problem doesn't come up. My library X depends on your library Y. I'm on my branch and you are on yours. So my library X can't even be built until yours is checked into the trunk. As soon as your library Y is merged to the trunk and the trunk is retested (i.e. "released") it's now available. Of course in some special case I might check out your branch to my machine, but that would be a pretty unusual case. In any case my library X wouldn't be merged into the trunk until your library Y has passed and the trunk moves back to the "current release" state. If the libraries are being developed together then they should be handled as one - i.e. merged into the trunk as a pair. But generally I would expect this to be unusual. By shortening the time between when a library passes its tests and is "released" (made available to users), much of these issues disappear.
c) tests are run on specific libraries when and only when requested
Nice, but increases test load unless we drop trunk testing.
trunk testing would still occur, but only after the merging of a new branch and only while the trunk is in a "temporarily broken" state. I would anticipate much less consumption of resources than now.
d) releases occur after every significant enhancement.
Too much overhead. Need (probably massive) infrastructure upgrade to make that feasible.
If you're referring to the "Distribution" section of the release manager checklist, I would agree that a big upgrade would be required.
They are not tied to any specific target date.
Again, probably not feasible until we're using SVN.
Again, I don't see anything in CVS that prevents its usage in this manner. In fact, I believe that's the way it is intended to be used.
In the course of development, developers sometimes change an API, either on purpose or inadvertently. This breaks code which depends upon the library. For some "top level" libraries (e.g. serialization) this isn't a big problem, as hardly anything else in boost depends upon it. For "low level" ones (e.g. mpl, preprocessor, etc.) the rest of boost constitutes the "de facto test suite". Errors in one library show up as test failures in dependent libraries. So after merging, the current test setup has to be run again and the results for ALL the boost libraries need to be checked by the developer. Currently this is impractical and the test results display isn't set up for this.
?? It seems to me that we have exactly this today.
Here is an example. The other day a change was made in config. As a side effect of this change, a number of tests in the serialization library started to fail. The author of the original change looks at the tests in the config section and sees no problem - he isn't even aware of the failures that occur in other libraries due to this change. Meanwhile, I look at the test results and find no changes in any code which could account for the failure. So the person that made the change knows of no failures, and the person who knows about the failure knows nothing about the change. Of course if I have been making changes to my own code simultaneously I'll presume it's due to one of my changes and I'll go crazy trying to find it. So although this problem is addressed by testing the trunk, it's not a perfect solution. Of course if the change to config had been made on a branch and that branch tested, the problem in this example wouldn't have come up. So maybe it's not the best example. In general the problem will still occur, but using the approach I've described here would make the whole process much easier. Robert Ramey
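One way to close the gap in this example is dependency-aware retesting: when a low-level library changes, rerun the tests of every library whose dependency closure includes it, and route any new failures back to the author of the change. A minimal sketch in Python; the `deps` table, the function name, and the listed relationships are assumptions for illustration, not Boost's real dependency graph:

```python
# Hypothetical, hand-written dependency table: library -> direct dependencies.
deps = {
    "serialization": {"config", "mpl"},
    "spirit": {"mpl", "preprocessor"},
    "mpl": {"config", "preprocessor"},
}

def libraries_to_retest(changed, deps):
    """Return every library whose transitive dependency closure includes `changed`."""
    def closure(lib, seen=None):
        seen = seen if seen is not None else set()
        for d in deps.get(lib, ()):
            if d not in seen:
                seen.add(d)
                closure(d, seen)
        return seen
    return sorted(lib for lib in deps if changed in closure(lib))

# A change to config should trigger retests of everything built on it,
# directly or through mpl.
print(libraries_to_retest("config", deps))
```

With this kind of mapping, a change to config would flag mpl, serialization and spirit for retesting, while a change to serialization - which nothing else depends on - would flag nothing, so the failures land on the desk of the person who made the change.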

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
Again, the best thing we can do to make sure that the latest release of Boost is "more up to date" with the latest enhancements from library authors is to shorten the release cycle.
This presupposes a release cycle defined by target dates.
?? A release cycle is the time between release dates, by definition. ^^^^^
Hmm - this is a source of confusion then. My view is that a release is a set of modules with some minimal level (zero?) of bugs, quirks or other anomalies that we are comfortable recommending that users use. Whether this "release" occurs on some pre-specified target date or when some new feature is added isn't relevant to my usage of the word "release".
Mine neither. I'm not quibbling over what a release is. A release _cycle_ is the period between releases.
As mentioned, doesn't work when you depend on new changes in another library. You could, however, include that library's changes in your branch. That will be lots easier once we have SVN.
In my scenario the problem doesn't come up. My library X depends on your library Y. I'm on my branch and you are on yours. So my library X can't even be built until yours is checked into the trunk. As soon as your library Y is merged to the trunk and the trunk is retested (i.e. "released") it's now available.
You have been advocating testing library branches against the last release. If you want to change your proposal now so that library branches are being tested against the trunk, that's fine, but don't act like you've been saying that all along. (No matter how streamlined things get, we will not have a release the instant something changes on the trunk)
They are not tied to any specific target date.
Again, probably not feasible until we're using SVN.
Again, I don't see anything in CVS that prevents its usage in this manner. In fact, I believe that's the way it is intended to be used.
The lack of atomic changes to the trunk and to branches makes it very difficult to capture a point in time when everything is passing.

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
The lack of atomic changes to the trunk and to branches makes it very difficult to capture a point in time when everything is passing.
Exactly - that's the problem. My suggestion is that library changes be merged into the trunk one at a time, and no other merges are undertaken until the trunk passes all tests. At this point the trunk is tagged as a "release" and is available for users to use and for other developers to develop against. The last version of the trunk that passed all tests is defined as the "last release".

Maybe my suggestion might be better characterised as casting aside the whole concept of "release" in favor of "kaizen" - continuous incremental improvement. I'm beginning to question the whole concept of "release". It seems to have been inherited from the traditional ideas of manufacturing: design, test, and start manufacturing. Perhaps building large software systems like boost is more akin to building cathedrals in the middle ages. Add some here, add another bit there; uh oh - that old part broke under the weight of a newer part - so fix that and get back on track. After 100 years you have something far beyond what anyone could have envisioned at the beginning. That may or may not be a good analogy - but it's closer to how things really work than building and shipping a traditional software or hardware product.

Anyway, I realise that this idea does seem too radical to be seriously considered any time soon. But as time goes on, boost gets bigger, it gets harder and harder to keep everything in sync, and the idea has time to "percolate" - I think we'll be coming back to it. And besides, what I see as problems in the current method don't really affect me personally very much. I have the luxury of detachment here. So I'm happy to let it rest until the subject comes up again in the future. Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
The lack of atomic changes to the trunk and to branches makes it very difficult to capture a point in time when everything is passing.
Exactly - that's the problem.
That is solved by SVN.

At 09:24 2006-01-27, you wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
The lack of atomic changes to the trunk and to branches makes it very difficult to capture a point in time when everything is passing.
Exactly - that's the problem.
That is solved by SVN.
I wouldn't bet on it (well, capturing the instant is how SVN works), but it's irrelevant anyhow. If you only "release" what's on HEAD, it doesn't matter that you cannot sync with branches. I've been saying for a little over a decade that dropping the "state" from CVS was a mistake, and this hammers it home more than anything I've seen... but it's gone. One trusts that the Subversion folks weren't as blind.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 09:24 2006-01-27, you wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
The lack of atomic changes to the trunk and to branches makes it very difficult to capture a point in time when everything is passing.
Exactly - that's the problem.
That is solved by SVN.
I wouldn't bet on it (well, capturing the instant is how SVN works), but it's irrelevant anyhow. If you only "release" what's on HEAD, it doesn't matter that you cannot sync with branches.
I keep asking, if you only release what's on HEAD, how do you do point releases? And anyway, why this obsession with HEAD? In SVN, it's just another branch.
I've been saying for a little over a decade that dropping the "state" from CVS was a mistake and this hammers it home more than anything I've seen...but it's gone. One trusts that the Subversion folks weren't as blind.
Sorry, I don't know what you mean.

At 03:01 2006-01-30, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 09:24 2006-01-27, you wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
The lack of atomic changes to the trunk and to branches makes it very difficult to capture a point in time when everything is passing.
Exactly - that's the problem.
That is solved by SVN.
I wouldn't bet on it (well, capturing the instant is how SVN works), but it's irrelevant anyhow. If you only "release" what's on HEAD, it doesn't matter that you cannot sync with branches.
I keep asking, if you only release what's on HEAD, how do you do point releases?
what's the "meaning" of a "point" release anyhow? Release numbers (names) are a marketing concept (so the collateral material can be produced). They've _never_ had any relevance to software (other than some loose conventions which caused more problems than they were worth)
And anyway, why this obsession with HEAD? In SVN, it's just another branch.
In SVN, iirc, "numbers" are even further removed from any meaningful relationship for a "release"... but as for the "obsession" it's actually just a desire to keep people's fingers out of the regression testing process. The more you make people mess with things, the more likely you'll have an error. One of the points of automated testing is that you warp the development system so the test system doesn't have to be massaged every time something changes.
I've been saying for a little over a decade that dropping the "state" from CVS was a mistake and this hammers it home more than anything I've seen...but it's gone. One trusts that the Subversion folks weren't as blind.
Sorry, I don't know what you mean.
http://wwwipd.ira.uka.de/~tichy/ - the man who started it all, with RCS. Reading what he had in mind (and implemented) back in the beginning is instructive.

"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 03:01 2006-01-30, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
I keep asking, if you only release what's on HEAD, how do you do point releases?
what's the "meaning" of a "point" release anyhow?
The "meaning" is a release that's exactly like some other release except for the addition of bug fixes.
release numbers (names) are a marketing concept (so the collateral material can be produced). They've _never_ had any relevance to software (other than some loose conventions which caused more problems than they were worth)
Yes, numbering is irrelevant. I didn't mention numbering. Please address the question with that in mind.
And anyway, why this obsession with HEAD? In SVN, it's just another branch.
In SVN, iirc, "numbers" are even further removed from any meaningful relationship for a "release"... but as for the "obsession" it's actually just a desire to keep people's fingers out of the regression testing process. The more you make people mess with things, the more likely you'll have an error. One of the points of automated testing is that you warp the development system so the test system doesn't have to be massaged every time something changes.
We can easily automate what branch people are testing.
I've been saying for a little over a decade that dropping the "state" from CVS was a mistake and this hammers it home more than anything I've seen...but it's gone. One trusts that the Subversion folks weren't as blind.
Sorry, I don't know what you mean.
Access is forbidden.

At 10:53 2006-01-31, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 03:01 2006-01-30, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
I keep asking, if you only release what's on HEAD, how do you do point releases?
what's the "meaning" of a "point" release anyhow?
The "meaning" is a release that's exactly like some other release except for the addition of bug fixes.
release numbers (names) are a marketing concept (so the collateral material can be produced). They've _never_ had any relevance to software (other than some loose conventions which caused more problems than they were worth)
Yes, numbering is irrelevant. I didn't mention numbering. Please address the question with that in mind.
and exactly what does "point" refer to if NOT the separator between numbers? For example, 1.33.1
And anyway, why this obsession with HEAD? In SVN, it's just another branch.
In SVN, iirc, "numbers" are even further removed from any meaningful relationship for a "release"... but as for the "obsession" it's actually just a desire to keep people's fingers out of the regression testing process. The more you make people mess with things, the more likely you'll have an error. One of the points of automated testing is that you warp the development system so the test system doesn't have to be massaged every time something changes.
We can easily automate what branch people are testing.
we haven't thus far
I've been saying for a little over a decade that dropping the "state" from CVS was a mistake and this hammers it home more than anything I've seen...but it's gone. One trusts that the Subversion folks weren't as blind.
Sorry, I don't know what you mean.
Very odd - I get access denied also, but I copied/pasted it from my browser this morning.
Access is forbidden.

"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 10:53 2006-01-31, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
At 03:01 2006-01-30, David Abrahams wrote:
"Victor A. Wagner Jr." <vawjr@rudbek.com> writes:
I keep asking, if you only release what's on HEAD, how do you do point releases?
what's the "meaning" of a "point" release anyhow?
The "meaning" is a release that's exactly like some other release except for the addition of bug fixes.
release numbers (names) are a marketing concept (so the collateral material can be produced). They've _never_ had any relevance to software (other than some loose conventions which caused more problems than they were worth)
Yes, numbering is irrelevant. I didn't mention numbering. Please address the question with that in mind.
and exactly what does "point" refer to if NOT the separator between numbers? For example, 1.33.1
I'm not talking about notation, and I think that should be abundantly clear at this point. I'm going to keep calling it a point release because that is the commonly accepted name for a release that's exactly like some other release except for the addition of bug fixes. Please try to ignore the offensive implication in the terminology so I don't have to write a long sentence where only two words would do. But to humor you, let me rephrase the question: how do you make a release that's exactly like some other release except for the addition of bug fixes, when other non-bug-fix material has been checked into the HEAD?
We can easily automate what branch people are testing.
we haven't thus far
It's a lot easier than solving the point release problem.

David Abrahams wrote:
How do you make a release that's exactly like some other release except for the addition of bug fixes, when other non-bug-fix material has been checked into the HEAD?
If the HEAD is always maintained in a "releasable" state, you won't want to do such a thing. Of course you could branch from a previous version - but in practice I wouldn't expect one to want to. This idea presumes and depends upon the existence of an on-demand facility for testing on branches. This same facility could be used to test a "point release". However, a main motivation for this idea is the elimination of the need for a "point release". If the "trunk" (HEAD in the current setup) is maintained in a releasable state, any need for a "point release" would be addressed by just downloading the latest "releasable" version. The main requirements/obstacles to such a system are:
a) a decent source control system - this is no problem. Maybe some other system would be better, but CVS is plenty good enough to work here.
b) a facility for testing branches/libraries on demand. This facility would be used to test specific libraries on branches during library development, and be applied to the trunk for all libraries before the trunk is marked as "releasable". Currently we don't have anything like this.
c) automatic generation of a "release package" from a releasable trunk. Currently, we don't have this either.
Robert Ramey
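Requirement b) - a facility for testing branches/libraries on demand - could be as simple as a queue of (library, branch) requests that test runners drain, instead of continuously testing everything. A hypothetical sketch; the `TestQueue` class and its interface are invented for illustration and are not part of any existing Boost infrastructure:

```python
# Hypothetical sketch of an on-demand test queue: developers request a run
# for a specific library on a specific branch; runners pick up requests.
from collections import deque

class TestQueue:
    def __init__(self):
        self.pending = deque()   # FIFO of (library, branch) requests
        self.results = {}        # (library, branch) -> passed?

    def request(self, library, branch):
        """A developer asks for this library/branch to be tested."""
        self.pending.append((library, branch))

    def run_next(self, run_tests):
        """Pop one request, run it via `run_tests(library, branch)`, record it."""
        if not self.pending:
            return None
        library, branch = self.pending.popleft()
        passed = run_tests(library, branch)
        self.results[(library, branch)] = passed
        return (library, branch, passed)

q = TestQueue()
q.request("serialization", "ramey-dev")
# A stand-in for actually building and running the regression tests:
result = q.run_next(lambda lib, branch: True)
assert result == ("serialization", "ramey-dev", True)
```

The same mechanism would serve both uses named above: individual branch tests during development, and a full-trunk run (one request per library) before the trunk is marked "releasable".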

Robert Ramey wrote:
If the HEAD is always maintained in a "releasable" state, you won't want to do such a thing. Of course you could branch from a previous version - but in practice I wouldn't expect one to want to.
I would. Especially if you're going to stop support for old compilers, you will want a releasable branch for each of the old Boost versions that were the last to support a specific compiler. You'll also want to be able to backport bugfixes to that old version (or check in backports that other people do for you), which means that you'll want to make point releases for these old versions. I think this is a prerequisite to dropping support for compilers that still are in use, like VC6. Sebastian Redl

Sebastian Redl wrote:
Robert Ramey wrote:
If the HEAD is always maintained in a "releasable" state, you won't want to do such a thing. Of course you could branch from a previous version - but in practice I wouldn't expect one to want to.
I would. Especially if you're going to stop support for old compilers, you will want a releasable branch for each of the old Boost versions that were the last to support a specific compiler. You'll also want to be able to backport bugfixes to that old version (or check in backports that other people do for you), which means that you'll want to make point releases for these old versions. I think this is a prerequisite to dropping support for compilers that still are in use, like VC6.
OK in that case you would - but we're not doing any of that now. Robert Ramey

On Wed, 01 Feb 2006 13:50:58 -0800, Robert Ramey wrote:
Sebastian Redl wrote:
Robert Ramey wrote:
If the HEAD is always maintained in "releaseable" state you won't want to do such a thing. Of course you could branch from a previous version - but in practice I wouldn't expect one to want to.
I would. Especially if you're going to stop support for old compilers, you will want a releasable branch for each of the old Boost versions that were the last to support a specific compiler. [...minor snip...] I think this is a prerequisite to dropping support for compilers that still are in use, like VC6.
OK in that case you would - but we're not doing any of that now.
Well, how many compilers have been deprecated/stopped being supported by boost in the last few years? Certainly none of the "big" ones. So, this process hasn't been needed until now.

I agree completely with Sebastian - create support branches on the last release that supports any specific compiler. Backport *significant* bug fixes as required. Do this for as long as there is somebody who actually cares enough to organise testing/releasing of that version (and obviously it only needs to be tested/maintained for those compilers that the branch exists to support).

Personally, I see no need to deprecate a compiler for a whole release - what good does it serve? I think there is general acceptance that new libraries wouldn't *need* to work against deprecated compilers, so it is quite possible that the next release after the "deprecation release" will contain no new libraries for the compiler anyway. As long as there is a clear list on a downloads page which indicates what release to download for which compilers, that is surely sufficient... isn't it? (Oh, and I suspect that keeping a copy of the "old docs" around might well be desirable.)

phil
--
change name before "@" to "phil" for email
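[Editor's note: the "downloads page" mapping Phil describes can be sketched as a small table from compiler to the last release series whose support branch targets it. The compiler names and release numbers below are illustrative assumptions, not an actual Boost support matrix.]

```python
# Hypothetical compiler -> last-supporting-release table, as might back the
# downloads page Phil proposes. Entries are assumptions for illustration.

LAST_SUPPORTED = {
    "msvc-6.0": "1.34",     # assumed: VC6 support branch frozen at 1.34.x
    "borland-5.5": "1.33",  # assumed
}

def download_hint(compiler, current_series="1.35"):
    """Tell a user which release series to download for their compiler."""
    series = LAST_SUPPORTED.get(compiler, current_series)
    return f"{compiler}: use the latest {series}.x point release"

print(download_hint("msvc-6.0"))  # points at the frozen support branch
print(download_hint("gcc-4.1"))   # unrestricted compilers get the current series
```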

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
How do you make a release that's exactly like some other release except for the addition of bug fixes, when other non-bug-fix material has been checked into the HEAD?
If the HEAD is always maintained in "releaseable" state you won't want to do such a thing. Of course you could branch from a previous version - but in practice I wouldn't expect one to want to.
It doesn't matter what you want if your customers demand it. And they do. Even a releasable state sometimes contains changes that are not backward-compatible.
This idea presumes and depends upon the existence of an on-demand facility for testing on branches. This same facility could be used to test a "point release".
However, a main motivation for this idea is the elimination of the need for "point release". If the "trunk" (HEAD in the current setup) is maintained in a releaseable state, any need for a "point release" would be addressed by just downloading the latest "releaseable" version.
No, it wouldn't. In many organizations, code stability is important. It can be prohibitive to accept the next releasable state and make all the local adjustments that go along with it.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams <dave@boost-consulting.com> writes:
"Robert Ramey" <ramey@rrsd.com> writes:
However, a main motivation for this idea is the elimination of the need for "point release". If the "trunk" (HEAD in the current setup) is maintained in a releaseable state, any need for a "point release" would be addressed by just downloading the latest "releaseable" version.
No, it wouldn't. In many organizations, code stability is important. It can be prohibitive to accept the next releasable state and make all the local adjustments that go along with it.
Agreed. I have some code that is currently stuck with boost V1.32 because it uses boost.optional in a way that no longer works in V1.33. If I want to make use of any changes/bug fixes in the rest of boost from more recent versions, then I will either have to back-port them myself, rely on someone releasing V1.32.1, or deal with the boost.optional problems.

If HEAD is always releasable, and always 100% backwards-compatible, then you don't need point releases. If there are breaking changes, then you might need point releases to support those customers who cannot accept the cost of change.

Anthony
--
Anthony Williams
Software Developer
Just Software Solutions Ltd
http://www.justsoftwaresolutions.co.uk

Thomas Witt wrote:
Stage 2 - Intracomponent Open Development
-----------------------------------------
This stage restricts Stage 1 slightly by banning far-reaching changes to Boost. Major changes to libraries or infrastructure, or the addition of new features and libraries, can be made in this stage. However, the changes must be limited in scope and may not have far-reaching effects. For instance, the build or regression testing systems in Boost cannot be changed at this point, and Boost libraries on which other major components of Boost depend (such as MPL, Type Traits, and Config) may not have large interface changes or be fundamentally rewritten. The Release Manager has final say regarding the classification of changes as "far-reaching" or not; if you are unsure, please ask.
If we're keeping the infrastructure (Build, MPL, Type Traits, Config, others?) stable at this point, why not branch them for release earlier as a "Boost Infrastructure" package? Regards, -- João
participants (22)
- Alexander Nasonov
- AlisdairM
- Anthony Williams
- Arkadiy Vertleyb
- Dave Moore
- David Abrahams
- Douglas Gregor
- Eric Niebler
- Gennadiy Rozental
- Jeff Garland
- Jim Douglas
- João Abecasis
- Michael Goldshteyn
- Paul A Bristow
- Paul Baxter
- Phil Richards
- Rene Rivera
- Robert Ramey
- Sebastian Redl
- Thomas Witt
- Tobias Schwinger
- Victor A. Wagner Jr.