Re: [boost] Re: 1.32 release preparation

On Sun, 27 Jun 2004 19:05:45 -0700, Robert Ramey wrote:

Here is the current scenario.
a) The announcement gets posted - release 1.32 scheduled for date x.
b) Each developer decides - uh oh, I had better get my changes in before date x.
c) Of course these changes take longer,
d) and break some other things,
e) which takes longer still.

IMHO, point d) touches a weak point: there are two kinds of libraries in a system like boost. First, there are core libraries which are used by many other libraries. Changes to these libraries are potentially dangerous and must be well considered, since they can break a lot of stuff. They should be checked into CVS early, to allow time for stabilizing the base. Secondly, there are libraries which 'only' provide a special feature, like the boost graph library or the circular_buffer library. They shouldn't be expected to lead to regressions of other boost libraries. I think they could be handled differently from the core libraries.
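To make the dependency asymmetry concrete, here is a rough sketch (illustrative only; it assumes the reviewed circular_buffer ships under its proposed header <boost/circular_buffer.hpp>):

#include <boost/config.hpp>          // core: configuration macros, included by nearly everything
#include <boost/circular_buffer.hpp> // feature: has few dependents of its own

int main()
{
    // An incompatible change to boost/config.hpp can break circular_buffer
    // and most of boost with it; a change inside circular_buffer breaks
    // only its direct users.
    boost::circular_buffer<int> cb(3);
    cb.push_back(1);
    cb.push_back(2);
    cb.push_back(3);
    cb.push_back(4); // capacity reached: the oldest element (1) is overwritten
    return 0;
}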
a) The release manager or the core influential developers keep an eye on the current state of the CVS tree and tests.
b) When things look "pretty good", branch for release without warning. None of this loading my latest feature in to make the schedule.
c) The main tree goes on as usual.
d) Pending issues in the release branch are addressed by nagging the person responsible.
e) The new version is released with a number like 1.31.1, or maybe even 28 July 2004.
f) Any changes between branch and release should be just bug fixes, which can then be merged back into the development tree.
Under this system, releases would occur more frequently - on the order of every 6 weeks. If you're ready to check in your latest feature and the new release branch catches you with your pants down - well, too bad. But it's no big deal, because the next release will be in 6 weeks anyhow, which is close to what it's going to take to shake out the repercussions of your check-in anyway.

We should seriously consider such a release process. However, the CVS main tree must then always be in a nearly 'Release-Ready' state. But IMO that is a good idea anyway.
But I think it's really hard to get such a large number of people working independently on exactly the same schedule.

Yes ...
Almost all libraries are accepted subject to some changes being made. Many such changes seem simple but end up rippling through the whole library. This takes time.

This is IMHO an indication of a flaw in the current boost review process. A library should only be accepted after the list of requested changes has been made. Additionally, I think a library should be accepted only if it is compatible with the current CVS main tree.
Once a library is accepted, then one can get serious about making it pass with a larger number of compilers. This also takes time - a lot of time.

Passing compilers which weren't supported during the review process should under no circumstances delay the integration of the library into the boost main pool. Supporting a special compiler is only a nice-to-have feature. This does not mean it is not worth spending the effort to support important platforms. However, in the first iteration I just expect a library to be written in ANSI C++ and to be usable on platforms which support it.
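For instance, in-class initialization of a static integral constant is plain ANSI C++, yet vc6 rejects it; boost's BOOST_STATIC_CONSTANT macro from <boost/config.hpp> exists precisely for that. A minimal sketch (the is_small trait is made up for illustration):

#include <boost/config.hpp> // provides BOOST_STATIC_CONSTANT

// Plain ANSI C++: accepted by conforming compilers, rejected by vc6.
template<class T>
struct is_small
{
    static const bool value = sizeof(T) <= sizeof(long);
};

// The same made-up trait rewritten with the portability macro, which
// expands to the declaration above on conforming compilers and to an
// enum hack on broken ones.
template<class T>
struct is_small_portable
{
    BOOST_STATIC_CONSTANT(bool, value = (sizeof(T) <= sizeof(long)));
};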
These changes end up having a ripple effect and can result in changes like moving things between namespaces and subdirectories. While this is going on, the system is broken and really not suitable to be subject to the daily testing routine.

This is IMHO an indication of a flaw in the current review process (see above).
So it seems attractive to deal with all this stuff before one does the initial check-in to CVS. This way one can make a relatively clean start.

I second this.
With best regards,
Johannes

"Johannes Brunen" <jbrunen@datasolid.de> writes:
On Sun, 27 Jun 2004 19:05:45 -0700, Robert Ramey wrote:

Here is the current scenario.
a) The announcement gets posted - release 1.32 scheduled for date x.
b) Each developer decides - uh oh, I had better get my changes in before date x.
c) Of course these changes take longer,
d) and break some other things,
e) which takes longer still.

IMHO, point d) touches a weak point: there are two kinds of libraries in a system like boost. First, there are core libraries which are used by many other libraries. Changes to these libraries are potentially dangerous and must be well considered, since they can break a lot of stuff. They should be checked into CVS early, to allow time for stabilizing the base. Secondly, there are libraries which 'only' provide a special feature, like the boost graph library or the circular_buffer library. They shouldn't be expected to lead to regressions of other boost libraries. I think they could be handled differently from the core libraries.
Boost.Graph changes have caused plenty of regressions in Boost.Python in the past. <snip>
Once a library is accepted, then one can get serious about making it pass with a larger number of compilers. This also takes time - a lot of time.
Passing compilers which weren't supported during the review process should under no circumstances delay the integration of the library into the boost main pool. Supporting a special compiler is only a nice-to-have feature. This does not mean it is not worth spending the effort to support important platforms. However, in the first iteration I just expect a library to be written in ANSI C++ and to be usable on platforms which support it.
I absolutely agree with that. No library needs to support vc6 in order to be acceptable. Get it checked in, then improve its portability incrementally.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com
participants (2)
- David Abrahams
- Johannes Brunen