Why the integration branch must remain clean, etc.

I just realized I can articulate what's not working for me about the current trunk/release branch arrangement.

* IIUC, there are always lots of failures on the trunk because there's no pressure to keep it clean.
* Before I check in a change in one of my libraries, I want to make sure I haven't broken anything else in Boost. So naturally, I run the whole regression suite (there oughta be a better way!)
* But since the trunk is not clean, I spend a lot of time analyzing problems that have nothing to do with my library.

That's a real disincentive to making changes in Boost. In fact, I am confident my low activity in the repo over the last year or so is primarily caused by the amount of labor associated with testing my changes.

Now, maybe you'll suggest that I test my own library, then damn the torpedoes and check in the changes, and let other people complain to me when their libraries break. That would work... except that many people aren't paying attention to the health of their libraries on the trunk.

I suppose another approach might be to test against the release branch, then svn switch to the trunk and check in my changes there. But that is still laborious.

I'm not sure what to do about this, but it's really killin' me.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

On Mon, Jun 8, 2009 at 12:48 PM, David Abrahams <dave@boostpro.com> wrote:
I just realized I can articulate what's not working for me about the current trunk/release branch arrangement. [..snip..]
I entirely agree: a dirty trunk is a huge disincentive to activity. I'd like to see a clean-trunk policy, but quite how that could be encouraged and/or enforced is difficult to imagine. Without change, tho', this is only going to get worse as Boost gains more contributors and a wider audience. I guess we either have to look at some kind of gatekeeper arrangement, or think DVCSs.

- Rob.

David Abrahams wrote:
I just realized I can articulate what's not working for me about the current trunk/release branch arrangement.
* IIUC, there are always lots of failures on the trunk because there's no pressure to keep it clean.
That's always been the case.
* Before I check in a change in one of my libraries, I want to make sure I haven't broken anything else in boost. So naturally, I run the whole regression suite (there oughta be a better way!)
* But since the trunk is not clean, I spend a lot of time analyzing problems that have nothing to do with my library.
That has always been a problem for me.
That's a real disincentive to making changes in Boost. In fact, I am confident my low activity in the repo over the last year or so is primarily caused by the amount of labor associated with testing my changes.
Now, maybe you'll suggest that I test my own library, then damn the torpedoes and check in the changes, and let other people complain to me when their libraries break.
I believe that happens a lot since there doesn't seem to be much alternative.
That would work... except that many people aren't paying attention to the health of their libraries on the trunk.
I suppose another approach might be to test against the release branch, then svn switch to the trunk and check in my changes there.
This has worked very well for me.
But that is still laborious.
And it's not laborious at all. I have a boost release tree on my machine. The three directories related to the serialization library have been switched to the trunk. I run the serialization library tests on my own machine with no problem. No extra effort is required to know that all errors are due to my own local changes. There are no "hidden variables" or "side effects". It has saved me huge amounts of wasted effort.
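[For readers unfamiliar with the technique being discussed: Ramey's setup can be sketched with standard svn commands. The repository URLs and the exact set of switched directories below are illustrative assumptions, not a transcript of his configuration.]

```shell
# Start from a working copy of the release branch (illustrative URL/paths).
svn checkout http://svn.boost.org/svn/boost/branches/release boost-release
cd boost-release

# Switch only the serialization library's directories to the trunk,
# leaving the rest of the tree on the release branch. (Ramey mentions
# three directories; which three is an assumption here.)
svn switch http://svn.boost.org/svn/boost/trunk/boost/serialization \
    boost/serialization
svn switch http://svn.boost.org/svn/boost/trunk/libs/serialization \
    libs/serialization

# Local test failures can now only come from local serialization changes;
# commits made from the switched directories land on the trunk.
svn status
```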
I'm not sure what to do about this, but it's really killin' me.
Take my advice. Robert Ramey

on Mon Jun 08 2009, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
David Abrahams wrote:
I suppose another approach might be to test against the release branch, then svn switch to the trunk and check in my changes there.
This has worked very well for me.
But that is still laborious.
And it's not laborious at all.
It is. You're suggesting something else:
I have a boost release tree on my machine. The three directories related to the serialization library have been switched to the trunk.
I was talking about switching the whole tree over to trunk just before checkin (and testing again to make sure you're not breaking anything there).

Your approach works pretty well until you find an issue in some utility outside your own library --- say, it needs a workaround for a compiler you're testing against --- and you have to make a tweak there. You risk checking that tweak in on the release branch. Also, I'm responsible for more diverse areas of the Boost SVN tree than you are, so it might get hard to keep track of which parts are switched. Also there's a good chance of breaking something in the trunk because you haven't tested against that code.

Hmm, a clean compile against the trunk is supposed to be a prerequisite for a merge to release, but not testing against the trunk might be a gamble worth taking if that breakage is rare enough. Seems like this approach could contribute a little to making the trunk more broken overall, though, and we both agree that brokenness on trunk is a problem.
I run serialization libraries tests on my own machine with no problem. No extra effort is required to know that all errors are due to my own local changes. There are no "hidden variables" or "side effects". It has saved me huge amounts of wasted effort.
I'm not sure what to do about this, but it's really killin' me.
Take my advice.
I'll try it, but I'm worried that it's more viable for you than it is for me for some of the reasons I cited above. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
on Mon Jun 08 2009, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
David Abrahams wrote:
I suppose another approach might be to test against the release branch, then svn switch to the trunk and check in my changes there.
This has worked very well for me.
But that is still laborious.
And it's not laborious at all.
It is. You're suggesting something else:
I have a boost release tree on my machine. The three directories related to the serialization library have been switched to the trunk.
I was talking about switching the whole tree over to trunk just before checkin (and testing again to make sure you're not breaking anything there).
This amounts to using the "rest of boost" to test one library. For most libraries this isn't very cost/time effective. It really tests for interface-breaking changes. I wonder, do you go back and check ALL the failures in ALL the libraries to see which ones might be related to the changes in your library and which ones are not? That would be a huge amount of work. Perhaps it might be worth it for MPL, but for most libraries I doubt it.
Your approach works pretty well until you find an issue in some utility outside your own library --- say, it needs a workaround for a compiler you're testing against --- and you have to make a tweak there. You risk checking that tweak in on the release branch. Also, I'm responsible for more diverse areas of the Boost SVN tree than you are, so it might get hard to keep track of which parts are switched.
I totally understand this. One thing that might be considered is that the release branch be locked by the release manager and unlocked as each developer wants to merge. This would give the release manager a means to exercise the authority he might require. At this time, I'm not willing to argue for this though; I'm aware of the tradeoffs. Actually, I would just leave this option to the discretion of the release manager. I could easily imagine that he might want to use this idea just when a release is close.
Also there's a good chance of breaking something in the trunk because you haven't tested against that code. Hmm, a clean compile against the trunk is supposed to be a prerequisite for a merge to release, but not testing against the trunk might be a gamble worth taking if that breakage is rare enough.
FWIW, I test my own library locally against a couple of compilers that I have, then check into the trunk and see what happens (that is, wait for complaints). Since not very much is dependent on the serialization library, this hasn't happened in years.
Seems like this approach could contribute a little to making the trunk more broken overall, though,
and we both agree that brokenness on trunk is a problem.
Ahhhh - this is the crux of the matter for me. I don't agree with this. The trunk is never released. The trunk is always broken, and it's only a problem because people test against it. If the trunk system testing were changed so that each library were tested against the current release, this wouldn't be noticed at all. Basically, changing more than one variable (library) at a time and then testing hides information.

Some time ago, I changed the serialization library tests so as not to use the boost test component. It's not that boost test was bad; it's that testing the serialization library with a test system which was still in flux generated a huge amount of work in tracking down the sources of test failures. The same goes for boost build. Now I test locally only against release and I don't have those problems. Now that I'm doing that, I might well go back to using boost test.
Take my advice.
I'll try it, but I'm worried that it's more viable for you than it is for me for some of the reasons I cited above.
Hmm - I should have said, "Try my method." Robert Ramey

On Monday 08 June 2009 15:48:11 David Abrahams wrote:
I just realized I can articulate what's not working for me about the current trunk/release branch arrangement.
[snip]
I'm not sure what to do about this, but it's really killin' me.
I'm relatively new to working on Boost, but while fixing date_time tickets I felt exactly as you described. What bothered me most is that quite a number of testing platforms appear to be broken in some way. For example:

* Sandia-sun - tests fail to compile with the ridiculous error "int64_t is not a member of boost", while other platforms, including Sandia-Linux-sun, are fine.
* On some platforms (Huang-Vista-x86_32, for example) tests fail with the sole output "EXIT STATUS: -1073741819", which I take to be a crash of some kind. However, I ran tests with this compiler and had no errors. Other testers with the same compiler are also all green.
* The steven_watanabe-como platform always fails at the linking stage. I admit this may be some problem in the Jamfile used with the tests, but I have no clue as to what it is. Again, other platforms link fine.
* Some test failures simply don't have any sensible output except for "Lib [...]: fail". This is an example (click the first link): http://www.boost.org/development/tests/trunk/developer/siliconman-date_time-borland-6-1-3-testc_local_adjustor-variants_.html

In the end I decided to simply ignore some of the testing platforms and keep the others from failing. I considered a change acceptable for the release branch if at least Sandia-gcc, Huang-Vista-x64, RW_WinXP_VC and Huang-Vista-x64-intel didn't introduce new failures. I also tried to maintain some SunCC platform, but due to its traditional failures that was complicated.

I think it would really help to fix the platforms that are failing due to configuration problems. Another thing that would be useful is to highlight officially supported platforms: if tests pass on those platforms, the change is acceptable for release. Another suggestion is to provide some kind of testing-results archiving. It would simplify tracking individual test results. It could even automatically highlight new failures or fixed tests and send an email to the developer.
That would at least help in maintaining your own library. As for cross-library interference, I think email notifications could also help, as testing failures would be detected sooner. Given the revision number at which a new test failure was introduced, it would be easier to see what caused the problem and who made the change (not to execute the poor man, of course, but at least to know whom to contact).

AMDG Andrey Semashev wrote:
* The steven_watanabe-como platform always fails at the linking stage. I admit this may be some problem in the Jamfile used with the tests, but I have no clue as to what it is. Again, other platforms link fine.
The problem is that the como-win toolset doesn't support dynamic linking. Anything that tries to use a dll is going to fail at the moment. In Christ, Steven Watanabe

AMDG Andrey Semashev wrote:
* Sandia-sun - tests fail to compile with the ridiculous error "int64_t is not a member of boost", while other platforms, including Sandia-Linux-sun, are fine.
This isn't a ridiculous error at all. Uses of int64_t should be protected by #ifndef BOOST_NO_INT64_T. In Christ, Steven Watanabe

On Tuesday 09 June 2009 00:53:31 Steven Watanabe wrote:
AMDG
Andrey Semashev wrote:
* Sandia-sun - tests fail to compile with the ridiculous error "int64_t is not a member of boost", while other platforms, including Sandia-Linux-sun, are fine.
This isn't a ridiculous error at all. Uses of int64_t should be protected by #ifndef BOOST_NO_INT64_T
Strange, I was sure these tests were successful some time ago, so 64-bit integers should be available on these compilers. Looking at Boost.Config tests, it looks like config tests aren't run on this compiler, so we actually don't know if BOOST_NO_INT64_T is valid.

On Tuesday 09 June 2009 01:54:18 you wrote:
On Tuesday 09 June 2009 00:53:31 Steven Watanabe wrote:
AMDG
Andrey Semashev wrote:
* Sandia-sun - tests fail to compile with the ridiculous error "int64_t is not a member of boost", while other platforms, including Sandia-Linux-sun, are fine.
This isn't a ridiculous error at all. Uses of int64_t should be protected by #ifndef BOOST_NO_INT64_T
Strange, I was sure these tests were successful some time ago, so 64-bit integers should be available on these compilers. Looking at Boost.Config tests, it looks like config tests aren't run on this compiler, so we actually don't know if BOOST_NO_INT64_T is valid.
Oh, well, I'm being hasty. Sorry. The config test seems to pass. How the date_time tests passed before is still beyond me.

AMDG Andrey Semashev wrote:
On Tuesday 09 June 2009 01:54:18 you wrote:
Looking at Boost.Config tests, it looks like config tests aren't run on this compiler, so we actually don't know if BOOST_NO_INT64_T is valid.
Oh, well, I'm being hasty. Sorry. The config test seems to pass. How the date_time tests passed before is still beyond me.
Well, thanks for making me look at the Boost.Config tests, anyway. limits_test is failing because of a build system problem... In Christ, Steven Watanabe
participants (5)
- Andrey Semashev
- David Abrahams
- Robert Jones
- Robert Ramey
- Steven Watanabe