
Hi,

Robert Ramey wrote on 23-1-2015 at 21:05:
> Peter Dimov-2 wrote:
>> Robert's perspective here, FWIW, is that he develops and tests (quite extensively) in a configuration in which all libraries are on their master branch and Serialization is on develop.
> Thanks for pointing this out. This is true and of course a big reason that, for me, the merge from develop into master IS trivial. Sorry I forgot to (re)mention that.
You forgot to point out all the other actions I mentioned in my list.
>> On this configuration, once everything is green, merge is as easy as "merge --no-ff develop". You've already tested it, so there's no way anything can fail.
> Correct, that's why I do it this way.
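For reference, the merge step quoted above can be sketched in a throwaway repository (the branch names match the discussion; the file and commit messages are purely illustrative, not part of any Boost library):

```shell
# Minimal sketch of the "merge --no-ff develop" step in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb master
git config user.email "demo@example.com"
git config user.name "demo"
echo "v1" > lib.txt
git add lib.txt
git commit -qm "initial state on master"
git checkout -qb develop
echo "v2" > lib.txt
git commit -qam "tested feature on develop"
# ...at this point the feature has been tested on develop,
# with all other libraries on master...
git checkout -q master
git merge --no-ff --no-edit develop   # --no-ff records an explicit merge commit
git log --oneline --graph
```

The `--no-ff` flag matters here: without it, git would fast-forward master and leave no merge commit marking the integration point.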
And it is dangerous, unless you don't depend on any other Boost library. If you (with, e.g., library Y) always work on develop and test with the other libraries on master, you don't notice any ongoing development or potential problem.

Suppose you depend on library X. As soon as the release schedule comes out (moment S), you merge again because it is trivial (moment T), and you go on holiday. Library X decides (at moment M, long before S) to change a return type into something much better. You are using that feature. It happens. As soon as the release schedule comes out, they stop developing, monitor the regression matrix (for library X), and after a couple of days, just before the closing of master, they merge (moment U, after T). Now your library is broken. It does not even compile anymore. You don't see it, because you are on holiday and have done your job. Even if you are not on holiday, you might not notice it immediately, depending on how often you compile. And the master branch closes... They don't see it because they run library X unit tests, not library Y unit tests. It will be found by the users of the RC, though.

Of course it is not your fault! Why the xxx did they change the return type? But it is better... even you have to agree with that. And they did it already at moment M, and they sent a notification to the Boost mailing list (which by chance you did not read). If you had used develop for your development, you would have noticed the breakage shortly after moment M already. So it can be dangerous. The chances are low, but it happens.

Another scenario, but I will not make it too long: library Z has a new feature you really need! Do you wait until they merge it to master? Maybe, indeed, that is good practice. But it happens that another library uses it earlier than that... both on develop.
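To make the library X scenario concrete, here is a small self-contained simulation (hypothetical names throughout; a single header line stands in for X's API). Testing Y with X on master stays green, while testing with X on develop exposes the return-type change right after moment M:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Library X: master still has the old signature; develop changed the
# return type at moment M. (Hypothetical API, one line stands in for X.)
git init -q x
cd x
git checkout -qb master
git config user.email "demo@example.com"
git config user.name "demo"
echo "int x_size();" > x.h
git add x.h
git commit -qm "X on master"
git checkout -qb develop
echo "long x_size();" > x.h   # moment M: the return type changes
git commit -qam "X on develop: better return type"
cd ..

# Library Y was written against the old signature:
expected="int x_size();"

# Y's tests with X on master (the configuration under discussion): green.
git -C x checkout -q master
test "$(cat x/x.h)" = "$expected" && echo "Y against X master: OK"

# Y's tests with X on develop: the break is visible shortly after moment M.
git -C x checkout -q develop
test "$(cat x/x.h)" = "$expected" || echo "Y against X develop: BROKEN"
```

The point is only about *when* the breakage becomes visible: testing against develop surfaces it months before the release closes, testing against master hides it until moment U.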
>> Of course this has its own drawback - the boost.org tests do not work in such a manner.
> True again! But I don't think this is a reason why I shouldn't do it for my own tests. My life is so much easier since I started doing things this way. Much of the back and forth cited above just goes away.
> Barend - Why don't you try my approach as an experiment. It's super easy to set up. And if you don't agree that it makes your life a lot easier, it's just as easy to switch back. Try this and let us know how it works out for you.
The approach is not so different. You always work on master, I always on develop. And sometimes (indeed) I check on master too, by the way, without immediately merging afterwards. But please respond to the other points on the list too. Do you monitor the regression matrix? Wait for a couple of days? Or maybe that is not necessary for you? As I pointed out earlier, we have multiple people working on the library daily. You state that that makes no difference - is that really true? Do you neatly update the docs every time you merge with master?
> I've talked to Rene about changing the boost.org tests to work in the way I do it. He appreciated the idea - but was concerned about the difficulty of implementing it on all the libraries one by one. I'm still convinced that testing each library in develop against all the others in master is the way to go - but since Rene does the work, he gets to decide. That is the boost way - as it must be.
I don't know the details of that. Merging multiple dependent libraries is a complex thing, which can go wrong in many scenarios. That is why I thought it was good to respond to your statements that it is trivial. Things can be solved, and we can do our best to do it as well as possible, but it is not trivial.

Regards, Barend