
Nicola Musatti wrote:
The only "coordination" required is a means to request the test and return the results. There is no reason this process has to be "coordinated" with anyone else - except to agree that the test is to be run after moving to the proper development branch. Beman's proposal doesn't detail the mechanism for requesting a test and posting results - but I don't see that as a big issue.
The difference is that this part is no longer taking place on your machine, but on the regression test machines - otherwise how could you test your changes on primary platforms you don't have access to? Access to these machines does require coordination. In my opinion Beman's and other proposals seem to assume that there won't be contention for resources here. That is not an assumption I'd draw from seeing how things are going now.
The essential fact is that this testing can run totally independently of everyone else - in fact that is exactly what happens when I run a test on my own machine. I don't have to "coordinate" this with anyone and so far no one has complained.
In the light of what I wrote above I don't see how this may be the case.
Here is an example of a system that would implement the testing required by Beman's proposal. I'm not proposing this system; I am just demonstrating that it is possible.

a) A developer wants to test a particular combination of library, development branch, compiler, operating system, standard library, variant, etc., and posts such a request to a central list of such requests.

b) Someone is willing / interested in testing some combination of library, development branch, compiler, operating system, standard library, variant, etc. (Of course he might use a "*" wild card for one of the test parameters.) When he posts his willingness to test, he is returned a list of tests to run.

c) The tests are run on the tester's machine and the results are uploaded to the central test repository and/or emailed to the requestor.

So the decisions about what to test and when to test it are "coordinated" without requiring developers and testers to work in lockstep. Each library can be developed independently at its own pace.
I attribute this situation to the fact that we're performing a sort of continuous integration in two different environments at the same time (HEAD and RC branch) most of the time, without a mechanism or procedure to ensure that breakage is swiftly dealt with.
I think we mostly agree on the source of the problem. Beman's approach tries to keep breakage out of the system rather than building an ever more elaborate system to deal with it.
This in turn is in part induced by the lack of private "repositories" for development work that isn't yet ready for integration. In a way we would probably improve things by just stopping all regression tests on HEAD.
Agreed - this testing is helpful neither to developers nor to users.
You seem to assume that libraries cannot improve.
Nope. I have concluded that breaking interfaces is a bad thing for users. It is a breach of trust between the library developer and the library user. It presumes that users have nothing else to do but constantly tweak their own code to keep up with the library's changes. This defeats the whole purpose of using a library in the first place. Library developers who do this will lose "customers". We have already seen examples of this. Having said that, boost library authors have been pretty good about not breaking interfaces, so I don't think this will be a huge problem in practice. Libraries can improve their implementations and can ADD interfaces with no problem. That works well with the system.
The way out of this would be to share and discuss one's intentions with one's users and then coordinate the introduction of such changes with the authors of libraries that depend on the changing interfaces.
Nope. User code is already written and running. Suppose there are 1000 users of the serialization library, and suppose I were to make a change for 1.34 that required all those users to investigate their code and tweak it. Do you really think that would be OK?
I don't think users would appreciate the introduction of duplicated functionality in family of libraries that, rightly or wrongly, is perceived as a mostly coherent whole.
Now that is the central question. "Mostly coherent whole" highlights the issue. I would guess that when boost started out, the "model" that was envisioned was the STL, which is a very coherent whole. I think experience has shown that such a vision is a chimera and no amount of work will make it reality. The next best thing (actually a better thing) is a large set of decoupled libraries which have stable interfaces. Note that by "decoupled" I don't mean that a library should not depend upon another library, but rather that it should depend only upon that library's documented interface. The best thing that boost - and only boost - can do is to use its review process to promote/enforce:

a) conceptual integrity - libraries which do one thing without mixing in a bunch of orthogonal functionality
b) well defined and well documented interfaces
c) quality implementation
d) quality documentation

Library "unit testing" against the released libraries enhances these goals.
[...]
In my opinion the flexibility stems from the fact that, with a much shorter release cycle, skipping one is not going to be such a big deal, so developers will be faced with reasonable alternatives. For that to happen each cycle must be managed very rigorously.
That's what's causing the current difficulty. Working harder at it will just make it worse.
This is where we disagree. I keep thinking that if self discipline were enough, there wouldn't be problems right now.
I agree - it's not about self discipline. It's not that people aren't working hard enough. As I said, working harder won't help. It's that we're doing the wrong thing.

Robert Ramey