
Martin Wille wrote:
A harder problem is adding a new toolset. In that case, hundreds of test failures may pop up and nobody really feels responsible for looking into them, effectively leaving that work to the release manager, unless he decides to consider that toolset not relevant for the release (in which case the testing effort is wasted).
We need a way to organize the addition of toolsets. The test runner alone can't be made responsible for fixing all the problems that get reported. Neither should the release manager be responsible for driving the process at release preparation time.
I think the proposed practice would also apply to toolsets. In fact, I think it's a lot easier than improving a library itself. Someone decides to add a new toolset. He has the current release (stable) version on his desktop. He builds the whole stable version with his new toolset. Any failures are particular to the toolset, so he may fiddle around and minimize them. At that point he builds markup for that toolset and merges it (or requests a merge into the stable branch). Maybe there is a re-test with the new toolset. But likely, since it's a new toolset, he is the only one with it, so that pretty much has to be the end of it unless he's willing to test it on request (which he would probably be expected to do).

Now that is going to leave a situation which some people aren't going to like: the "Next" release is typically going to have a new toolset with lots of failures. But the question isn't whether it's perfect; the question is whether the Next release is better than the current one. Well, it IS better, even though it has more failures. The total breadth of applicability is broader than that of the previous version. We can't make releases perfect, no matter how long we stretch out the delivery time, no matter how much we put current development on hold, no matter how many times we test. We can guarantee that each release is better than the current one, and we should do that as frequently as is practical.

Robert Ramey
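
PS: For concreteness, the "markup" step above amounts to adding entries for the new toolset to the failure markup file (status/explicit-failures-markup.xml in the tree). The sketch below is only meant to show the general shape; the toolset name, library, test name, and note text are placeholders, not real entries, and the exact attributes should be checked against the file itself:

    <explicit-failures-markup>
      <!-- declare the new toolset; whether it is "required" or
           "optional" for the release is a separate decision -->
      <mark-toolset name="new-toolset-1.0" status="optional"/>

      <library name="some_lib">
        <!-- tests known to fail with the new toolset, with a note
             explaining why, so they show up as expected failures -->
        <mark-expected-failures>
          <test name="some_test"/>
          <toolset name="new-toolset-1.0"/>
          <note author="Toolset Contributor">
            Fails due to a known compiler limitation; workaround pending.
          </note>
        </mark-expected-failures>
      </library>
    </explicit-failures-markup>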