
Martin Wille wrote:
Whatever tone might be appropriate or not ...
IMO an offensive tone towards people who have invested an incredible amount of work over the last months is never appropriate.
Several testers have raised issues and pleaded for better communication several (probably many) times. Most of the time, we seem to get ignored, unfortunately. I don't want to accuse anyone of deliberately neglecting our concerns. However, I think we apparently suffer from a "testing is not too well understood" problem at several levels.
Maybe it's the fact that I don't run incremental tests and therefore don't encounter as many problems with the tests as you, but I've never had the impression of being neglected by Aleksey and his team, except for the 'usual' newsgroup delays.
The tool chain employed for testing is very complex (due to the diversity of compilers and operating systems involved) and too fragile.

Complexity leads to a lack of understanding (among the testers and among the library developers), to false assumptions, and to a lack of communication. It additionally causes long delays between changing code and running the tests, and between running the tests and the results being rendered. This in turn makes isolating bugs in the libraries more difficult.

Fragility means the testing procedure breaks often, breaks without being noticed for some time, and breaks without anyone being able to recognize immediately which part failed.

This is a very unpleasant situation for everyone involved, and it causes a significant level of frustration, at least among those who run the tests (seeing one's own test results not being rendered for several days, or seeing the test system abused as a change announcement system, isn't exactly motivating).
Please understand that a lot of resources (human and computer) are wasted due to these problems. This waste is most apparent to those who run the tests. However, most of the time, issues raised by the testers seemed to get ignored. Maybe that was just because we didn't yell loudly enough, or we didn't know whom to address or how to fix the problems.
No doubt. [...]
The people involved in creating the test procedure have put a great deal of effort into it, and the resulting system does its job nicely when it happens to work correctly. However, the overall complexity of the testing procedure has apparently grown beyond our management capabilities. This is one reason why release preparations take so long.
I agree 100%! IMO there is one major issue that has been raised by me and others several times and that has <cynicism>successfully</cynicism> been set aside so far: Boost is getting larger and larger, but nobody wants to talk about the side effects this brings along:

- the size of the binaries has grown incredibly;
- the time and disk space needed for test runs are higher than ever before;
- Boost itself is on the way to becoming a blob of more or less unrelated code/library fragments.

IMO it's overdue to think about splitting Boost into components. It's not clear to me why a user who will never use Python must install and build boost.python on his machine. The same holds for many other Boost libraries like graph, spirit, serialization, wave, etc. It would be _much_ easier for us testers to run tests on Boost _components_ than on the complete Boost blob! If Boost continues growing as it has in the past, this will be the only way to keep regression testing at today's quality. Sorry to say it, but if we don't start thinking about this ASAP, there will definitely be a 'test breakdown' in the future.
Maybe we should take a step back and collect all the issues we have, and all the knowledge about what is causing them.
I'll make a start; I hope others will contribute to the list. Issues and causes, unordered (please excuse any duplicates):
[...] Very good idea! Thanks for putting them together here!

Cheers,
Stefan