
On Monday 08 June 2009 15:48:11 David Abrahams wrote:
> I just realized I can articulate what's not working for me about the current trunk/release branch arrangement.
> [snip]
> I'm not sure what to do about this, but it's really killin' me.
I'm relatively new to working on Boost, but while fixing date_time tickets I felt exactly as you described. What bothered me most was that quite a number of testing platforms appeared to be broken in some way. For example:

* Sandia-sun - tests fail to compile with the ridiculous error "int64_t is not a member of boost", while other platforms, including Sandia-Linux-sun, are fine. (A minimal illustration of how a configuration problem can produce exactly this error follows at the end of this message.)

* On some platforms (Huang-Vista-x86_32, for example) tests fail with the sole output "EXIT STATUS: -1073741819", which I take to be a crash of some kind; in hex that status is 0xC0000005, the Windows access-violation code (see the quick check below). However, I ran the tests with this compiler myself and had no errors, and other testers with the same compiler are also all green.

* The steven_watanabe-como platform always fails at the linking stage. I admit this may be some problem in the Jamfile used with the tests, but I have no clue as to what it is. Again, other platforms link fine.

* Some test failures simply don't have any sensible output except for "Lib [...]: fail". This is an example:
http://www.boost.org/development/tests/trunk/developer/siliconman-date_time-borland-6-1-3-testc_local_adjustor-variants_.html
Click the first link.

In the end I decided to simply ignore some of the testing platforms and keep the others from failing. I considered a change acceptable for the release branch if at least Sandia-gcc, Huang-Vista-x64, RW_WinXP_VC and Huang-Vista-x64-intel didn't introduce new failures. I also tried to maintain some SunCC platform, but due to its traditional failures that was complicated.

I think it would really help to fix the platforms that are failing due to configuration problems. Another thing that would be useful is to highlight the officially supported platforms: if tests pass on those platforms, the change is acceptable for release.

Another suggestion is to provide some kind of archiving of testing results. It would simplify tracking individual test results. It could even automatically highlight new failures or fixed tests and send an email to the developer (a sketch of the diffing step this would need also follows below). That would at least help with maintaining your own library. As for cross-library interference, I think email notifications could also help, since testing failures would be detected sooner. Given the revision number at which a new test failure was introduced, it would be easier to see what caused the problem and who made the change (not to execute the poor fellow, of course, but at least to know whom to contact).
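To illustrate the int64_t point above: this is only a sketch of what I believe is going on (an assumption on my part; I haven't inspected the Sandia-sun configuration). boost::int64_t comes from <boost/cstdint.hpp> and only exists when Boost.Config does not define BOOST_NO_INT64_T for the toolset, so a misdetecting platform configuration yields exactly the reported error:

    // Minimal sketch, assuming the Sandia-sun failure is a Boost.Config
    // detection problem rather than a date_time bug.
    #include <boost/cstdint.hpp>
    #include <iostream>

    int main()
    {
    #if defined(BOOST_NO_INT64_T)
        // A toolset that misdetects 64-bit support lands here, and any
        // library code that names boost::int64_t then fails with
        // "int64_t is not a member of boost".
        std::cout << "no 64-bit integer types configured\n";
    #else
        boost::int64_t ticks = 0;
        std::cout << "boost::int64_t available, sizeof = "
                  << sizeof(ticks) << '\n';
    #endif
        return 0;
    }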
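And the quick check mentioned for the "EXIT STATUS: -1073741819" case; nothing Boost-specific here, just reinterpreting the status as an unsigned 32-bit value:

    #include <cstdio>

    int main()
    {
        int status = -1073741819;
        // Prints "c0000005": the NT status code STATUS_ACCESS_VIOLATION,
        // which is why I read this output as a crash rather than a
        // genuine test failure.
        std::printf("%08x\n", static_cast<unsigned int>(status));
        return 0;
    }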
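Finally, the diffing step for the results-archiving idea. This is only a sketch under my own assumptions: all names here are made up, and real input would have to be parsed from whatever the regression runners already upload. The point is just that per-(test, platform) statuses from two runs, plus a revision number, are enough to generate the notifications:

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    typedef std::pair<std::string, std::string> TestKey; // (test, platform)
    typedef std::map<TestKey, bool> Results;             // true == pass

    // Compare two runs and report regressions and fixes.
    void diff(const Results& before, const Results& after, int revision)
    {
        for (Results::const_iterator i = after.begin(); i != after.end(); ++i)
        {
            Results::const_iterator j = before.find(i->first);
            if (j == before.end() || j->second == i->second)
                continue; // unknown before, or unchanged
            if (!i->second)
                std::cout << "NEW FAILURE at r" << revision << ": "
                          << i->first.first << " on " << i->first.second << '\n';
            else
                std::cout << "FIXED: " << i->first.first
                          << " on " << i->first.second << '\n';
        }
    }

    int main()
    {
        Results before, after;
        before[TestKey("testc_local_adjustor", "borland-6-1-3")] = true;
        after [TestKey("testc_local_adjustor", "borland-6-1-3")] = false;
        diff(before, after, 53000); // emailing the developer would follow here
        return 0;
    }

Hooked up to the revision history, the same diff would also tell you whom to contact.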