
2007/10/23, Beman Dawes <bdawes@acm.org>:
> Most of the bigger infrastructure issues that were getting in the way have now been solved. The tarballs are working again, the missing files in the release branch have been found, and regression reporting for both trunk and the release branch is cycling smoothly.
> There are still some outstanding testing issues, but they are at the level of individual test platforms rather than the whole testing system.
> Both developers and patch submitters have been active, so it isn't as though we are starting from scratch, but the emphasis for release management is now shifting to reducing test failures in individual libraries.
> Looking at the regression test results, I'd like to call attention to these failures:
> conversion: lexical_cast_loopback_test on many platforms
> graph: csr_graph_test on many platforms
> python: import_ on many platforms
> range: iterator_range and sub_range on many platforms
> typeof: experimental_* on many platforms
I added the experimental_* tests temporarily a couple of weeks ago and removed them from svn after a few days, but the regression test system does not automatically check whether a test has been deleted, so they remain as ghost results. Is there a common way to deal with this problem?

Peder
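For illustration only, one way to spot such stale entries is to cross-check the test names appearing in the reported results against the tests still defined in svn. The Python sketch below assumes a plain-text dump of the reported test names ("typeof_results.txt") and a crude scan of the library's test Jamfile; the script, those file names, and the scanning heuristic are all assumptions for the sake of the example, not part of the Boost regression tools.

from pathlib import Path

def tests_in_results(results_listing):
    # One reported test name per line in a plain-text dump of the results table
    # (the dump itself is hypothetical -- produce it however is convenient).
    return {line.strip()
            for line in Path(results_listing).read_text().splitlines()
            if line.strip()}

def tests_in_source(jamfile):
    # Very rough scan of a Boost.Build Jamfile for test rules such as
    # "[ run experimental_1.cpp ]"; the target name defaults to the source stem.
    names = set()
    tokens = Path(jamfile).read_text().split()
    for i, tok in enumerate(tokens):
        if (tok in ("run", "run-fail", "compile", "compile-fail", "link")
                and i + 1 < len(tokens)
                and tokens[i + 1].endswith(".cpp")):
            names.add(Path(tokens[i + 1]).stem)
    return names

if __name__ == "__main__":
    reported = tests_in_results("typeof_results.txt")         # hypothetical dump
    present = tests_in_source("libs/typeof/test/Jamfile.v2")  # hypothetical path
    for ghost in sorted(reported - present):
        print("ghost result (test no longer in svn):", ghost)

A real fix would presumably live in the report-generation step itself, but a cross-check like this at least makes the ghost entries easy to list.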
> These are particularly worrisome from the release management standpoint
> because they affect many platforms, and because I'm not seeing any attempts by their developers to fix or mark up the failures.
> For many of the other failures that affect a lot of key platforms, the developers are actively committing fixes on a regular basis, so I assume those failures will be fixed or marked up in the near future.
> I'll be traveling Thursday through Tuesday, and will start moving libraries to the release branch when I get back.
> --Beman