
David Abrahams wrote:
Hi,
http://www.boost.org/development/tests/trunk/developer/issues_release_.html#... now shows that the changes we made are clean. The failures you're seeing there are due to tests we newly added; those tests come from "literate programming" examples that were automatically extracted from the documentation, and for some reason we had never checked them in until now.
How would you like this handled? I suppose one option is that we could avoid merging the new tests.
I'm confused. Are the tests clean or are they failing? You seem to be saying that your changes haven't introduced new regressions, but that you've added additional tests that are failing for other reasons. Is that right? What is the nature of the recent changes? Bug fixes? If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman?

--
Eric Niebler
BoostPro Computing
http://www.boostpro.com