
on Fri Jan 30 2009, Eric Niebler <eric-AT-boost-consulting.com> wrote:

> David Abrahams wrote:
>> Hi,
>>
>> http://www.boost.org/development/tests/trunk/developer/issues_release_.html#...
>>
>> now shows that the changes we made are clean. The failures you're
>> seeing there come from newly added tests: "literate programming"
>> examples that were automatically extracted from the documentation and
>> that, for some reason, we had never checked in until now.
>>
>> How would you like this handled? I suppose one option is that we could
>> avoid merging the new tests.
>
> I'm confused. Are the tests clean or are they failing?

The tests I just added are failing. The changes to the library code itself introduced no new failures.

> You seem to be saying that your changes haven't introduced new
> regressions, but that you've added additional tests that are failing
> for other reasons. Is that right?

Yes.

> What is the nature of the recent changes? Bug fixes?

A bug fix and associated doc change. https://svn.boost.org/trac/boost/changeset/50863

> If so, I'm inclined to accept the bug fixes and reject the additional
> tests to avoid the appearance of a regression. Beman?

That's the possibility I was suggesting.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com