[Release Managers] Merging Parameter to Release

Hi, http://www.boost.org/development/tests/trunk/developer/issues_release_.html#... now shows that the changes we made are clean. The failures you're seeing there are due to tests we newly added; those tests are from "literate programming" examples that were automatically extracted from the documentation, and for some reason we had never checked them in until now. How would you like this handled? I suppose one option is that we could avoid merging the new tests. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
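For context, the "literate programming" extraction Dave describes means pulling each code example out of the documentation source and compiling it as its own regression test, so the examples cannot silently go stale. The following is only a rough sketch of that idea, not the actual Boost/Quickbook tooling; the [example_begin]/[example_end] markers, file names, and paths are invented purely for illustration.

    # Hypothetical sketch only (not the actual Boost/Quickbook tooling):
    # scan a documentation source file for marked code examples and write
    # each one out as a standalone .cpp test file.
    import re
    from pathlib import Path

    # Invented markers for illustration; real documentation formats use
    # their own conventions for delimiting code examples.
    EXAMPLE_RE = re.compile(r"\[example_begin\](.*?)\[example_end\]", re.DOTALL)

    def extract_doc_tests(doc_path, out_dir):
        """Write each embedded example to its own file and return the paths."""
        text = Path(doc_path).read_text()
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        written = []
        for i, match in enumerate(EXAMPLE_RE.finditer(text)):
            test_file = out / "doc_example_{}.cpp".format(i)
            test_file.write_text(match.group(1).strip() + "\n")
            written.append(test_file)
        return written

    if __name__ == "__main__":
        # Hypothetical file names; adjust for the documentation actually used.
        for path in extract_doc_tests("parameter.qbk", "test/literate"):
            print("extracted", path)

Each extracted file would then presumably be registered in the library's test Jamfile so the regression runners build it like any other test.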

David Abrahams wrote:
Hi,
http://www.boost.org/development/tests/trunk/developer/issues_release_.html#... now shows that the changes we made are clean. The failures you're seeing there are due to tests we newly added; those tests are from "literate programming" examples that were automatically extracted from the documentation, and for some reason we had never checked them in until now.
How would you like this handled? I suppose one option is that we could avoid merging the new tests.
I'm confused. Are the tests clean or are they failing? You seem to be saying that your changes haven't introduced new regressions, but that you've added additional tests that are failing for other reasons, is that right? What is the nature of the recent changes? Bug fixes? If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman? -- Eric Niebler BoostPro Computing http://www.boostpro.com

on Fri Jan 30 2009, Eric Niebler <eric-AT-boost-consulting.com> wrote:
David Abrahams wrote:
Hi,
http://www.boost.org/development/tests/trunk/developer/issues_release_.html#...
now shows that the changes we made are clean. The failures you're seeing there are due to tests we newly added; those tests are from "literate programming" examples that were automatically extracted from the documentation, and for some reason we had never checked them in until now.
How would you like this handled? I suppose one option is that we could avoid merging the new tests.
I'm confused. Are the tests clean or are they failing?
The tests I just added are failing. The changes to the library code itself introduced no new failures.
You seem to be saying that your changes haven't introduced new regressions, but that you've added additional tests that are failing for other reasons, is that right?
Yes.
What is the nature of the recent changes? Bug fixes?
A bug fix and associated doc change. https://svn.boost.org/trac/boost/changeset/50863
If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman?
That's the possibility I was suggesting. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
What is the nature of the recent changes? Bug fixes?
A bug fix and associated doc change. https://svn.boost.org/trac/boost/changeset/50863
If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman?
That's the possibility I was suggesting.
Hmmm - wouldn't that just be hiding a bug? Are we doing anyone any favor by hiding a test known to fail? I think you should just leave the new tests in even though they are failing. Perhaps an addition in the release notes to indicate a pending issue would be in order. I presume that the newer version is strictly better than the previous one so there's no question that it should be released. I would say,
a) Release the library with the failing test
b) Note the recently detected bug in the release notes
c) Address the bug separately.
Note that this wouldn't break precedent in any way since all libraries have test failures on at least some platforms. Robert Ramey

On Fri, Jan 30, 2009 at 3:14 PM, Robert Ramey <ramey@rrsd.com> wrote:
David Abrahams wrote:
What is the nature of the recent changes? Bug fixes?
A bug fix and associated doc change. https://svn.boost.org/trac/boost/changeset/50863
If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman?
That's the possibility I was suggesting.
Hmmm - wouldn't that just be hiding a bug? Are we doing anyone any favor by hiding a test known to fail?
I think you should just leave the new tests in even though they are failing. Perhaps an addition in the release notes to indicate a pending issue would be in order.
I presume that the newer version is strictly better than the previous one so there's no question that it should be released.
I would say,
a) Release the library with the failing test
b) Note the recently detected bug in the release notes
c) Address the bug separately.
Note that this wouldn't break precedent in any way since all libraries have test failures on at least some platforms.
Agree. --Beman

Beman Dawes wrote:
On Fri, Jan 30, 2009 at 3:14 PM, Robert Ramey <ramey@rrsd.com> wrote:
David Abrahams wrote:
What is the nature of the recent changes? Bug fixes? A bug fix and associated doc change. https://svn.boost.org/trac/boost/changeset/50863
If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman? That's the possibility I was suggesting. Hmmm - wouldn't that just be hiding a bug? Are we doing anyone any favor by hiding a test known to fail?
We're not hiding anything. The test is failing on trunk. The idea was to avoid the false impression of a regression.
I think you should just leave the new tests in even though they are failing. Perhaps an addition in the release notes to indicate a pending issue would be in order.
I presume that the newer version is strictly better than the previous one so there's no question that it should be released.
Correct.
I would say,
a) Release the library with the failing test
b) Note the recently detected bug in the release notes
c) Address the bug separately.
Note that this wouldn't break precedent in any way since all libraries have test failures on at least some platforms.
Agree.
OK. -- Eric Niebler BoostPro Computing www.boostpro.com

Hi,
Eric Niebler <eric@boostpro.com> wrote:
Beman Dawes wrote:
On Fri, Jan 30, 2009 at 3:14 PM, Robert Ramey <ramey@rrsd.com> wrote:
David Abrahams wrote:
What is the nature of the recent changes? Bug fixes? A bug fix and associated doc change. https://svn.boost.org/trac/boost/changeset/50863
If so, I'm inclined to accept the bug fixes and reject the additional tests to avoid the appearance of a regression. Beman? That's the possibility I was suggesting. Hmmm - wouldn't that just be hiding a bug? Are we doing anyone any favor by hiding a test known to fail?
We're not hiding anything. The test is failing on trunk. The idea was to avoid the false impression of a regression.
I think you should just leave the new tests in even though they are failing. Perhaps an addition in the release notes to indicate a pending issue would be in order.
I presume that the newer version is strictly better than the previous one so there's no question that it should be released.
Correct.
I would say,
a) Release the library with the failing test
b) Note the recently detected bug in the release notes
c) Address the bug separately.
Note that this wouldn't break precedent in any way since all libraries have test failures on at least some platforms.
Agree.
OK.
I like the proposal. It is transparent. I'm wondering whether all libraries should list in the release notes all the tests that are not working now. Said in other terms, if we need to document in the release notes everything that does not work yet, the easy way is to add a test case if one is not already present. If users can get the regression results of the final release, the release notes could just point to them, and so nothing more would need to be added to the release notes as long as there is an associated test case and the regression results are frozen. I'm not saying that it is not a good idea to include this in the release notes; I'm just trying to implicitly cover the other test cases that are not working now. What do you think? BTW, can we get the regression results of the final 1.37? Thanks, Vicente P.S. Things that do not work yet could include some open bugs.
participants (6)
- Beman Dawes
- David Abrahams
- Eric Niebler
- Eric Niebler
- Robert Ramey
- vicente.botet