[1.33.0] Feature freeze tonight

At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen and only bug fixes may be committed. New features should go on a separate development branch.
We have > 700 regressions and > 3000 failures showing up in the regression tests. Library authors, please review the regression test summaries for your libraries and start fixing bugs. If there are bugs that cannot be fixed for release (due to broken compilers, the need for huge changes to the library, etc.), mark them as expected failures or mark the library as "unusable" in status/explicit-failures-markup.xml.
I've updated the list of primary platforms for this release, which includes all newer versions of GCC, CodeWarrior, Visual C++, and Intel compilers. If all bugs can't be fixed, focus on the compilers in the "Release View" in the regression tests.
Let's fix those bugs!
Doug
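For reference, an entry in status/explicit-failures-markup.xml looks roughly like the sketch below. The element names follow the conventions already used in that file, but the library, test, toolset, and author values here are purely hypothetical, so check the existing entries before copying:

    <!-- hypothetical example: mark a single test as an expected failure -->
    <library name="smart_ptr">
        <mark-expected-failures>
            <test name="shared_ptr_test"/>
            <toolset name="msvc-6.5*"/>
            <note author="Maintainer Name">
                Fails because of a compiler bug; the failure does not make
                the library unusable.
            </note>
        </mark-expected-failures>
    </library>

    <!-- hypothetical example: mark a library unusable with a toolset -->
    <library name="graph">
        <mark-unusable>
            <toolset name="borland-5.6*"/>
            <note author="Maintainer Name">
                The compiler cannot handle the template techniques the
                library relies on.
            </note>
        </mark-unusable>
    </library>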

Douglas Gregor wrote:
We have > 700 regressions and > 3000 failures showing up in the regression tests. Library authors, please review the regression test summaries for your libraries and start fixing bugs. If there are bugs that cannot be fixed for release (due to broken compilers, the need for huge changes to the library, etc.), mark them as expected failures or mark the library as "unusable" in status/explicit-failures-markup.xml.
Question: what should I do if a test failure does not render the library unusable, but I don't want to mark the failure as "expected" because I _never_ expect failures. ;-) Basically, what I'm asking for is support for non-critical failures, still show as yellow in the detail view, don't make the library box turn yellow in the summary view. I think.

On Apr 22, 2005, at 7:46 AM, Peter Dimov wrote:
Douglas Gregor wrote:
We have > 700 regressions and > 3000 failures showing up in the regression tests. Library authors, please review the regression test summaries for your libraries and start fixing bugs. If there are bugs that cannot be fixed for release (due to broken compilers, the need for huge changes to the library, etc.), mark them as expected failures or mark the library as "unusable" in status/explicit-failures-markup.xml.
Question: what should I do if a test failure does not render the library unusable, but I don't want to mark the failure as "expected" because I _never_ expect failures. ;-)
The typical way to handle this is to mark the failure as expected with a note that explains that it doesn't make the library unusable.
Basically, what I'm asking for is support for non-critical failures, still show as yellow in the detail view, don't make the library box turn yellow in the summary view. I think.
Would making the color of expected failures something that's not green (everything's okay) and not yellow or red (something's broken) work for you? That would be useful for me, because I'm noticing that for certain compilers the number of expected failures in the Graph lib is approaching the number of tests... but that's not obvious at first glance. Expected failures don't make the library box yellow. Doug

Doug Gregor writes:
On Apr 22, 2005, at 7:46 AM, Peter Dimov wrote:
Basically, what I'm asking for is support for non-critical failures, still show as yellow in the detail view, don't make the library box turn yellow in the summary view. I think.
Would making the color of expected failures something that's not green (everything's okay) and not yellow or red (something's broken) work for you?
Maybe we should mark them gray -- as we do with "N/A" libraries; in some sense these tests do represent "not available" functionality. -- Aleksey Gurtovoy MetaCommunications Engineering

On Apr 22, 2005, at 9:01 AM, Aleksey Gurtovoy wrote:
Doug Gregor writes:
On Apr 22, 2005, at 7:46 AM, Peter Dimov wrote:
Basically, what I'm asking for is support for non-critical failures, still show as yellow in the detail view, don't make the library box turn yellow in the summary view. I think.
Would making the color of expected failures something that's not green (everything's okay) and not yellow or red (something's broken) work for you?
Maybe we should mark them gray -- as we do with "N/A" libraries; in some sense these tests do represent "not available" functionality.
That sounds like a great idea to me, but let's hear back from Peter first. Doug

Doug Gregor wrote:
Would making the color of expected failures something that's not green (everything's okay) and not yellow or red (something's broken) work for you? That would be useful for me, because I'm noticing that for certain compilers the number of expected failures in the Graph lib is approaching the number of tests... but that's not obvious at first glance.
Yes, this would be good enough. I wasn't sure whether this would be OK with everyone else; there's probably a reason for making expected failures and passes the same color.

On Apr 22, 2005, at 10:58 AM, Peter Dimov wrote:
Doug Gregor wrote:
Would making the color of expected failures something that's not green (everything's okay) and not yellow or red (something's broken) work for you? That would be useful for me, because I'm noticing that for certain compilers the number of expected failures in the Graph lib is approaching the number of tests... but that's not obvious at first glance.
Yes, this would be good enough. I wasn't sure whether this would be OK with everyone else; there's probably a reason for making expected failures and passes the same color.
I can't think of any compelling reason to have them the same color, though I guess the "all-green" detailed summary could give one warm fuzzies that you wouldn't get with grey expected failures... but I'd much rather have grey expected failures so users can know what works and what doesn't at a glance. Doug

Doug Gregor writes:
On Apr 22, 2005, at 10:58 AM, Peter Dimov wrote:
Doug Gregor wrote:
Would making the color of expected failures something that's not green (everything's okay) and not yellow or red (something's broken) work for you? That would be useful for me, because I'm noticing that for certain compilers the number of expected failures in the Graph lib is approaching the number of tests... but that's not obvious at first glance.
Yes, this would be good enough. I wasn't sure whether this would be OK with everyone else; there's probably a reason for making expected failures and passes the same color.
I can't think of any compelling reason to have them the same color, though I guess the "all-green" detailed summary could give one warm fuzzies that you wouldn't get with grey expected failures...
The "warm fuzzies" were actually the original reason for their current coloring :).
but I'd much rather have grey expected failures so users can know what works and what doesn't at a glance.
Note that eventually (soon, I hope) we will have a "User View" specifically tailored to user needs. This aside, grey has grown on me as well. -- Aleksey Gurtovoy MetaCommunications Engineering

Peter Dimov writes:
Douglas Gregor wrote:
We have > 700 regressions and > 3000 failures showing up in the regression tests. Library authors, please review the regression test summaries for your libraries and start fixing bugs. If there are bugs that cannot be fixed for release (due to broken compilers, the need for huge changes to the library, etc.), mark them as expected failures or mark the library as "unusable" in status/explicit-failures-markup.xml.
Question: what should I do if a test failure does not render the library unusable, but I don't want to mark the failure as "expected" because I _never_ expect failures. ;-)
Peter, I'd really like to finally make your dissatisfaction with the current markup rules go away, but in order to come up with a resolution that we all can agree on I need to understand your use case, so please bear with me while I'm trying to achieve that :). I guess my question is: do you want to keep the failures yellow "for yourself" or for users of the library? If it's the former, wouldn't keeping the already known, "cannot-do-anything-about-it" failures highlighted in the report make it much harder to notice possible new failures, thus basically rendering the detailed view useless for the purpose of examining, well, a detailed regressions/failures picture? -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy wrote:
Peter Dimov writes:
Question: what should I do if a test failure does not render the library unusable, but I don't want to mark the failure as "expected" because I _never_ expect failures. ;-)
Peter, I'd really like to finally make your dissatisfaction with the current markup rules go away, but in order to come up with a resolution that we all can agree on I need to understand your use case, so please bear with me while I'm trying to achieve that :).
I guess my question is: do you want to keep the failures yellow "for yourself" or for users of the library? If it's the former, wouldn't keeping the already known, "cannot-do-anything-about-it" failures highlighted in the report make it much harder to notice possible new failures, thus basically rendering the detailed view useless for the purpose of examining, well, a detailed regressions/failures picture?
I don't want to hide the failures in the detailed view, but the users should see a green box in the summary. This kind of failure is not common (in the libraries I maintain). There are two such failures on the smart_ptr page, and a few more on the bind page, but I don't think that they can mask new failures. A release should never go out with a "real" failure, only with "non-critical" failures. Once released, any new failures would be regressions and impossible to miss. I want to keep the non-critical failures visible because they are failures. :-) Any non-green color is fine with me.

Peter Dimov writes:
Aleksey Gurtovoy wrote:
I guess my question is: do you want to keep the failures yellow "for yourself" or for users of the library? If it's the former, wouldn't keeping the already known, "cannot-do-anything-about-it" failures highlighted in the report make it much harder to notice possible new failures, thus basically rendering the detailed view useless for the purpose of examining, well, a detailed regressions/failures picture?
I don't want to hide the failures in the detailed view, but the users should see a green box in the summary.
This kind of failure is not common (in the libraries I maintain). There are two such failures on the smart_ptr page, and a few more on the bind page, but I don't think that they can mask new failures. A release should never go out with a "real" failure, only with "non-critical" failures. Once released, any new failures would be regressions and impossible to miss.
I want to keep the non-critical failures visible because they are failures. :-)
They are. They also _are_ distinguishable from passing tests -- after all, the text says "fail*". But I tend to agree that they deserve some coloring.
Any non-green color is fine with me.
OK. Thank you for your feedback, -- Aleksey Gurtovoy MetaCommunications Engineering

Douglas Gregor <doug.gregor@gmail.com> writes:
At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen and only bug fixes may be committed. New features should go on a separate development branch.
Does that go for new libraries where it isn't possible to cause a regression with respect to 1.32.0? The Boost Parameters library is currently checked in, but not in the state we intended. The features we have now are in very good shape, but several extensions were requested that we intended to have implemented for this release and "of course" we need to re-do all the documentation. I realize docs are in a different class from everything else... Anyway, we can of course move the new development to a branch. I'm not pushing to do it on the main trunk; I just want clarification. Thanks, -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sun, 24 Apr 2005 12:52:20 +0200, David Abrahams wrote
Douglas Gregor <doug.gregor@gmail.com> writes:
At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen and only bug fixes may be committed. New features should go on a separate development branch.
Does that go for new libraries where it isn't possible to cause a regression with respect to 1.32.0? The Boost Parameters library is currently checked in, but not in the state we intended. The features we have now are in very good shape, but several extensions were requested that we intended to have implemented for this release and "of course" we need to re-do all the documentation. I realize docs are in a different class from everything else...
Anyway, we can of course move the new development to a branch. I'm not pushing to do it on the main trunk; I just want clarification.
Not the release manager opinion here: seems to me that you should be able to proceed on the main branch because as you say nothing else depends on the new lib -- so any mistakes only affect named parms. Worst case is that Doug can pull the library before the release if it doesn't get to an acceptable condition in time. Of course if you are altering other libs (say mpl for random example) as part of your work then this theory is blown to shreds... Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> wrote in message news:20050424140422.M42821@crystalclearsoftware.com... | On Sun, 24 Apr 2005 12:52:20 +0200, David Abrahams wrote | > Douglas Gregor <doug.gregor@gmail.com> writes: | > | > > At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen | > > and only bug fixes may be committed. New features should go on a | > > separate development branch. | > | > Does that go for new libraries where it isn't possible to cause a | > regression with respect to 1.32.0? | Not the release manager opinion here: seems to me that you should be able to | proceed on the main branch because as you say nothing else depends on the new | lib -- so any mistakes only affect named parms. I think this branching thing is not going to help anybody. I would really hate to redo my corrections twice. -Thorsten

"Thorsten Ottosen" <nesotto@cs.auc.dk> writes:
"Jeff Garland" <jeff@crystalclearsoftware.com> wrote in message news:20050424140422.M42821@crystalclearsoftware.com... | On Sun, 24 Apr 2005 12:52:20 +0200, David Abrahams wrote | > Douglas Gregor <doug.gregor@gmail.com> writes: | > | > > At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen | > > and only bug fixes may be committed. New features should go on a | > > separate development branch. | > | > Does that go for new libraries where it isn't possible to cause a | > regression with respect to 1.32.0?
| Not the release manager opinion here: seems to me that you should | be able to proceed on the main branch because as you say nothing | else depends on the new lib -- so any mistakes only affect named | parms.
I think this branching thing is not going to help anybody.
Well of course it would keep the HEAD (and the status of the params library tests) more stable. Churn in the test status can cause delays, even if there are no dependent libraries. It's a calculated risk.
I would really hate to redo my corrections twice.
What corrections? -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:uis2biz3c.fsf@boost-consulting.com... | "Thorsten Ottosen" <nesotto@cs.auc.dk> writes: | > I would really hate to redo my corrections twice. | | What corrections? If I understand this correctly, then there will exists two branches and any patch to one of them would need to go into the other too. -Thorsten

"Thorsten Ottosen" <nesotto@cs.auc.dk> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:uis2biz3c.fsf@boost-consulting.com... | "Thorsten Ottosen" <nesotto@cs.auc.dk> writes:
| > I would really hate to redo my corrections twice. | | What corrections?
How would changes to Boost.Parameter, which none of your code depends on, cause you to make any corrections at all?
If I understand this correctly, then there will exist two branches and any patch to one of them would need to go into the other too.
The policy for this release is that there is the HEAD, which is stable during the release period, and additionally, as many branches as a library author desires in order to do his/her own parallel development. The only reason to touch the HEAD and a branch at the same time is to introduce a *fix* to the release and also have it for testing the parallel development branch. If the HEAD is in pretty good shape to begin with, that won't be necessary often; normally after release you just merge any branches you want to continue with into the HEAD in one step. -- Dave Abrahams Boost Consulting www.boost-consulting.com
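In CVS terms, the workflow Dave describes might look roughly like the following; the branch name is hypothetical and the exact commands depend on how your working copy is set up:

    # create a development branch from the current trunk state
    cvs tag -b param-dev-branch

    # switch a working copy onto the branch and do new-feature work there
    cvs update -r param-dev-branch

    # bug fixes are committed on the HEAD; to carry trunk fixes onto the
    # branch, merge the trunk changes into the branch working copy
    cvs update -j HEAD

    # after the release, go back to the trunk and merge the branch in one step
    cvs update -A
    cvs update -j param-dev-branch
    cvs commit -m "merge param-dev-branch into HEAD"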

On Apr 25, 2005, at 10:33 AM, Thorsten Ottosen wrote:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:uis2biz3c.fsf@boost-consulting.com... | "Thorsten Ottosen" <nesotto@cs.auc.dk> writes:
| > I would really hate to redo my corrections twice. | | What corrections?
If I understand this correctly, then there will exist two branches and any patch to one of them would need to go into the other too.
Only if you're doing new-feature development at the same time that you're fixing bugs. For the main trunk of CVS (HEAD), only bug fixes are allowed. If you are developing new features, create a branch and work on those new features there... bug fixes will likely have to go on both branches. However, I would prefer that effort be spent fixing bugs rather than developing new features, so that we can get this release out the door. Doug

On Apr 24, 2005, at 5:52 AM, David Abrahams wrote:
Douglas Gregor <doug.gregor@gmail.com> writes:
At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen and only bug fixes may be committed. New features should go on a separate development branch.
Does that go for new libraries where it isn't possible to cause a regression with respect to 1.32.0? The Boost Parameters library is currently checked in, but not in the state we intended. The features we have now are in very good shape, but several extensions were requested that we intended to have implemented for this release and "of course" we need to re-do all the documentation. I realize docs are in a different class from everything else...
We have a lot of regressions and a lot of new failures. Changes to entirely new libraries have a much lower risk associated with them, but you know that... I'd rather not see more extensions added to any library at this point, but if they are *really* small and *really* safe I guess it's okay. Doug

I'm currently changing the shared_ptr serialization to address the recent changes in the shared_ptr implementation and to re-implement it in a manner that has been discussed elsewhere on this list. This will fix all the shared_ptr serialization test failures. I presume this falls under the heading of "bug fix" even though it's really significant development work? Robert Ramey
Douglas Gregor wrote:
At midnight tonight (Friday) the main trunk/CVS HEAD is feature-frozen and only bug fixes may be committed. New features should go on a separate development branch.
We have > 700 regressions and > 3000 failures showing up in the regression tests. Library authors, please review the regression test summaries for your libraries and start fixing bugs. If there are bugs that cannot be fixed for release (due to broken compilers, the need for huge changes to the library, etc.), mark them as expected failures or mark the library as "unusable" in status/explicit-failures-markup.xml.
I've updated the list of primary platforms for this release, which includes all newer versions of GCC, CodeWarrior, Visual C++, and Intel compilers. If all bugs can't be fixed, focus on the compilers in the "Release View" in the regression tests.
Let's fix those bugs!
Doug

On Apr 29, 2005, at 11:34 AM, Robert Ramey wrote:
I'm currently changing the shared_ptr serialization to address the recent changes in the shared_ptr implementation and to re-implement it in a manner that has been discussed elsewhere on this list. This will fix all the shared_ptr serialization test failures. I presume this falls under the heading of "bug fix" even though it's really significant development work?
This sounds like an important bug fix. Go ahead and make the changes. Doug
participants (8)
- Aleksey Gurtovoy
- David Abrahams
- Doug Gregor
- Douglas Gregor
- Jeff Garland
- Peter Dimov
- Robert Ramey
- Thorsten Ottosen