[Test library] wide_string test and wputenv

The test library's wide_string tests are failing on every platform because "wputenv" does not exist. Googling around a bit, I don't see much mention at all of "wputenv". There is a _wputenv on Microsoft compilers, but it takes wchar_t pointers, not char pointers.
This is part of a much larger problem... a quick count in the regression tests shows about 300 failures coming from the test library. Most of these are on compilers we usually consider rather broken, but we need someone to look at these failures and either fix them or mark them. The Boost.Test library is one of the core libraries, on which every other library depends. It needs to be solid, stable, and portable, or we cannot release 1.33.0. Doug Gregor 1.33.0 Release Manager
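For reference, a minimal sketch (not Boost.Test code; the helper name is hypothetical) of why a plain "wputenv" call cannot work portably: Microsoft's _wputenv expects wchar_t pointers, so a narrow "NAME=value" string has to be widened first, while other platforms offer only putenv:

    #include <stdlib.h>   /* putenv, mbstowcs, and _wputenv on MSVC */
    #include <stddef.h>   /* size_t */

    /* Hypothetical helper: set "NAME=value" from a narrow string using
       whatever the toolset actually provides. */
    inline int set_env_narrow(char* assignment)
    {
    #ifdef _MSC_VER
        /* _wputenv takes wchar_t*, not char*, so widen the string first. */
        wchar_t wide[1024];
        size_t n = mbstowcs(wide, assignment, 1023);
        if (n == (size_t)(-1))
            return -1;
        wide[n] = L'\0';  /* mbstowcs does not terminate on truncation */
        return _wputenv(wide);
    #else
        /* No wide variant is guaranteed elsewhere; fall back to putenv. */
        return putenv(assignment);
    #endif
    }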

This is part of a much larger problem... a quick count in the regression tests shows about 300 failures coming from the test library. Most of these are on compilers we usually consider rather broken, but we need someone to look at these failures and either fix them or mark them.
Actually, the majority of the failures come from tests for features that did not get into Boost.Test. They could be disabled for now. I will do that tonight. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
This is part of a much larger problem... a quick count in the regression tests shows about 300 failures coming from the test library. Most of these are on compilers we usually consider rather broken, but we need someone to look at these failures and either fix them or mark them.
Actually, the majority of the failures come from tests for features that did not get into Boost.Test. They could be disabled for now. I will do that tonight.
It would be nice if the Boost.Test library maintainer could avoid things that cause needless failures to appear in the tests during a release. I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens, and results in time wasted. -- Dave Abrahams Boost Consulting www.boost-consulting.com

It would be nice if the Boost.Test library maintainer could avoid things that cause needless failures to appear in the tests during a release.
1. They did not appear during the release.
2. These are tests for some advanced features used or about to be used by Boost.Test. They don't cause *any* errors outside of Boost.Test's own unit tests, and I do not see how Boost.Test is different in this regard from any other Boost library. I just have not had time lately to mark those failures as expected. Had there been any showstopper failures, I would have raised the issue long before.
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
and results in time wasted.
What results and in what time? Gennadiy

Gennadiy Rozental wrote:
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
I think the distress comes from not knowing that they are not required tests. During release, we assume that *all* tests are important. And most of us don't know enough about individual libraries to see if failing tests are important or not. So in this way all failures distress me equally, with the exception of failures in libraries, platforms, and toolsets I personally use. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - Grafik/jabber.org

"Rene Rivera" <grafik.list@redshift-software.com> wrote in message news:42938CE3.6050701@redshift-software.com...
Gennadiy Rozental wrote:
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
I think the distress comes from not knowing that they are not required tests. During release, we assume that *all* tests are important. And most of us don't know enough about individual libraries to see if failing tests are important or not.
In fact, the majority of failures come not from actual tests but from examples. I did not find a "proper" way for examples to show up in the regression test results, so I faked them as tests (compile-only rule). I think some kind of "test level" notion could be a good idea. We could have critical, feature-critical, and informational kinds of tests. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"Rene Rivera" <grafik.list@redshift-software.com> wrote in message news:42938CE3.6050701@redshift-software.com...
Gennadiy Rozental wrote:
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
I think the distress comes from not knowing that they are not required tests. During release, we assume that *all* tests are important. And most of us don't know enough about individual libraries to see if failing tests are important or not.
In fact, the majority of failures come not from actual tests but from examples. I did not find a "proper" way for examples to show up in the regression test results, so I faked them as tests (compile-only rule).
It seems clear to me that we should not be running any tests for examples that are doomed to fail because they use unimplemented features. Aside from what it does to the impression we all get of the health of CVS, it soaks up time on every test run.
I think some kind of "test level" notion could be a good idea. We may have critical, feature-critical, informational kind of tests.
Maybe so. In the meantime I could suggest using the "expected-failure" notation, but really I believe what I said above: there's no good reason to be running these tests if they have no chance of passing. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Gennadiy Rozental writes:
"Rene Rivera" <grafik.list@redshift-software.com> wrote in message news:42938CE3.6050701@redshift-software.com...
Gennadiy Rozental wrote:
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
I think the distress comes from not knowing that they are not required tests. During release, we assume that *all* tests are important. And most of us don't know enough about individual libraries to see if failing tests are important or not.
In fact, the majority of failures come not from actual tests but from examples. I did not find a "proper" way for examples to show up in the regression test results, so I faked them as tests (compile-only rule). I think some kind of "test level" notion could be a good idea. We could have critical, feature-critical, and informational kinds of tests.
You can employ test case categorization (http://article.gmane.org/gmane.comp.lib.boost.devel/124071/) to at least visually group the tests into categories along the above lines. -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
Gennadiy Rozental writes:
"Rene Rivera" <grafik.list@redshift-software.com> wrote in message news:42938CE3.6050701@redshift-software.com...
Gennadiy Rozental wrote:
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
I think the distress comes from not knowing that they are not required tests. During release, we assume that *all* tests are important. And most of us don't know enough about individual libraries to see if failing tests are important or not.
In fact, the majority of failures come not from actual tests but from examples. I did not find a "proper" way for examples to show up in the regression test results, so I faked them as tests (compile-only rule). I think some kind of "test level" notion could be a good idea. We could have critical, feature-critical, and informational kinds of tests.
You can employ test case categorization (http://article.gmane.org/gmane.comp.lib.boost.devel/124071/) to at least visually group the tests into categories along the above lines.
Have you guys written a short manual that explains how all these features work? ;-) Sorry to be coy, but how hard would that be? It really should be in an accessible place, no? -- Dave Abrahams Boost Consulting www.boost-consulting.com

On 5/24/05, Gennadiy Rozental <gennadiy.rozental@thomson.com> wrote:
It would be nice if the Boost.Test library maintainer could avoid things that cause needless failures to appear in the tests during a release.
1. They did not appear during the release.
These failures appeared well BEFORE the release, and the results have been across-the-board yellow or red for weeks now.
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
It's rather alarming to see a key component of Boost that fails on EVERY single compiler. Clearly the state of the library is not as dire as the test results might lead one to believe, but how would anyone know this from looking at all those red and yellow boxes?
and results in time wasted.
What results and in what time?
The time of people who run and check the test results and try to investigate these failures. I know I try to find and fix the odd failure from time to time, and I'm sure others do as well. -- Caleb Epstein caleb dot epstein at gmail dot com

Caleb Epstein <caleb.epstein@gmail.com> writes:
On 5/24/05, Gennadiy Rozental <gennadiy.rozental@thomson.com> wrote:
It would be nice if the Boost.Test library maintainer could avoid things that cause needless failures to appear in the tests during a release.
1. They did not appear during the release.
These failures appeared well BEFORE the release, and the results have been across-the-board yellow or red for weeks now.
I stand corrected.
I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens
Does it distress you any less when failures in Boost.<anything else> unit tests happen?
No.
It's rather alarming to see a key component of Boost that fails on EVERY single compiler. Clearly the state of the library is not as dire as the test results might lead one to believe, but how would anyone know this from looking at all those red and yellow boxes?
Right. This is part of what I'm getting at. I would like it if we could all be more sensitive to the cost to Boost of having any tests show up in a failing state.
and results in time wasted.
What results and in what time?
The time of people who run and check the test results and try to investigate these failures. I know I try to find and fix the odd failure from time to time, and I'm sure others do as well.
That's exactly the other part of what I'm getting at. For example, Doug had to think about this problem, wonder about its status, and make a post about it. -- Dave Abrahams Boost Consulting www.boost-consulting.com
participants (6)
- Aleksey Gurtovoy
- Caleb Epstein
- David Abrahams
- Doug Gregor
- Gennadiy Rozental
- Rene Rivera