
Gennadiy Rozental writes:
> "Rene Rivera" <grafik.list@redshift-software.com> wrote in message news:42938CE3.6050701@redshift-software.com...
>> Gennadiy Rozental wrote:
>> I think we've seen multiple times that this at least causes Boost developers and release managers distress when it happens.
>
> Does it distress you any less when failures in Boost.<anything else> unit tests happen?
>
>> I think the distress comes from not knowing that they are not required tests. During release, we assume that *all* tests are important, and most of us don't know enough about individual libraries to tell whether failing tests are important or not.
>
> In fact, the majority of failures come not from actual tests but from examples. I did not find a "proper" way for examples to show up on the regression test page, so I faked them as tests (with a compile-only rule). I think some kind of "test level" notion could be a good idea. We might have critical, feature-critical, and informational kinds of tests.
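[Editor's note: the compile-only workaround described above can be sketched as a Boost.Build Jamfile fragment. This is only an illustration under stated assumptions: the example file names are hypothetical, and the `compile` rule is the one provided by Boost.Build's `testing` module.]

```jam
# Sketch of a test Jamfile that registers examples as compile-only
# "tests" so that build failures surface on the regression page.
# File names below are hypothetical.
import testing ;

compile ../example/basic_usage.cpp ;     # built, never run
compile ../example/advanced_usage.cpp ;  # built, never run
```

A `compile` target passes as long as the translation unit compiles, so a broken example shows up as a failure without pretending to exercise any runtime behavior.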
You can employ test case categorization (http://article.gmane.org/gmane.comp.lib.boost.devel/124071/) to at least visually group the tests into categories along the above lines.

--
Aleksey Gurtovoy
MetaCommunications Engineering