
Most of the bigger infrastructure issues that were getting in the way have now been solved. The tarballs are working again, the missing files in the release branch have been found, and both trunk and release branch regression reporting is cycling smoothly.

There are still some outstanding testing issues, but they are at the level of individual test platforms rather than the whole testing system. Both developers and patch submitters have been active, so it isn't like we are starting from scratch, but the emphasis for release management is shifting to focus on reducing test failures in individual libraries.

Looking at the regression test results, I'd like to call attention to these failures:

conversion: lexical_cast_loopback_test on many platforms
graph: csr_graph_test on many platforms
python: import_ on many platforms
range: iterator_range and sub_range on many platforms
typeof: experimental_* on many platforms

These are particularly worrisome from the release management standpoint because they affect many platforms and because I'm not seeing any attempts by their developers to fix or mark them up.

For many of the other failures that affect a lot of key platforms, the developers are actively committing fixes on a regular basis, so I assume these failures will be fixed or marked up in the near future.

I'll be traveling Thursday through Tuesday, and will start moving libraries to the release branch when I get back.

--Beman

Beman Dawes wrote:
Most of the bigger infrastructure issues that were getting in the way have now been solved. The tarballs are working again, the missing files in the release branch have been found, and both trunk and release branch regression reporting is cycling smoothly.
Thx all for the effort to solve these issues. Where can we find the release branch regressions? We really need a link on the Boost frontpage so I don't keep asking this question... Jeff

Jeff Garland wrote:
Thx all for the effort to solve these issues. Where can we find the release branch regressions? We really need a link on the Boost frontpage so I don't keep asking this question...
http://beta.boost.org/development/tests/release/developer/summary.html, which can be found from http://beta.boost.org/development/testing.html. But I agree, this should be put on the front page. Markus

Markus Schöpflin wrote:
Jeff Garland wrote:
Thx all for the effort to solve these issues. Where can we find the release branch regressions? We really need a link on the Boost frontpage so I don't keep asking this question...
http://beta.boost.org/development/tests/release/developer/summary.html, which can be found from http://beta.boost.org/development/testing.html. But I agree, this should be put on the front page.
It's now available in the side bar of every page under <http://beta.boost.org/development>, so it should be easy to find ;-)

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Beman Dawes <bdawes@acm.org> writes:
conversion: lexical_cast_loopback_test on many platforms
I believe those failures should be marked as expected for these toolsets:

- HP-UX_pa_risc_gcc (gcc-3.4.2)
- HP-UX_ia64_gcc (gcc-4.2.1)
- Caleb Epstein SunOS-5.10 (gcc-4.1.2_sunos_i86pc)
- Sandia-sun (sun-5.8, sun-5.7 and sun-5.9)
- Huang-WinXP-x86_32 (msvc-8.0)
- RudbekAssociates-V2 (msvc-7.1 and msvc-8.0)
- siliconman (borland-5.8.2 and borland-5.9.2)
- bcbboost-l (borland-5.8.2)
- Huang-Vista-x64 (msvc-8.0_64 and msvc-8.0_x86_64)
- bcbboost (borland-5.9.2)
- HP-UX_aCC_PA-RISC (acc-pa_risc)

I'm not sure about

- bcbboost-l (borland-5.6.4)
- siliconman (borland-5.6.4)

which report:

    unknown location(0): fatal error in "test_round_conversion_long_double":
    exponent of a floating-point operation is greater than the magnitude
    allowed by the corresponding type
    ..\libs\conversion\test\lexical_cast_loopback_test.cpp(47): last checkpoint

but it's very likely that these two should be marked as expected as well. This set covers all currently failing lexical_cast_loopback_test results.

-- Alexander
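For context, a loopback test round-trips a value through a string and back, and the failure quoted above involves extreme long double values. A minimal sketch of the idea (an illustration only, not the actual Boost test source, which uses the Boost.Test framework rather than assert):

    #include <boost/lexical_cast.hpp>
    #include <cassert>
    #include <limits>
    #include <string>

    // Loopback check: value -> string -> value should be an exact round trip.
    template <typename T>
    void test_round_conversion(T value)
    {
        const std::string s = boost::lexical_cast<std::string>(value);
        assert(boost::lexical_cast<T>(s) == value);
    }

    int main()
    {
        // On the affected compilers, reading the printed representation of
        // numeric_limits<long double>::max() back in overflows, which is what
        // the "exponent ... greater than the magnitude allowed" error reports.
        test_round_conversion(std::numeric_limits<long double>::max());
        test_round_conversion(-std::numeric_limits<long double>::max());
    }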

Alexander Nasonov wrote:
Beman Dawes <bdawes@acm.org> writes:
conversion: lexical_cast_loopback_test on many platforms
I believe those failures should be marked as expected for these toolsets:
- HP-UX_pa_risc_gcc (gcc-3.4.2)
- HP-UX_ia64_gcc (gcc-4.2.1)
- Caleb Epstein SunOS-5.10 (gcc-4.1.2_sunos_i86pc)
- Sandia-sun (sun-5.8, sun-5.7 and sun-5.9)
- Huang-WinXP-x86_32 (msvc-8.0)
- RudbekAssociates-V2 (msvc-7.1 and msvc-8.0)
- siliconman (borland-5.8.2 and borland-5.9.2)
- bcbboost-l (borland-5.8.2)
- Huang-Vista-x64 (msvc-8.0_64 and msvc-8.0_x86_64)
- bcbboost (borland-5.9.2)
- HP-UX_aCC_PA-RISC (acc-pa_risc)

I'm not sure about

- bcbboost-l (borland-5.6.4)
- siliconman (borland-5.6.4)

which report:

    unknown location(0): fatal error in "test_round_conversion_long_double":
    exponent of a floating-point operation is greater than the magnitude
    allowed by the corresponding type
    ..\libs\conversion\test\lexical_cast_loopback_test.cpp(47): last checkpoint
but it's very likely that these two should be marked as expected as well.
This set covers all currently failing lexical_cast_loopback_test results.
OK. You should go ahead and do the markup. For those new to the markup system, it is done by editing boost-root/status/explicit-failures-markup.xml. The format is more or less self-explanatory. --Beman
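For concreteness, an expected-failure entry in that file looks roughly like the following sketch (the library, test, and toolset names are taken from this thread; the note text is a placeholder, not actual markup from the repository):

    <explicit-failures-markup>
        <library name="conversion">
            <mark-expected-failures>
                <test name="lexical_cast_loopback_test"/>
                <toolset name="borland-5.8.2"/>
                <toolset name="borland-5.9.2"/>
                <note author="Alexander Nasonov">
                    Round-tripping extreme long double values overflows on
                    this compiler; see the mailing list discussion.
                </note>
            </mark-expected-failures>
        </library>
    </explicit-failures-markup>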

Beman Dawes wrote:
Looking at the regression test results, I'd like to call attention to these failures:
[...]
python: import_ on many platforms
[...]
These are particularly worrisome from the release management standpoint because they affect many platforms and because I'm not seeing any attempts by their developers to fix or mark them up.
<rant>
I wrote the above import_ test (as well as the features tested therein) prior to the 1.34 release, and made sure the test passed everywhere. I have not touched the code since then. I have no idea what causes the current failures. (It's quite likely some runtime setting, such as one controlled by the build system executing the tests.)

However, I see a *lot* of changesets that are non-local by nature, such as modifications to the build system, the test infrastructure, etc. How do you expect this ever to stabilize if such changes keep coming in?

The situation is basically the same now as it was a year ago, and it is what led me to suggest keeping such infrastructure bits outside of Boost: to let the Boost community focus on the code, and to make sure the environment within which we develop isn't changing at the same time as the code. It seems these points are as acute as ever. :-(
</rant>

Regards, Stefan

-- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Beman Dawes wrote:
Looking at the regression test results, I'd like to call attention to these failures:
[...]
python: import_ on many platforms
[...]
These are particularly worrisome from the release management standpoint because they affect many platforms and because I'm not seeing any attempts by their developers to fix or mark them up.
<rant> I wrote the above import_ test (as well as the features tested therein) prior to the 1.34 release, and made sure the test passed everywhere. I have not touched the code since then.
I have no idea what causes the current failures. (It's quite likely some runtime setting, such as one controlled by the build system executing the tests.) However, I see a *lot* of changesets that are non-local by nature, such as modifications to the build system
Like, ehm, what? Revision numbers please. The only change I see is Rene's change to make things not fail hard if you don't have python configured. All the other changes in tools/build/v2 are either documentation fixes, or fixes in Boost.Build tests, or extra debug prints. - Volodya
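"Having Python configured" here means the tester has declared a Python toolset in their user-config.jam, along these lines (the version and path below are illustrative, not taken from any particular test runner):

    # user-config.jam: declare the Python installation Boost.Build should
    # use when building and running the Boost.Python tests.
    using python : 2.5 : /usr/bin/python ;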

Beman Dawes wrote:
Looking at the regression test results, I'd like to call attention to these failures:
conversion: lexical_cast_loopback_test on many platforms
graph: csr_graph_test on many platforms
python: import_ on many platforms
range: iterator_range and sub_range on many platforms
I'm working on the range test now. The problem you mention should be fixed. -Thorsten

2007/10/23, Beman Dawes <bdawes@acm.org>:
Most of the bigger infrastructure issues that were getting in the way have now been solved. The tarballs are working again, the missing files in the release branch have been found, and both trunk and release branch regression reporting is cycling smoothly.
There are still some outstanding testing issues, but they are at the level of individual test platforms rather than the whole testing system.
Both developers and patch submitters have been active, so it isn't like we are starting from scratch, but the emphasis for release management is shifting to focus on reducing test failures in individual libraries.
Looking at the regression test results, I'd like to call attention to these failures:
conversion: lexical_cast_loopback_test on many platforms
graph: csr_graph_test on many platforms
python: import_ on many platforms
range: iterator_range and sub_range on many platforms
typeof: experimental_* on many platforms
I added the experimental_* tests temporarily a couple of weeks ago and removed them from svn after a few days, but the regression test system does not automatically check to see if a test has been deleted, so they remain as ghost results. Is there a common way to deal with this problem?

Peder

These are particularly worrisome from the release management standpoint
because they affect many platforms and because I'm not seeing any attempts by their developers to fix or mark them up.
For many of the other failures that affect a lot of key platforms, the developers are actively committing fixes on a regular basis, so I assume these failures will be fixed or marked up in the near future.
I'll be traveling Thursday through Tuesday, and will start moving libraries to the release branch when I get back.
--Beman

Peder Holt wrote:
2007/10/23, Beman Dawes <bdawes@acm.org>:
typeof: experimental_* on many platforms
I added the experimental_* tests temporarily a couple of weeks ago and removed them from svn after a few days, but the regression test system does not automatically check to see if a test has been deleted, so they remain as ghost results. Is there a common way to deal with this problem?
Yes. Post to the testing list explaining that you changed the set of tests, and ask testers to delete the now-invalid tests, pointing as precisely as you can to the directories that need to be deleted. Please put "incremental" in the subject.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
participants (9)

- Alexander Nasonov
- Beman Dawes
- Jeff Garland
- Markus Schöpflin
- Peder Holt
- Rene Rivera
- Stefan Seefeld
- Thorsten Ottosen
- Vladimir Prus