[Report] 79 regressions on RC_1_34_0 (2007-03-22)

Boost Regression test failures
Report time: 2007-03-22T00:20:02Z

This report lists all regression test failures on release platforms.

Detailed report: http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/is...

79 failures in 4 libraries: iostreams (6), optional (6), parameter (1), python (66)

|iostreams|
bzip2_test: msvc-7.1 msvc-8.0
gzip_test: msvc-7.1 msvc-8.0
zlib_test: msvc-7.1 msvc-8.0

|optional|
optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0
optional_test_ref_fail2: msvc-7.1 msvc-8.0

|parameter|
python_test: gcc-cygwin-3.4.4

|python|
bases: gcc-4.1.1_sunos_i86pc
builtin_converters: gcc-3.4.5_linux gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64
crossmod_exception: gcc-3.2.3_linux gcc-3.3.6_linux gcc-3.4.5_linux gcc-3.4.5_linux_x86_64 gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64 intel-linux-9.0
crossmod_opaque: gcc-3.2.3_linux gcc-3.3.6_linux gcc-3.4.5_linux gcc-3.4.5_linux_x86_64 gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64 intel-linux-9.0
exec: gcc-4.1.1_sunos_i86pc
exec-dynamic: gcc-3.4.5_linux gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64 gcc-4.1.1_sunos_i86pc
import_: cw-9.4 gcc-4.1.0_linux_x86_64 gcc-mingw-3.4.2 gcc-mingw-3.4.5 intel-vc71-win-9.1 msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 msvc-7.1 msvc-7.1 msvc-7.1 msvc-7.1_stlport4 msvc-8.0 msvc-8.0 msvc-8.0
iterator: gcc-3.2.3_linux gcc-3.3.6_linux gcc-3.4.5_linux gcc-3.4.5_linux_x86_64 gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64 intel-linux-9.0
map_indexing_suite: gcc-3.4.5_linux gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64
pointee: gcc-4.1.1_sunos_i86pc
pointer_type_id_test: gcc-4.1.1_sunos_i86pc
try: gcc-3.2.3_linux gcc-3.3.6_linux gcc-3.4.5_linux gcc-3.4.5_linux_x86_64 gcc-4.0.3_linux gcc-4.1.0_linux gcc-4.1.0_linux_x86_64 intel-linux-9.0
upcast: gcc-4.1.1_sunos_i86pc

Douglas Gregor wrote:
|python| bases: gcc-4.1.1_sunos_i86pc [... the remaining python failures snipped; see the full report above]
I didn't find any answer when I asked the last time, so I'm asking again: What is the meaning of the absolute number of 'regressions'? Did this number really go up from the last report to the current one? At least some 'new' ones stem from the inclusion of the 'gcc-4.1.1_sunos_i86pc' test run in this report, which wasn't present in the last.

What determines the test runs that make it into a report? Is this sunos platform really a primary platform for this release? Why wasn't it tested before?

How are we ever going to get the number of unexpected failures down to zero? I honestly don't believe it will ever happen if we continue like that. :-(

May I suggest fixing a number of 'primary platforms' (and that may well translate to specific testers, at this point in the release process), and just disregarding anything else?

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
I didn't find any answer when I asked the last time, so I'm asking again:
What is the meaning of the absolute number of 'regressions'?
Tells us how many regressions there are?
Did this number really go up from the last report to the current one?
I don't know.
At least some 'new' ones stem from the inclusion of the 'gcc-4.1.1_sunos_i86pc' test run in this report, which wasn't present in the last.
Hmm.
What determines the test runs that make it into a report? Is this sunos platform really a primary platform for this release? Why wasn't it tested before?
How are we ever going to get the number of unexpected failures down to zero? I honestly don't believe it will ever happen if we continue like that. :-(
Well, things look really bad for python because I've been working on the BBv2 configuration support for it. I think that was fixed yesterday, and I hope the next report will look a lot better.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
I didn't find any answer when I asked the last time, so I'm asking again:
What is the meaning of the absolute number of 'regressions'?
Tells us how many regressions there are?
What is the reference point? The last report? The last release?
Did this number really go up from the last report to the current one?
I don't know.
But this is at the heart of the question: if the simple addition of a new test run changes the number of failures, how can this number possibly represent regressions? Does anybody even know what test runs were accounted for at the point the last release was done?

What I'm trying to say is that, instead of just picking up whatever results are found by the report generator, an explicit list of test runners should be chosen as 'primary platforms', so that we can measure progress and extrapolate into the future. Right now it is hard to tell whether there was any progress at all over the last couple of months.

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
David Abrahams wrote:
on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
I didn't find any answer when I asked the last time, so I'm asking again:
What is the meaning of the absolute number of 'regressions'?
Tells us how many regressions there are?
What is the reference point? The last report? The last release?
It's supposed to be the last release, IIUC.
Did this number really go up from the last report to the current one?
I don't know.
But this is at the heart of the question: if the simple addition of a new test run changes the number of failures, how can this number possibly represent regressions? Does anybody even know what test runs were accounted for at the point the last release was done?
IIUC, yes.
What I'm trying to say is that, instead of just picking up whatever results are found by the report generator, an explicit list of test runners should be chosen as 'primary platforms', so that we can measure progress and extrapolate into the future.
Right now it is hard to tell whether there was any progress at all over the last couple of months.
Yep.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
David Abrahams wrote:
on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
I didn't find any answer when I asked the last time, so I'm asking again:
What is the meaning of the absolute number of 'regressions'? Tells us how many regressions there are? What is the reference point? The last report? The last release?
It's supposed to be the last release, IIUC.
Did this number really go up from the last report to the current one? I don't know. But this is at the heart of the question: if the simple addition of a new test run changes the number of failures, how can this number possibly represent regressions? Does anybody even know what test runs were accounted for at the point the last release was done?
IIUC, yes.
I believe part of my confusion stems from the fact that the status report labels all 79 failures as 'regressions', while the HTML report marks the majority of them yellow, i.e. as "Failure on a newly added test/compiler."

So which one is it?

Regards,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

on Fri Mar 23 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
I believe part of my confusion stems from the fact that the status report labels all 79 failures as 'regressions', while the HTML report marks the majority of them yellow, i.e. as "Failure on a newly added test/compiler."
So which one is it?
Oh, well, good point. I don't know who's in charge of this stuff, much less the answer to your question. :(

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Stefan Seefeld wrote:
David Abrahams wrote:
on Thu Mar 22 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
I didn't find any answer when I asked the last time, so I'm asking again:
What is the meaning of the absolute number of 'regressions'? Tells us how many regressions there are?
What is the reference point? The last report? The last release?
The last release.
Did this number really go up from the last report to the current one? I don't know.
But this is at the heart of the question: if the simple addition of a new test run changes the number of failures, how can this number possibly represent regressions?
In essence, you are shooting the messenger.
Right now it is hard to tell whether there was any progress at all over the last couple of months.
Agreed, you can't do that from the number of REPORTED regressions.

Thomas

--
Thomas Witt
witt@acm.org

Stefan,

Stefan Seefeld wrote:
Douglas Gregor wrote:
I didn't find any answer when I asked the last time, so I'm asking again:
FWIW I tried to answer the question before.
What is the meaning of the absolute number of 'regressions'? Did this number really go up from the last report to the current one? At least some 'new' ones stem from the inclusion of the 'gcc-4.1.1_sunos_i86pc' test run in this report, which wasn't present in the last.
In isolation, that number isn't worth much. That being said, I am a little puzzled by your fixation on this number. I think that the number, together with the regression tables, provides valuable information. It just requires some interpretation.
What determines the test runs that make it into a report?
Availability. We rely on volunteers to provide test results.
Is this sunos platform really a primary platform for this release? Why wasn't it tested before?
Yes. At the point the platform was chosen, we had frequent results. Testers drop in and out for all kinds of reasons: hardware, vacation, job, configuration. It's hard to blame a volunteer for any of this, but in the end we have a major reliability and turnaround issue. I would drop the platform right away, but in this case we are dealing with a general python issue; dropping the platform won't make it go away.
How are we ever going to get the number of unexpected failures down to zero?
Fixing bugs?
I honestly don't believe it will ever happen if we continue like that. :-(
Well, in some way we have already given up. The only thing we are waiting for is the python stuff getting into shape.
May I suggest fixing a number of 'primary platforms' (and that may well translate to specific testers, at this point in the release process), and just disregarding anything else?
Personally, I think we'll have to do something like this in the future. Right now we don't have the infrastructure in place to do it, and the last thing I want to do at this point is fiddle with the regression test infrastructure.

Thomas

--
Thomas Witt
witt@acm.org

Stefan Seefeld wrote:
What is the meaning of the absolute number of 'regressions'? Did this number really go up from the last report to the current one? At least some 'new' ones stem from the inclusion of the 'gcc-4.1.1_sunos_i86pc' test run in this report, which wasn't present in the last.
Didn't we say we don't add more toolsets to the RC tests?

Regards,
m

Martin Wille wrote:
Didn't we say we don't add more toolsets to the RC tests?
True, and nobody did. This toolset was always part of the release set; it's just that nobody submitted test results.

I compiled a small list of RC toolsets for which we have had no active testers yet. This list is from February:

* hp_cxx-71_006_tru64
* sun-5.8
* darwin-4.0.1
x gcc-3_4_4_tru64
x gcc-4_0_3_tru64
x gcc-3.4.3_sunos
* gcc-4.1.1_sunos_i86pc

Roland

Roland Schwarz wrote:
I compiled a small list of RC toolsets for which we have had no active testers yet.
This list is from February:
* hp_cxx-71_006_tru64
??? How did you compile this list? The above toolset has been under continuous testing on the RC branch for nearly a year now.
x gcc-3_4_4_tru64
x gcc-4_0_3_tru64
Those were tested on the RC branch until two or three months ago. I had to disable these configurations because I don't have enough resources to test all of them.

Markus

Markus Schöpflin wrote:
Roland Schwarz wrote:
I compiled a small list of RC toolsets for which we have had no active testers yet.
This list is from February:
* hp_cxx-71_006_tru64
???
How did you compile this list?
I looked at "explicit-failures-markup.xml":

<mark-toolset name="borland-5.6.4" status="required"/>
<mark-toolset name="borland-5.8.2" status="required"/>
<mark-toolset name="cw-9.4" status="required"/>
<mark-toolset name="msvc-6.5" status="required"/>
<mark-toolset name="msvc-6.5_stlport4" status="required"/>
<mark-toolset name="msvc-7.0" status="required"/>
<mark-toolset name="msvc-7.1_stlport4" status="required"/>
<mark-toolset name="msvc-7.1" status="required"/>
<mark-toolset name="msvc-8.0" status="required"/>
<mark-toolset name="gcc-mingw-3.4.2" status="required"/>
<mark-toolset name="gcc-mingw-3.4.5" status="required"/>
<mark-toolset name="gcc-3.3.6" status="required"/>
<mark-toolset name="gcc-cygwin-3.4.4" status="required"/>
<mark-toolset name="gcc-3.2.3_linux" status="required"/>
<mark-toolset name="gcc-3.3.6_linux" status="required"/>
<mark-toolset name="gcc-3.4.5_linux" status="required"/>
<mark-toolset name="gcc-3.4.5_linux" status="required"/>
<mark-toolset name="gcc-4.0.3_linux" status="required"/>
<mark-toolset name="gcc-4.1.0_linux" status="required"/>
<mark-toolset name="gcc-3.4.5_linux_x86_64" status="required"/>
<mark-toolset name="gcc-4.1.0_linux_x86_64" status="required"/>
<mark-toolset name="darwin-4.0.1" status="required"/>
<mark-toolset name="intel-vc71-win-9.1" status="required"/>
<mark-toolset name="intel-linux-9.0" status="required"/>
<mark-toolset name="hp_cxx-71_006_tru64" status="required"/>
<mark-toolset name="sun-5.8" status="required"/>
<mark-toolset name="gcc-4.1.1_sunos_i86pc" status="required"/>

Then I looked at which compilers are visible on the (release) regression pages, and calculated the missing set.

Roland
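The comparison itself is just a set difference between the required toolsets above and the toolsets visible on the regression pages. A minimal sketch of that computation, with short hand-picked lists standing in for the real inputs (the program and the abridged entries are illustrative only, not part of the regression tooling):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <string>

int main() {
    // Toolsets marked status="required" in explicit-failures-markup.xml
    // (abridged; the full list is quoted above).
    std::set<std::string> required;
    required.insert("hp_cxx-71_006_tru64");
    required.insert("sun-5.8");
    required.insert("msvc-8.0");
    required.insert("gcc-4.1.1_sunos_i86pc");

    // Toolsets actually visible on the release regression pages (abridged).
    std::set<std::string> visible;
    visible.insert("msvc-8.0");

    // required \ visible = required toolsets with no submitted results.
    std::set_difference(required.begin(), required.end(),
                        visible.begin(), visible.end(),
                        std::ostream_iterator<std::string>(std::cout, "\n"));
    return 0;
}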

Roland Schwarz wrote:
Markus Schöpflin schrieb:
[...]
How did you compile this list?
I looked at "explicit-failures-markup.xml":
[...]
Then I looked at which compilers are visible on the (release) regression pages, and calculated the missing set.
Sounds reasonable, but the gcc-*-tru64 configurations don't appear in the list you posted (from explicit-failures-markup.xml), and hp_cxx-71_006_tru64 is visible on the release regression pages.

Markus

on Fri Mar 23 2007, Roland Schwarz <roland.schwarz-AT-chello.at> wrote:
Martin Wille wrote:
Didn't we say we don't add more toolsets to the RC tests?
True, and nobody did. This toolset was always part of the release set; it's just that nobody submitted test results.
I compiled a small list of RC toolsets for which we have had no active testers yet.
This list is from February:
* hp_cxx-71_006_tru64
* sun-5.8
* darwin-4.0.1
x gcc-3_4_4_tru64
x gcc-4_0_3_tru64
x gcc-3.4.3_sunos
* gcc-4.1.1_sunos_i86pc
IMO we should immediately remove from the list any that have no testers, as we can't afford the cost of a new tester coming online at this point.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

On 3/22/07, Douglas Gregor <dgregor@osl.iu.edu> wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
This report lists all regression test failures on release platforms.
Detailed report: http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/is...
79 failures in 4 libraries: iostreams (6), optional (6), parameter (1), python (66)
|iostreams| bzip2_test: msvc-7.1 msvc-8.0 gzip_test: msvc-7.1 msvc-8.0 zlib_test: msvc-7.1 msvc-8.0
|optional| optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 optional_test_ref_fail2: msvc-7.1 msvc-8.0
|parameter| python_test: gcc-cygwin-3.4.4
|python| bases: gcc-4.1.1_sunos_i86pc [... the remaining python failures snipped; see the full report above]
It appears that the signals trackable_test.cpp is failing to report that, under GCC 4.1 and later AFAIK, boost::signals::trackable does not work. Something about the test case must be avoiding this failure. If you apply the patch below, which is a bit simpler than the test code already there, the test fails:

Index: trackable_test.cpp
===================================================================
RCS file: /cvsroot/boost/boost/libs/signals/test/trackable_test.cpp,v
retrieving revision 1.9
diff -u -r1.9 trackable_test.cpp
--- trackable_test.cpp  30 Sep 2006 13:46:07 -0000  1.9
+++ trackable_test.cpp  23 Mar 2007 20:12:32 -0000
@@ -15,6 +15,11 @@
   ~short_lived() {}
 };

+struct short_lived_2 : public boost::BOOST_SIGNALS_NAMESPACE::trackable {
+  ~short_lived_2() {}
+  int f() { return 1; }
+};
+
 struct swallow {
   template<typename T> int operator()(const T*, int i) { return i; }
 };
@@ -51,6 +56,18 @@
   }
   BOOST_CHECK(s1(5) == 0);

+  // Test auto-disconnection, part 2
+  int value = 0;
+  BOOST_CHECK(value == 0);
+  {
+    short_lived_2 *shorty_2 = new short_lived_2;
+    boost::signal0<int> s2;
+    s2.connect(boost::bind(&short_lived_2::f, shorty_2));
+    delete shorty_2;
+    value = s2();
+  }
+  BOOST_CHECK(value == 0);
+
   // Test auto-disconnection of slot before signal connection
   {
     short_lived* shorty = new short_lived();

Note that this is not a regression per se. Trackable seems to have this problem under GCC 4.1 for versions 1.32, 1.33.1, HEAD, and RC_1_34_0. Even so, it would be a shame if the next release did not contain a fix or workaround of some kind.

Zach Laine
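For anyone who wants to poke at this outside the Boost test harness, here is a minimal standalone sketch of the same scenario. It is a reconstruction, not part of the patch; it assumes the Signals v1 interface (boost::signals::trackable, boost::signal0, num_slots()) and inspects the slot count rather than invoking the possibly-empty signal:

#include <boost/signal.hpp>
#include <boost/signals/trackable.hpp>
#include <boost/bind.hpp>
#include <iostream>

// Deleting a trackable object should automatically disconnect any
// slots bound to it.
struct short_lived_2 : public boost::signals::trackable {
    int f() { return 1; }
};

int main() {
    boost::signal0<int> s2;
    short_lived_2* shorty_2 = new short_lived_2;
    s2.connect(boost::bind(&short_lived_2::f, shorty_2));
    delete shorty_2; // tracking should disconnect the slot here

    // Expect 0 if auto-disconnection works; 1 indicates the bug.
    std::cout << "slots still connected: " << s2.num_slots() << std::endl;
    return 0;
}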

on Fri Mar 23 2007, "Zach Laine" <whatwasthataddress-AT-gmail.com> wrote:
On 3/22/07, Douglas Gregor <dgregor@osl.iu.edu> wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
This report lists all regression test failures on release platforms.
Detailed report: http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/is...
79 failures in 4 libraries: iostreams (6)
<snip> Please try to avoid overquoting. Also, it's a good idea to put the name of the library concerned in the subject line, or its maintainer is likely to miss the message. And it's a good idea to submit patches to the SF patch tracker, or they are likely to slip away.
It appears that the signals trackable_test.cpp is failing to report that, under GCC 4.1 and later AFAIK, boost::signals::trackable does not work. Something about the test case must be avoiding this failure. If you apply the patch below, which is a bit simpler than the test code already there, the test fails:
[patch snipped; see Zach's message above]
Note that this is not a regression per se. Trackable seems to have this problem under GCC 4.1 for versions 1.32, 1.33.1, HEAD, and RC_1_34_0. Even so, it would be a shame if the next release did not contain a fix or workaround of some kind.
Zach Laine
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

On Friday 23 March 2007 16:57 pm, David Abrahams wrote:
on Fri Mar 23 2007, "Zach Laine" <whatwasthataddress-AT-gmail.com> wrote:
+  // Test auto-disconnection, part 2
+  int value = 0;
+  BOOST_CHECK(value == 0);
+  {
+    short_lived_2 *shorty_2 = new short_lived_2;
+    boost::signal0<int> s2;
+    s2.connect(boost::bind(&short_lived_2::f, shorty_2));
+    delete shorty_2;
+    value = s2();
+  }
+  BOOST_CHECK(value == 0);
+
It looks to me like this test has undefined behaviour. That is, it is calling last_value with a non-void return type and no slots connected (assuming tracking is working).

--
Frank
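If the test wants to keep invoking the signal, one way around that undefined behaviour is to guard the call. A sketch, assuming Signals v1's empty() reports whether any slot is still connected (the standalone wrapper and the assert are mine, not from the patch):

#include <boost/signal.hpp>
#include <boost/signals/trackable.hpp>
#include <boost/bind.hpp>
#include <cassert>

struct short_lived_2 : public boost::signals::trackable {
    int f() { return 1; }
};

int main() {
    int value = 0;
    {
        boost::signal0<int> s2;
        short_lived_2* shorty_2 = new short_lived_2;
        s2.connect(boost::bind(&short_lived_2::f, shorty_2));
        delete shorty_2; // tracking should disconnect the slot

        // Calling a signal with a non-void return type and zero slots
        // is undefined (last_value<int> has nothing to return), so guard:
        if (!s2.empty())   // never true if tracking works
            value = s2();
    }
    assert(value == 0);    // holds whether or not the guard fired
    return 0;
}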

On 3/23/07, Frank Mori Hess <frank.hess@nist.gov> wrote:
On Friday 23 March 2007 16:57 pm, David Abrahams wrote:
on Fri Mar 23 2007, "Zach Laine" <whatwasthataddress-AT-gmail.com> wrote:
[quoted test code snipped; see the patch above]
It looks to me like this test has undefined behaviour? That is, it is calling last_value with a non-void return value with no slots connected (assuming tracking is working).
Good point. I tried to modify the test so that it throws in short_lived_2::f() instead, but that seems to work fine. I tried a standalone test program with 1.33.1, HEAD, and RC_1_34_0_freeze, and only 1.33.1 seems to show the problem I was trying to demonstrate. Sorry for the noise.

Zach Laine

On Fri, 2007-03-23 at 17:02 -0500, Zach Laine wrote:
Good point. I tried to modify the test so that it throws in short_lived_2::f() instead, but that seems to work fine. I tried a standalone test program with 1.33.1, HEAD, and RC_1_34_0_freeze, and only 1.33.1 seems to show the problem I was trying to demonstrate.
That's a *big* relief.

Cheers,
Doug

Zach Laine wrote:
On 3/22/07, Douglas Gregor <dgregor@osl.iu.edu> wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
This report lists all regression test failures on release platforms.
Detailed report: http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/is...
79 failures in 4 libraries: iostreams (6), optional (6), parameter (1), python (66)
|iostreams| bzip2_test: msvc-7.1 msvc-8.0 gzip_test: msvc-7.1 msvc-8.0 zlib_test: msvc-7.1 msvc-8.0
|optional| optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 optional_test_ref_fail2: msvc-7.1 msvc-8.0
I think that optional_test_ref_fail2 has been removed from the list of tests, so these two can be ignored.

Joe Gottman

Joe Gottman wrote:
Zach Laine wrote:
On 3/22/07, Douglas Gregor <dgregor@osl.iu.edu> wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
This report lists all regression test failures on release platforms.
Detailed report: http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/is...
|optional| optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 optional_test_ref_fail2: msvc-7.1 msvc-8.0
I think that optional_test_ref_fail2 has been removed from the list of tests, so these two can be ignored.
Or better, make the test suite ignore them.

Fernando claimed to have fixed the optional_test failure, and to have removed the optional_test_ref_fail2 test completely, a few *weeks* ago, but they still show up. How can that be? Something is fishy here... Was he wrong?

----- Original Message -----
From: Yuval Ronen <ronen_yuval@yahoo.com>
Date: Saturday, March 24, 2007 12:16 pm
Subject: Re: [boost] [Report] 79 regressions on RC_1_34_0 (2007-03-22)
To: boost@lists.boost.org
Joe Gottman wrote:
Zach Laine wrote:
On 3/22/07, Douglas Gregor <dgregor@osl.iu.edu> wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
This report lists all regression test failures on release platforms.
Detailed report: http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/issues.html
|optional| optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 optional_test_ref_fail2: msvc-7.1 msvc-8.0
I think that optional_test_ref_fail2 has been removed from the list of tests, so these two can be ignored.
Or better, make the test suite ignore them.
Fernando claimed to have fixed the optional_test failure, and to have removed the optional_test_ref_fail2 test completely, a few *weeks* ago, but they still show up. How can that be? Something is fishy here... Was he wrong?
In this post: http://lists.boost.org/Archives/boost/2007/03/117750.php Fernando suggests that the optional_test regressions for MSVC 6.5/7.0 be marked as expected failures, but this hasn't been done yet. I can take care of it next Monday unless someone does it first.

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo

Douglas Gregor wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
|optional| optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 optional_test_ref_fail2: msvc-7.1 msvc-8.0
And another weird thing caught my eye: "msvc-6.5" shows up twice for "optional_test". Does this indicate a bug in the test system?

Yuval Ronen wrote:
Douglas Gregor wrote:
Boost Regression test failures Report time: 2007-03-22T00:20:02Z
|optional| optional_test: msvc-6.5 msvc-6.5 msvc-6.5_stlport4 msvc-7.0 optional_test_ref_fail2: msvc-7.1 msvc-8.0
And another weird thing caught my eye: "msvc-6.5" shows up twice for "optional_test". Does this indicate a bug in the test system?
The report generator aggregates test runs from whoever happens to run tests, no matter whether that toolchain/platform is already tested by someone else. There is apparently no way at present to figure out whether those results should be fused or not (say, because there are other, hidden parameters).

Another SoC project, anyone?

Regards,
Stefan

--
...ich hab' noch einen Koffer in Berlin...
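A sketch of the distinction Stefan describes: keyed by toolset name alone, the two msvc-6.5 runs would collide; keyed by (toolset, runner) they stay separate, which is presumably how the same toolset can be listed twice for one test. The runner names and data below are made up for illustration:

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>

int main() {
    // Hypothetical results: (toolset, runner) -> test passed?
    // Two different runners both happen to test msvc-6.5.
    std::map<std::pair<std::string, std::string>, bool> results;
    results[std::make_pair("msvc-6.5", "runner-A")] = false;
    results[std::make_pair("msvc-6.5", "runner-B")] = false;

    // Collapsing to the toolset name alone loses the distinction.
    std::set<std::string> toolsets;
    std::map<std::pair<std::string, std::string>, bool>::const_iterator it;
    for (it = results.begin(); it != results.end(); ++it)
        toolsets.insert(it->first.first);

    std::cout << results.size() << " failing runs, "
              << toolsets.size() << " distinct toolset name(s)\n";
    return 0;
}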
participants (13)

- "JOAQUIN LOPEZ MUÑOZ"
- David Abrahams
- Douglas Gregor
- Douglas Gregor
- Frank Mori Hess
- Joe Gottman
- Markus Schöpflin
- Martin Wille
- Roland Schwarz
- Stefan Seefeld
- Thomas Witt
- Yuval Ronen
- Zach Laine