
Hi,

can anyone please tell me the status of the following gcc failures on win32 RC_1_31_0?

date_time testmicrosec_time_clock
integer cstdint_test
iterator interoperable_fail
math octonion_test
math quaternion_test
test error_handling_test

Won't these errors be solved for the upcoming release?

I can't find any tests of spirit and mpl for the win32 gcc (mingw) platform. Aren't there any?

It would be fine to have a regression table of all boost libraries for a particular platform.

With kind regards,
Johannes

can anyone please tell me the status of the following gcc failures on win32 RC_1_31_0:
date_time testmicrosec_time_clock
The clock facility is broken on cygwin; they know about it.
integer cstdint_test
There are a few bugs in cygwin's (and mingw32's) implementation of stdint.h. They are both aware of the issue; the mingw folks have fixed theirs, but I've heard nothing from the cygwin guys.
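To make the failure mode concrete, here is a minimal sketch (not the actual cstdint_test, whose checks differ) of the kind of sanity check that a broken stdint.h trips, since boost/cstdint.hpp forwards to the platform's stdint.h where one is available:

#include <boost/cstdint.hpp>
#include <boost/static_assert.hpp>
#include <climits>

// Exact-width types must have exactly the advertised number of bits.
BOOST_STATIC_ASSERT(sizeof(boost::int8_t)   * CHAR_BIT == 8);
BOOST_STATIC_ASSERT(sizeof(boost::uint16_t) * CHAR_BIT == 16);
BOOST_STATIC_ASSERT(sizeof(boost::int32_t)  * CHAR_BIT == 32);

int main()
{
    // Least/fast types only have to be at least as wide as advertised.
    BOOST_STATIC_ASSERT(sizeof(boost::int_least16_t) * CHAR_BIT >= 16);
    return 0;
}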
iterator interoperable_fail math octonion_test math quaternion_test test error_handling_test
Others will have to answer those.
Won't these errors be solved for the upcoming release?
No, there won't be any new fixes now. John.

"Johannes Brunen" <jbrunen@datasolid.de> writes:
Hi,
can anyone please tell me the status of the following gcc failures on win32 RC_1_31_0:
date_time testmicrosec_time_clock integer cstdint_test iterator interoperable_fail math octonion_test math quaternion_test test error_handling_test
Won't these errors be solved for the upcoming release?
Not the iterator error. See the note in the regression logs.
I can't find any tests of spirit and mpl for the win32 gcc (mingw) platform. Aren't there any?
It would be fine to have a regression table of all boost libraries for a particular platform.
I agree that would be better. I don't really know why all testers don't run them all. A test run takes a long time anyway; Spirit and Python don't add that much. -- Dave Abrahams Boost Consulting www.boost-consulting.com

At 04:25 AM 2/2/2004, Johannes Brunen wrote:
can anyone please tell me the status of the following gcc failures on win32 RC_1_31_0:
date_time testmicrosec_time_clock iterator interoperable_fail
See the footnotes on those tests.
integer cstdint_test math octonion_test math quaternion_test
Runtime failures. Reason for failure unknown. No one has stepped forward with an explanation and/or patch. Feel free to contribute fixes, but they are too late for 1.31.0.
test error_handling_test
Oops! My fault. Those are old test results that should have been removed. They should disappear shortly.
Won't these errors be solved for the upcoming release?
No, the window of opportunity has closed. They've all been failing for a long time, so they are clearly not critical.
I can't find any tests of spirit and mpl for the win32 gcc (mingw) platform. Aren't there any?
Those libraries handle their own testing.
It would be fine to have a regression table of all boost libraries for a particular platform.
Yes, but that doesn't seem practical at the moment. Thanks, --Beman

Beman Dawes <bdawes@acm.org> writes:
I can't find any tests of spirit and mpl for the win32 gcc (mingw) platform. Aren't there any?
Those libraries handle their own testing.
That's a strange way of putting it. _People_ run tests on those libraries. Luckily some of the people running Boost's regression tests run those tests too. It seems wrong to me that they should be left out of the default testing regime, which is run on many more compilers than the authors/maintainers of those libraries can possibly test directly. -- Dave Abrahams Boost Consulting www.boost-consulting.com

At 09:36 AM 2/2/2004, David Abrahams wrote:
Beman Dawes <bdawes@acm.org> writes:
I can't find any tests of spirit and mpl for the win32 gcc (mingw) platform. Aren't there any?
Those libraries handle their own testing.
That's a strange way of putting it. _People_ run tests on those libraries.
Sorry, it wasn't phrased very well.
Luckily some of the people running Boost's regression tests run those tests too. It seems wrong to me that they should be left out of the default testing regime, which is run on many more compilers than the authors/maintainers of those libraries can possibly test directly.
Yes. I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.

I can round up a donation of a nice modern machine to run the tests on. That isn't hard when powerful boxes go for $1,000 or less. But I can't host here because I only have a metered ISDN Internet connection. So we would need a volunteer for that. Again, we can probably find someone.

The key volunteers needed would be people who are comfortable setting up and remotely administering such a test setup. More than one would be needed so that no one person becomes a bottleneck.

Am I dreaming, or is this something we should actively pursue? --Beman

Beman Dawes <bdawes@acm.org> writes:
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
It seems to me that a lot of time is taken by Boost.Build unnecessarily trying to execute the tests which have been failing before, even though the files they depend on haven't changed. If this is fixed, it would make sense to set up continuously running regression tests: a clean run once a day and incremental runs for the rest of the day.

Regarding dividing the whole thing into sets: the whole thing has three aspects:

1. toolsets
2. libs
3. branches

As I understand it, the main use case for "sets" would be to allow the developer to quickly see the effect of the changes he or she has made. In this case, wouldn't the ability to specify the toolsets/libs/branch to retest be enough?
I can round up a donation of a nice modern machine to run the tests on. That isn't hard when powerful boxes go for $1,000 or less.
Plus the cost of the software - Windows + all the compilers (less for Linux). Our attempt to get some donated has failed (we've been trying to do that through standard sales channels). -- Misha Bergal MetaCommunications Engineering

Misha Bergal <mbergal@meta-comm.com> writes:
Beman Dawes <bdawes@acm.org> writes:
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
It seems to me that a lot of time is taken by Boost.Build unnecessarily trying to execute the tests which have been failing before, even though the files they depend on haven't changed.
It used to work the other way, but it caused confusion.
If this is fixed, it would make sense to set up continuously running regression tests: a clean run once a day and incremental runs for the rest of the day.
We could make it optional and use it only for the Bots. There is also the problem that the type traits tests obfuscate their include files using macros, so some changes won't cause rebuilds. There is also a similar issue with libraries that use the PP library. We can customize Boost.Build to be aware of the special inclusion macros if necessary. -- Dave Abrahams Boost Consulting www.boost-consulting.com
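To make the scanning problem concrete, here is a minimal hypothetical sketch of the pattern in question (the macro name is made up). Boost.Build's dependency scanner matches literal #include lines, so an include spelled through a macro is invisible to it, and editing the named header won't trigger a rebuild of this file; the PP library's file-iteration machinery relies on the same kind of computed include.

// Hypothetical "computed include": the scanner sees no header name
// on the #include line, so no dependency on is_pod.hpp is recorded.
#define OBFUSCATED_HEADER <boost/type_traits/is_pod.hpp>
#include OBFUSCATED_HEADER

int main()
{
    // Uses boost::is_pod from the indirectly included header.
    return boost::is_pod<int>::value ? 0 : 1;
}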

"John Maddock" <john@johnmaddock.co.uk> writes:
There is also the problem that the type traits tests obfuscate their include files using macros, so some changes won't cause rebuilds.
No, that was changed a while ago.
Could've sworn I saw one recently, but now I can't find it. Sorry. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams <dave@boost-consulting.com> writes:
Misha Bergal <mbergal@meta-comm.com> writes:
Beman Dawes <bdawes@acm.org> writes:
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
It seems to me that a lot of time is taken by Boost.Build unnecessarily trying to execute the tests which have been failing before, even though the files they depend on haven't changed.
It used to work the other way, but it caused confusion.
If this is fixed, it would make sense to set up continuously running regression tests: a clean run once a day and incremental runs for the rest of the day.
We could make it optional and use it only for the Bots.
Agreed. Do you have a rough estimate of what needs to be done to implement/restore it?
There is also the problem that the type traits tests obfuscate their include files using macros, so some changes won't cause rebuilds.
There is also a similar issue with libraries that use the PP library. We can customize Boost.Build to be aware of the special inclusion macros if necessary.
The dependency problems seem to be resolvable. So really what is needed is to:

1. Implement BuildBot.
2. Change Boost.Build to have an option of not rebuilding the failed tests.
3. Implement regression test requests for branch/lib/toolset.

-- Misha Bergal MetaCommunications Engineering

Misha Bergal <mbergal@meta-comm.com> writes:
David Abrahams <dave@boost-consulting.com> writes:
Misha Bergal <mbergal@meta-comm.com> writes:
Beman Dawes <bdawes@acm.org> writes:
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
It seems to me that a lot of time is taken by Boost.Build unnecessarily trying to execute the tests which have been failing before, even though the files they depend on haven't changed.
It used to work the other way, but it caused confusion.
If this is fixed, it would make sense to set up continuously running regression tests: a clean run once a day and incremental runs for the rest of the day.
We could make it optional and use it only for the Bots.
Agreed. Do you have a rough estimate of what needs to be done to implement/restore it?
I think it would take a day or two of work on testing.jam.
There is also the problem that the type traits tests obfuscate their include files using macros, so some changes won't cause rebuilds.
There is also a similar issue with libraries that use the PP library. We can customize Boost.Build to be aware of the special inclusion macros if necessary.
The dependency problems seem to be resolvable. So really what is needed is to:
1. Implement BuildBot.
2. Change Boost.Build to have an option of not rebuilding the failed tests.
3. Implement regression test requests for branch/lib/toolset.
Yep. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Beman Dawes <bdawes@acm.org> writes:
At 09:36 AM 2/2/2004, David Abrahams wrote:
Beman Dawes <bdawes@acm.org> writes:
I can't find any tests of spirit and mpl for the win32 gcc (mingw) platform. Aren't there any?
Those libraries handle their own testing.
That's a strange way of putting it. _People_ run tests on those libraries.
Sorry, it wasn't phrased very well.
Luckily some of the people running Boost's regression tests run those tests too. It seems wrong to me that they should be left out of the default testing regime, which is run on many more compilers than the authors/maintainers of those libraries can possibly test directly.
Yes.
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
I can round up a donation of a nice modern machine to run the tests on. That isn't hard when powerful boxes go for $1,000 or less. But I can't host here because I only have a metered ISDN Internet connection. So we would need a volunteer for that. Again, we can probably find someone.
The key volunteers needed would be people who are comfortable setting up and remotely administering such a test setup. More than one would be needed so that no one person becomes a bottleneck.
Am I dreaming, or is this something we should actively pursue? --Beman
I think we should use The BuildBot (http://buildbot.sf.net). That way the testing load can be distributed all over the world, and it can send people annoying emails when they break the build. I'm not sure it should be necessary to segment the tests if we do this right. Brian, are you ready to help Boost get started with BuildBot? -- Dave Abrahams Boost Consulting www.boost-consulting.com

Beman Dawes wrote:
At 09:36 AM 2/2/2004, David Abrahams wrote:
Luckily some of the people running Boost's regression tests run those tests too. It seems wrong to me that they should be left out of the default testing regime, which is run on many more compilers than the authors/maintainers of those libraries can possibly test directly.
Yes.
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux using a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
I'm all for that... One of the reasons I don't run more Boost tests is the length of time they take.
I can round up a donation of a nice modern machine to run the tests on. That isn't hard when powerful boxes go for $1,000 or less. But I can't host here because I only have a metered ISDN Internet connection. So we would need a volunteer for that. Again, we can probably find someone.
And what prevents me from dedicating a machine to running tests is the lack of a budget for such a dedicated machine. I already pay for an unmetered DSL connection.
The key volunteers needed would be people who are comfortable setting up and remotely administering such a test setup. More than one would be needed so that no one person becomes a bottleneck.
Am I dreaming, or is this something we should actively pursue?
All dreams should be pursued, or at least thought about ;-) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
participants (6)
- Beman Dawes
- David Abrahams
- Johannes Brunen
- John Maddock
- Misha Bergal
- Rene Rivera