xml_grammar.cpp and OS X

What is the state of the release build hang on xml_grammar.cpp on OS X? Has anyone endeavoured to block out the offending optimization? This is where I saw the message discussed last. http://aspn.activestate.com/ASPN/Mail/Message/2233439 -- Alan Gutierrez - alan@engrm.com

Based on http://lists.boost.org/MailArchives/boost/msg75057.php and the fact that I don't have OS X around, I concluded there was nothing I could do about it. It would seem easy to address, until there's a new version of the compiler, by tweaking one of the build Jamfiles to suppress one or more of the optimizations for this platform. This was the thrust of my suggestion in http://lists.boost.org/MailArchives/boost/msg75058.php . I don't think this would take very long to do - but it does require understanding of how to tweak the Jamfile for the library build, which could also be an obstacle. Another quick and dirty option is just to comment out xml_grammar from the list of library sources in the Jamfile. This would build the library without the xml capability. Good luck, Robert Ramey
Alan wrote:
What is the state of the release build hang on xml_grammar.cpp on OS X? Has anyone endeavoured to block out the offending optimization?
This is where I saw the message discussed last.

* Robert Ramey <ramey@rrsd.com> [2004-12-15 12:41]:
Alan wrote:
What is the state of the release build hang on xml_grammar.cpp on OS X? Has anyone endeavoured to block out the offending optimization?
This is where I saw the message discussed last.
Based on http://lists.boost.org/MailArchives/boost/msg75057.php and the fact that I don't have OS X around, I concluded there was nothing I could do about it. It would seem easy to address, until there's a new version of the compiler, by tweaking one of the build Jamfiles to suppress one or more of the optimizations for this platform. This was the thrust of my suggestion in http://lists.boost.org/MailArchives/boost/msg75058.php . I don't think this would take very long to do - but it does require understanding of how to tweak the Jamfile for the library build, which could also be an obstacle.
Yes. Done. Here is the relevant bit of the Jamfile.

lib boost_serialization
    : ## sources ##
    ../src/$(WSOURCES).cpp
    : ## requirements ##
    std::locale-support
    <msvc><*><include>$(SPIRIT_ROOT)
    <msvc-stlport><*><include>$(SPIRIT_ROOT)
    <vc7><*><include>$(SPIRIT_ROOT)
    <borland><*><include>$(SPIRIT_ROOT)
    <borland-5_5_1><*><include>$(SPIRIT_ROOT)
    <borland-5_6_4><*><include>$(SPIRIT_ROOT)
    <sysinclude>$(BOOST_ROOT)
    <borland><*><cxxflags>"-w-8080 -w-8071 -w-8057"
    <msvc><*><cxxflags>-Gy
    <vc7><*><cxxflags>-Gy
    <vc7_1><*><cxxflags>-Gy
    ## Darwin doesn't like optimization...
    <darwin><*><optimization>off
    <darwin><*><inlining>off
    <define>BOOST_TEST_NO_AUTO_LINK=1
    <vacpp><*><define>BOOST_MPL_USE_APPLY_INTERNALLY
    : ## default-build
    <runtime-link>static/dynamic
    <threading>single/multi

(The Jamfile syntax means nothing to me. I put an echo line above what I determined to be the cc command and tried different things until I saw -O0 and -fno-inline.) Now it compiles. As to the specific optimization to turn off, well, I'll have to learn more about Boost.Build. -- Alan Gutierrez - alan@engrm.com

Alan wrote:
Yes. Done. Here is the relevant bit of the Jamfile.
lib boost_serialization
...
## Darwin doesn't like optimization...
<darwin><*><optimization>off
<darwin><*><inlining>off
...
(The Jamfile syntax means nothing to me.
welcome to the club !
I put an echo line above what I determined to be the cc command and tried different things until I saw -O0 and -fno-inline.)
Now it compiles. As to the specific optimization to turn off, well, I'll have to learn more about Boost.Build.
I would appreciate it if you could experiment just a little bit more.
a) instead of <darwin><*>... try each of the following
<darwin><release><optimization>space
<darwin><release><optimization>speed
and try with and without
<darwin><inlining>off
Is there anyone who wants to chime in and indicate how these statements would be applied to just one source file in the library - that is, xml_grammar.cpp? This is the only one that's coughing here (and the only one that's using spirit). In fact, I believe that the spirit tests should be run in release mode - this will almost surely highlight the root cause of the problem. In fact, if you want to do a good deed and have a little time - and since you're right there in the key spot, and since you're going to get a huge boost from Boost - it would be great if you could run the spirit test suite in release mode. This would be of great help to us. Robert Ramey
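A rough sketch of a stopgap for exactly that single file, since Alan found that -O0 and -fno-inline are what make it go through: compile just the one translation unit by hand with optimization disabled and let the rest of the library build normally. The paths and include directories below are assumptions for illustration, not the exact command line bjam generates:

    g++ -O0 -fno-inline -I$BOOST_ROOT -I$SPIRIT_ROOT \
        -c $BOOST_ROOT/libs/serialization/src/xml_grammar.cpp -o xml_grammar.o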

On Dec 16, 2004, at 9:50 AM, Robert Ramey wrote:
Alan wrote:
Yes. Done. Here is the relevant bit of the Jamfile.
lib boost_serialization
...
## Darwin doesn't like optimization...
<darwin><*><optimization>off
<darwin><*><inlining>off
...
a) instead of <darwin><*>... try each of the following
<darwin><release><optimization>space
<darwin><release><optimization>speed
and try with and without
<darwin><inlining>off
Only needs <darwin><release><inlining>off. -O3 is ok in the release variant.
Is there anyone who wants to chime in and indicate how these statements would be applied to just one source file in the library - that is ...
Kon

Hey OS X serialization people: I have a workaround for the code itself. The problems appear to have been in basic_xml_grammar.ipp, specifically statements such as

    Name = (Letter | '_' | ':') >> *(NameChar);

where presumably the template instantiations from the freestanding operators (in the spirit library) were just a bit too much for the compiler. If you refactor them as

    rule_t StarNameChar = *(NameChar);
    rule_t LetterOrUnderscoreOrColon = (Letter | '_' | ':');
    Name = LetterOrUnderscoreOrColon >> StarNameChar;

then things compile fine in both debug and release mode. If you run "top", you should see the vsize of the compiler top out at about 350M on xml_grammar and at 450M or so on xml_wgrammar in release mode. They do take a couple of minutes each to compile. With these changes, on my machine, all the xml-related tests compile, run, and pass, in both debug and release mode [with the exception of test_demo_portable_archive, which I'm guessing is due to some other issue - maybe it's not even xml related... I'm outta time...] You can get the changes at:

http://svn.resophonic.com/pub/boost/boost/archive/impl/basic_xml_grammar.hpp
and
http://svn.resophonic.com/pub/boost/libs/serialization/src/basic_xml_grammar...

lemme know how it looks on your end, I have only one (slow) Mac to try this stuff on... - troy d. straszheim
By the way, spectacularly cool library, thanks Mr. Ramey...
Robert Ramey writes:
Alan wrote:
Yes. Done. Here is the relevant bit of the Jamfile.
lib boost_serialization
...
## Darwin doesn't like optimization...
<darwin><*><optimization>off
<darwin><*><inlining>off
...
(The Jamfile syntax means nothing to me.
welcome to the club !
I put an echo line above what I determined to be the cc command and tried different things until I saw -O0 and -fno-inline.)
Now it compiles. As to the specific optimization to turn off, well, I'll have to learn more about Boost.Build.
I would appreciate it if you could experiment just a little bit more.
a) instead of <darwin><*>... try each of the following
<darwin><release><optimization>space
<darwin><release><optimization>speed
and try with and without
<darwin><inlining>off
Is there anyone who wants to chime in and indicate how these statements would be applied to just one source file in the library - that is, xml_grammar.cpp? This is the only one that's coughing here (and the only one that's using spirit). In fact, I believe that the spirit tests should be run in release mode - this will almost surely highlight the root cause of the problem.
In fact, if you want to do a good deed and have a little time - and since you're right there in the key spot, and since you're going to get a huge boost from Boost - it would be great if you could run the spirit test suite in release mode. This would be of great help to us.
Robert Ramey

Very impressive. I know from personal experience how much effort it takes to find stuff like this. I will incorporate your changes and test with all my other compilers. Good work. Robert Ramey
troy d. straszheim wrote:
By the way, spectacularly cool library, thanks Mr. Ramey...
Glad you think so. RR.

Hi Robert,
Robert Ramey wrote:
Very impressive. I know from personal experience how much effort it takes to find stuff like this. I will incorporate your changes and test with all my other compilers. Good work.
I've applied and compiled these changes on:

[michael@heresy michael]$ sw_vers ; gcc -v
ProductName: Mac OS X
ProductVersion: 10.3.7
BuildVersion: 7S215
Reading specs from /usr/libexec/gcc/darwin/ppc/3.3/specs
Thread model: posix
gcc version 3.3 20030304 (Apple Computer, Inc. build 1671)

All seems to be well, but for some reason the test library doesn't want to build on this machine, so I can't provide regression results. Thanks to Troy for looking into this :-) Michael

Michael van der Westhuizen writes:
I've applied and compiled these changes on:
[michael@heresy michael]$ sw_vers ; gcc -v
ProductName: Mac OS X
ProductVersion: 10.3.7
BuildVersion: 7S215
Reading specs from /usr/libexec/gcc/darwin/ppc/3.3/specs
Thread model: posix
gcc version 3.3 20030304 (Apple Computer, Inc. build 1671)
All seems to be well, but for some reason the test library doesn't want to build on this machine, so I can't provide regression results.
I've been working on running the test suites on OS X in release mode for a week or so now. Michael, have you tried? Can you give me a sanity check? This is after patching the two basic_xml_grammar files and the darwin toolset file. There are several different classes of problems that I can see. I forget which files suffer from which problems at the moment.
- a number of tests that excite the same or a similar compiler bug as basic_xml_grammar.ipp did; the compile never finishes, while the compiler leaks memory.
- a number that compile, but hang when run.
And then there are some tests that just fail, but they're not a concern until I can see the test suites run to completion. But at least now I can submit a list of tests that one can comment out:

gregorian/testdate.cpp
gregorian/testdate_duration.cpp
gregorian/testperiod.cpp
gregorian/testdate_iterator.cpp
gregorian/testfacet.cpp
gregorian/testformatters.cpp
gregorian/testgenerators.cpp
gregorian/testgreg_cal.cpp
gregorian/testgreg_day.cpp
gregorian/testgreg_month.cpp
gregorian/testgreg_year.cpp
dyn_bitset_unit_tests3.cpp
libs/filesystem/test/path_test.cpp
libs/filesystem/test/operations_test.cpp
../../utility/iterator_adaptor_examples.cpp
../../utility/counting_iterator_example.cpp
../../utility/filter_iterator_example.cpp
../../utility/fun_out_iter_example.cpp
../../utility/indirect_iterator_example.cpp
../../utility/projection_iterator_example.cpp
../../utility/reverse_iterator_example.cpp
../../utility/transform_iterator_example.cpp
../../utility/iterator_traits_test.cpp
filter_iterator_test.cpp
test_serialization_main.cpp
$(spirit-src)distinct_tests.cpp

In some cases (like gregorian) I commented out an entire block after several of them caused the run to hang, assuming the problem was in some common header file; this list comes from going through my "cvs diff" to see what I'd done. If I can submit the results of these runs, just let me know to whom and exactly what. I notice the regression results on metacomm are a little old and don't seem to match my results. troy d. straszheim

On Wed, 12 Jan 2005 05:41:14 +0100, troy d. straszheim wrote
Michael van der Westhuizen writes: - a number that compile, but hang when run.
and then there are some tests that just fail, but they're not a concern until I can see the test suites run to completion. But at least now I can submit a list of tests that one can comment out:
gregorian/testdate.cpp gregorian/testdate_duration.cpp gregorian/testperiod.cpp gregorian/testdate_iterator.cpp gregorian/testfacet.cpp gregorian/testformatters.cpp gregorian/testgenerators.cpp gregorian/testgreg_cal.cpp gregorian/testgreg_day.cpp gregorian/testgreg_month.cpp gregorian/testgreg_year.cpp
Well, you should send the date-time results to me and I can have a look at them. If all this stuff is failing there must be something fundamental that is wrong... Jeff jeff-at-crystalclearsoftware.com

Jeff Garland writes:
Well, you should send the date-time results to me and I can have a look at them. If all this stuff is failing there must be something fundamental that is wrong...
Yeah, that's just the problem. There are no results to send (no error messages from the compiler or test binary), because either the compile never finishes, or if it does, the run of the test executable never exits. I forget which variety these were; I'll get to them next... troy d. straszheim

troy d. straszheim wrote:
Jeff Garland writes:
Well, you should send the date-time results to me and I can have a look at them. If all this stuff is failing there must be something fundamental that is wrong...
Yeah, that's just the problem. There are no results to send (no error messages from the compiler or test binary), because either the compile never finishes, or if it does, the run of the test executable never exits. I forget which variety these were; I'll get to them next...
This sounds familiar... I had a symptomatically similar problem because of a bug causing gcc to silently ignore template specializations in some cases (note the problem is not limited to non-type bools, as described in my report, but applies to all sorts of template arguments). [ http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14032 ] Just in case it helps... -- Tobias

troy d. straszheim writes:
If I can submit the results of these runs, just let me know to whom and exactly what.
Following instructions at http://tinyurl.com/5alw5 will allow your results to appear in the Boost-wide reports.
I notice the regression results on metacomm are a little old and don't seem to match my results.
Hmm, where do you see these? It's been a while since we ran anything on OS X, and the old results are definitely not in the current reports (http://www.meta-comm.com/engineering/boost-regression/developer/). -- Aleksey Gurtovoy MetaCommunications Engineering

troy d. straszheim writes: [in another thread]
There are several different classes of problems, that I can see. I forget which files suffer from which problems at the moment.
- a number of tests that excite the same or a similar compiler bug as basic_xml_grammar.ipp did, the compile never finishes, while the compiler leaks memory.
I worked these out in the filesystem lib, they were of the "compile never finishes", not the "run never finishes" variety. Hard to say if these two were compiler "bugs", per se, but they took forever to compile and the compiler took up >1G memory, not workable.
libs/filesystem/test/path_test.cpp libs/filesystem/test/operations_test.cpp
These two files each just needed to be split in half, that's all. Compiler vsize tops out at <600M. I also took one of the big if-blocks that was if (platform == "Windows") and made it #if defined ( BOOST_WINDOWS ), to hide more code from the compiler. Effect is the same. I've put the split files and Jamfile at http://svn.resophonic.com/pub/boost/libs/filesystem/test and the darwin-tools.jam file with the linker flags fix (for static linking the test lib) is at http://svn.resophonic.com/pub/boost/tools/build/v1 more on the other problems as they become available.... troy d. straszheim
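The if-to-#if change is worth spelling out, because it is what keeps the unused branch away from the compiler altogether. A minimal self-contained illustration of the idea - the assertions here are stand-ins I made up; only the platform string and the BOOST_WINDOWS macro come from the actual test:

    #include <cassert>
    #include <string>

    int main()
    {
        std::string platform = "Windows";  // the real test detects this at run time

        // before: the block is compiled on every platform and merely skipped at run time
        if (platform == "Windows") {
            assert(std::string("c:\\foo").size() == 6);  // stand-in for the Windows-only checks
        }

        // after: a non-Windows compiler never sees the heavy block at all
    #if defined(BOOST_WINDOWS)
        assert(std::string("c:\\foo").size() == 6);
    #endif

        return 0;
    }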

troy d. straszheim wrote:
troy d. straszheim writes: [in another thread]
There are several different classes of problems, that I can see. I forget which files suffer from which problems at the moment.
- a number of tests that excite the same or a similar compiler bug as basic_xml_grammar.ipp did, the compile never finishes, while the compiler leaks memory.
I worked these out in the filesystem lib, they were of the "compile never finishes", not the "run never finishes" variety. Hard to say if these two were compiler "bugs", per se, but they took forever to compile and the compiler took up >1G memory, not workable.
Have you reported these problems to the Darwin GCC developers? -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams wrote:
troy d. straszheim wrote:
troy d. straszheim writes: [in another thread]
There are several different classes of problems, that I can see. I forget which files suffer from which problems at the moment.
- a number of tests that excite the same or a similar compiler bug as basic_xml_grammar.ipp did, the compile never finishes, while the compiler leaks memory.
I worked these out in the filesystem lib, they were of the "compile never finishes", not the "run never finishes" variety. Hard to say if these two were compiler "bugs", per se, but they took forever to compile and the compiler took up >1G memory, not workable.
Have you reported these problems to the Darwin GCC developers?
Absolutely... as I get them worked out I'm filing bugs and giving one of the guys there a heads-up in private email. I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in. troy d. straszheim

At 03:04 PM 1/14/2005, troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
That has been discussed, but some of the people who actually run the tests just don't have enough resources to double the test load. Seems like we need a way to partition the testing so that the load can be distributed. Or maybe some completely new ideas. --Beman

From: Beman Dawes <bdawes@acm.org>
At 03:04 PM 1/14/2005, troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
That has been discussed, but some of the people who actually run the tests just don't have enough resources to double the test load.
Alternate days? Release on weekdays and debug on Saturdays? -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

At 11:12 PM 1/14/2005, Rob Stewart wrote:
From: Beman Dawes <bdawes@acm.org>
At 03:04 PM 1/14/2005, troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
That has been discussed, but some of the people who actually run the tests
just don't have enough resources to double the test load.
Alternate days? Release on weekdays and debug on Saturdays?
Alternate days might work OK. Say debug on odd, release on even. But we would have to solve the NDEBUG problem Dave mentioned, and deal with any configuration issues. --Beman

Beman Dawes wrote:
At 11:12 PM 1/14/2005, Rob Stewart wrote:
Alternate days? Release on weekdays and debug on Saturdays?
Alternate days might work OK. Say debug on odd, release on even. But we would have to solve the NDEBUG problem Dave mentioned, and deal with any configuration issues.
Would a temporary "#undef NDEBUG" in Boost.Config, activated only while testing, work? Or perhaps a definition of the problematic constructs, like assert, to something more useful like BOOST_REQUIRE(...) (or some other exception)? -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

At 05:46 PM 1/15/2005, Rene Rivera wrote:
Beman Dawes wrote:
At 11:12 PM 1/14/2005, Rob Stewart wrote:
Alternate days? Release on weekdays and debug on Saturdays?
Alternate days might work OK. Say debug on odd, release on even. But we
would have to solve the NDEBUG problem Dave mentioned, and deal with any configuration issues.
Would a temporary "#undef NDEBUG" in Boost.Config, activated only while testing, work?
It is possible there may be cases where the #undef NDEBUG should only apply to <cassert>. So it might be better if it were given explicitly in the test program. --Beman
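What "given explicitly in the test program" might look like - a sketch added for illustration, not a quote from any actual Boost test:

    // re-enable assert() for this translation unit only, even in a release build
    #undef NDEBUG
    #include <cassert>

    int main()
    {
        int x = 2 + 2;
        assert(x == 4);  // checked even when the rest of the build defines NDEBUG
        return 0;
    }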

Beman Dawes wrote:
At 05:46 PM 1/15/2005, Rene Rivera wrote:
Would a temporary "#undef NDEBUG" in Boost.Config, activated only while testing, work?
It is possible there may be cases where the #undef NDEBUG should only apply to <cassert>. So it might be better if it were given explicitly in the test program.
Might the most straightforward thing be for test modules to have their own NDEBUG-ignorant assert macro? The vast majority of the work could just get done with a perl one-liner. I would expect people to forget to add the #undef NDEBUG, and then you just get more tests creeping in that don't do anything. -troy
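One possible shape for such a macro - the name and exact behaviour here are guesses, not an existing Boost facility:

    #include <cstdio>
    #include <cstdlib>

    // evaluated regardless of NDEBUG, unlike the standard assert()
    #define TEST_ASSERT(expr)                                             \
        ((expr) ? (void)0                                                 \
                : (std::fprintf(stderr, "assertion failed: %s, %s:%d\n",  \
                                #expr, __FILE__, __LINE__),               \
                   std::abort()))

    int main()
    {
        TEST_ASSERT(1 + 1 == 2);  // still checked in a release build
        return 0;
    }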

At 09:52 AM 1/16/2005, troy d. straszheim wrote:
Beman Dawes wrote:
At 05:46 PM 1/15/2005, Rene Rivera wrote:
Would a temporary "#undef NDEBUG" in Boost.Config, activated only while testing, work?
It is possible there may be cases where the #undef NDEBUG should only apply to <cassert>. So it might be better if it were given explicitly in the test program.
Might the most straightforward thing be for test modules to have their own NDEBUG-ignorant assert macro? The vast majority of the work could just get done with a perl one-liner. I would expect people to forget to add the #undef NDEBUG, and then you just get more tests creeping in that don't do anything.
This whole discussion may be a bit of a red herring. I wonder how many Boost libraries still depend on assert for testing (as distinct from asserts in compiled libraries)? Many use Boost.Test. --Beman

Beman Dawes wrote: [was: Filesystem test fixes for OS X release mode]
This whole discussion may be a bit of a red herring. I wonder how many Boost libraries still depend on assert for testing (as distinct from asserts in compiled libraries)? Many use Boost.Test.
Did a quick survey...

Libraries that use assert() in test files:
conversion, dynamic_bitset, function, graph, integer, iterator, program_options, python, random, regex, serialization, spirit, spirit/phoenix, utility, variant

Libraries which use assert() in example files (sometimes examples are used as tests):
algorithm/minmax, format, graph, iterator, mpl, multi_array, random, rational, regex, serialization, signals, spirit, thread

Libraries which use assert() in source files (should probably be using BOOST_ASSERT):
filesystem, program_options, python, regex, serialization, signals, thread

And finally there are some number of headers which use assert(), which also likely should be using BOOST_ASSERT.
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

I never understood the reason why BOOST_ASSERT is better than the standard library version. How is it different? Robert Ramey
Rene Rivera wrote:
Libraries which use assert() in source files: (Should probably be using BOOST_ASSERT.)
And finally there are some number of headers which use assert(). Which also likely should be using BOOST_ASSERT.

Robert Ramey wrote:
I never understood the reason why BOOST_ASSERT is better than the standard library version. How is it different?
The simplest reason is that it gives users control as to how to handle asserts. For example doing the default of exiting the program is not desired in things like games, servers, and embedded systems. At least I think that was the basics of the original motivation. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
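Roughly how that control works in boost/assert.hpp - a sketch of the handler mechanism rather than a full treatment:

    // defining this before including boost/assert.hpp routes BOOST_ASSERT
    // failures to a user-supplied function instead of the standard assert()
    #define BOOST_ENABLE_ASSERT_HANDLER
    #include <boost/assert.hpp>
    #include <cstdio>

    namespace boost
    {
        void assertion_failed(char const * expr, char const * function,
                              char const * file, long line)
        {
            // a game or server could log, throw, or recover here instead of exiting
            std::fprintf(stderr, "%s failed in %s (%s:%ld)\n", expr, function, file, line);
        }
    }

    int main()
    {
        BOOST_ASSERT(2 + 2 == 5);  // calls boost::assertion_failed rather than aborting
        return 0;
    }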

One issue with leaving the asserts in for release-mode testing is that a common error is to include a condition in an assert that has a side effect. This can make a debug executable work while the release-mode version fails. So you can't really test release mode with the asserts left in (a contrived sketch of this follows below). The only real way is to run Boost.Test in release mode. Robert Ramey
Beman Dawes wrote:
Beman Dawes wrote:
At 05:46 PM 1/15/2005, Rene Rivera wrote:
Would a temporary "#undef NDEBUG" in Boost.Config, activated only while testing, work?
It is possible there may be cases where the #undef NDEBUG should only apply to <cassert>. So it might be better if it were given explicitly in the test program.
At 09:52 AM 1/16/2005, troy d. straszheim wrote:
Might the most straightforward thing be for test modules to have their own NDEBUG-ignorant assert macro? The vast majority of the work could just get done with a perl one-liner. I would expect people to forget to add the #undef NDEBUG, and then you just get more tests creeping in that don't do anything.
This whole discussion may be a bit of a red herring. I wonder how many Boost libraries still depend on assert for testing (as distinct from asserts in compiled libraries)? Many use Boost.Test.
--Beman
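The kind of bug Robert means - a contrived example of a side effect hiding inside an assert, not taken from any real Boost test:

    #include <cassert>
    #include <vector>

    int main()
    {
        std::vector<int> v;

        // the push_back happens only when assert() is compiled in; with NDEBUG
        // defined the whole expression, side effect included, disappears...
        assert((v.push_back(42), !v.empty()));

        // ...so in a release build this indexes an empty vector and the test
        // that passed in debug mode now misbehaves
        return v[0] == 42 ? 0 : 1;
    }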

Beman Dawes writes:
At 03:04 PM 1/14/2005, troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
That has been discussed, but some of the people who actually run the tests just don't have enough resources to double the test load.
This was not the reason why we didn't test the release configuration for 1.32 -- it was simply too late in the process when we tried it and realized that the codebase needs non-trivial work for the results to be trusted (see http://thread.gmane.org/gmane.comp.lib.boost.devel/110127).
Seems like we need a way to partition the testing so that the load can be distributed.
We already do: testing a single compiler in one configuration is well within a reasonable amount of time (~2 hours, I think). We simply need to call for volunteers. -- Aleksey Gurtovoy MetaCommunications Engineering

At Sunday 2005-01-16 22:24, you wrote:
Beman Dawes writes:
At 03:04 PM 1/14/2005, troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
That has been discussed, but some of the people who actually run the tests just don't have enough resources to double the test load.
This was not the reason why we didn't test the release configuration for 1.32 -- it was simply too late in the process when we tried it and realized that the codebase needs non-trivial work for the results to be trusted (see http://thread.gmane.org/gmane.comp.lib.boost.devel/110127).
Seems like we need a way to partition the testing so that the load can be distributed.
We already do: testing a single compiler in one configuration is well within a reasonable amount of time (~2 hours, I think). We simply need to call for volunteers.
borrowing a phrase from the DVD of Shrek: "pick me, pick me, ooooh pick me"
-- Aleksey Gurtovoy MetaCommunications Engineering
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

At 12:24 AM 1/17/2005, Aleksey Gurtovoy wrote:
Seems like we need a way to partition the testing so that the load can be distributed.
We already do: testing a single compiler in one configuration is well within a reasonable amount of time (~2 hours, I think). We simply need to call for volunteers.
That's a good point! Testing a single compiler in both release and debug modes should be pretty easy to cope with too. --Beman

troy d. straszheim wrote:
David Abrahams wrote:
troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
It would be good to do, but it will take some work because the tests are not all currently designed to work in release "mode." Many of them use assert() and other constructs that are switched off by NDEBUG. -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

On Sat, 15 Jan 2005 11:50:53 -0500, David Abrahams wrote
troy d. straszheim wrote:
David Abrahams wrote:
troy d. straszheim wrote:
I gotta say, this is really time-intensive work... I wonder if running regressions "by default" in both debug and release mode might not be good practice... It could be better to try to catch things like this on the way in.
It would be good to do, but it will take some work because the tests are not all currently designed to work in release "mode." Many of them use assert() and other constructs that are switched off by NDEBUG.
I'm sure that's true, but what's the percentage of tests actually affected? Some libraries don't have these constraints and could benefit from the release mode tests. Here's a couple of pointers to some previous discussion on this subject of regression testing options. http://lists.boost.org/MailArchives/boost/msg64471.php http://lists.boost.org/MailArchives/boost/msg05816.php Of course, as usual, the problem with any additions and changes is that we need volunteers to step up and do the work. I set up and then stepped back from running regressions since Martin seemed to have Linux well covered. But I might be willing to run some release-mode Linux regression tests if that would help. Only thing is we probably need some tweaks to the config or regression scripts to allow this? Jeff

Jeff Garland writes:
Of course, as usual, the problem with any additions and changes is we need volunteers to step up and do the work. I set up and then stepped back from running regressions since Martin seemed to have Linux well covered. But I might be willing to run some release-mode Linux regression tests if that would help. Only thing is we probably need some tweaks to the config or regression scripts to allow this?
Simply passing '-sBUILD=release' to bjam should be enough ('--bjam-options=-sBUILD=release' with 'regression.py') -- Aleksey Gurtovoy MetaCommunications Engineering
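For anyone following along, a concrete invocation - the toolset name and working directory below are assumptions; only the -sBUILD and --bjam-options flags come from Aleksey's note, and the regression.py line is abbreviated (it still needs its usual options):

    cd $BOOST_ROOT/libs/serialization/test
    bjam -sTOOLS=darwin -sBUILD=release

or, through the regression driver:

    python regression.py --bjam-options=-sBUILD=release ...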

At 01:18 PM 1/14/2005, troy d. straszheim wrote:
I worked these out in the filesystem lib, they were of the "compile never finishes", not the "run never finishes" variety. Hard to say if these two were compiler "bugs", per se, but they took forever to compile and the compiler took up >1G memory, not workable.
libs/filesystem/test/path_test.cpp libs/filesystem/test/operations_test.cpp
Something is badly wrong with the compiler. GCC 3.3.1 on Windows XP never uses more than 60 megs of memory for any filesystem compile. Other compilers use even less. I have vast respect for the GCC effort, but the compiler does have a history of running away with memory. :-(
These two files each just needed to be split in half, that's all.
It wouldn't hurt to refactor the filesystem tests, but that is going to have to be done soon anyhow to cope with internationalization. In the meantime, the incentive to modify them temporarily is very low. --Beman

Beman Dawes wrote:
It wouldn't hurt to refactor the filesystem tests, but that is going to have to be done soon anyhow to cope with internationalization. In the meantime, the incentive to modify them temporarily is very low.
Whatever you think.... I did the work already, if that's the issue. Very straightforward. The split in half files are at http://svn.resophonic.com/pub/boost/libs/filesystem/test -t

At 09:02 AM 1/16/2005, troy d. straszheim wrote:
Beman Dawes wrote:
It wouldn't hurt to refactor the filesystem tests, but that is going to
have to be done soon anyhow to cope with internationalization. In the meantime, the incentive to modify them temporarily is very low.
Whatever you think.... I did the work already, if that's the issue. Very straightforward. The split in half files are at
I really don't want to refactor the two test programs into two files each that way. Having two files without much rationale as to what is in which detracts from maintainability. I'm sure there is a way to refactor the tests in ways that actually improve maintainability. Breaking the tests down into smaller functions would be a good start, without actually breaking them into multiple files. --Beman

Hi Troy, troy d. straszheim wrote:
Michael van der Westhuizen writes:
I've applied and compiled these changes on:
[michael@heresy michael]$ sw_vers ; gcc -v ProductName: Mac OS X ProductVersion: 10.3.7 BuildVersion: 7S215 Reading specs from /usr/libexec/gcc/darwin/ppc/3.3/specs Thread model: posix gcc version 3.3 20030304 (Apple Computer, Inc. build 1671)
All seems to be well, but for some reason the test library doesn't want to build on this machine, so I can't provide regression results.
I've been working on running the test suites on OS X in release mode for a week or so now. Michael, have you tried? Can you give me a sanity check? This is after patching the two basic_xml_grammar files and the darwin toolset file.
Sorry for the delay in replying - I've been on vacation for the last week. I can also get everything to build with the patches you mention above, but I have not yet tried running the regressions (I ran out of time before going away). I intend setting up the python regression scripts soon and running the regression suite. Michael
participants (13)
- Alan
- Aleksey Gurtovoy
- Beman Dawes
- David Abrahams
- Jeff Garland
- Kon Lovett
- Michael van der Westhuizen
- Rene Rivera
- Rob Stewart
- Robert Ramey
- Tobias Schwinger
- troy d. straszheim
- Victor A. Wagner Jr.