
The release is starting to shape up, and here is our final release schedule. We'll stick to it unless something goes horribly wrong.
Code freeze - 11pm EST on Sunday, July 24
Total freeze (including documentation) - 11pm EST on Wednesday, July 27
Final release - Monday, August 1st
Doug Gregor
1.33.0 Release Manager

Douglas Gregor writes:
The release is starting to shape up, and here is our final release schedule. We'll stick to it unless something goes horribly wrong.
Code freeze - 11pm EST on Sunday, July 24
Total freeze (including documentation) - 11pm EST on Wednesday, July 27
Final release - Monday, August 1st
What are we doing about a handful of the remaining regressions (17, http://engineering.meta-comm.com/boost-regression/CVS-HEAD/developer/summary...
--
Aleksey Gurtovoy
MetaCommunications Engineering

Aleksey Gurtovoy wrote:
Douglas Gregor writes:
The release is starting to shape up, and here is our final release schedule. We'll stick to it unless something goes horribly wrong.
Code freeze - 11pm EST on Sunday, July 24
Total freeze (including documentation) - 11pm EST on Wednesday, July 27
Final release - Monday, August 1st
What are we doing about a handful of the remaining regressions (17, http://engineering.meta-comm.com/boost-regression/CVS-HEAD/developer/summary...
I felt somewhat irritated by the discrepancies between OSL's results for Boost.Serialization and mine. However, I now see that exactly 10 tests are affected. I also found out that the Boost.Serialization tests leave a multiple of 10 files in /tmp on my HD. Could it be that the tests are unable to create the files on the OSL2 machine? This would explain the "std::exception: stream error" output from those tests.
Regards,
m

On Jul 28, 2005, at 3:52 AM, Martin Wille wrote:
I felt somewhat irritated by the discrepancies between OSL's results for Boost.Serialization and mine.
However, I now see that exactly 10 tests are affected. I also found out that Boost.Serialization tests leave a multiple of 10 files in /tmp on my HD.
Could it be that the tests are unable to create the files on the OSL2 machine? This would explain the "std::exception: stream error" output from those tests.
I haven't been able to track these down, but every time I've run the tests outside of regression.py, everything works fine. I've just cleared out /tmp and started running the tests again...
Doug

These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet. If one doesn't clean out $TMPDIR, these errors don't occur the next time. Using the markup to indicate "fails" sometimes resulted in a misleading marking of the passing tests (if I remember correctly), so I just had to leave it. That is, the "fails" are really artifacts of the bjam implementation.
Robert Ramey
Douglas Gregor wrote:
On Jul 28, 2005, at 3:52 AM, Martin Wille wrote:
I felt somewhat irritated by the discrepancies between OSL's results for Boost.Serialization and mine.
However, I now see that exactly 10 tests are affected. I also found out that Boost.Serialization tests leave a multiple of 10 files in /tmp on my HD.
Could it be that the tests are unable to create the files on the OSL2 machine? This would explain the "std::exception: stream error" output from those tests.
I haven't been able to track these down, but every time I've run the tests outside of regression.py, everything works fine. I've just cleared out /tmp and started running the tests again...
Doug

On Jul 28, 2005, at 10:09 AM, Robert Ramey wrote:
These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet. If one doesn't clean out $TMPDIR, these errors don't occur the next time. Using the markup to indicate "fails" sometimes resulted in a misleading marking of the passing tests (if I remember correctly), so I just had to leave it. That is, the "fails" are really artifacts of the bjam implementation.
Perhaps after the release you should consider combining the load & save tests for each archive/type combination into a single test. Just dump the file and then read it back immediately.
Doug
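(A minimal sketch of the combined approach Doug describes, assuming an ordinary Boost.Serialization round trip through a scratch file; the filename and the string payload are purely illustrative. The point is that the save and the load happen inside one test executable, so no ordering between separate tests is needed:)

    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <cassert>
    #include <cstdio>
    #include <fstream>
    #include <string>

    int main()
    {
        const std::string original = "round trip";
        const char* filename = "tmp_roundtrip.txt";   // hypothetical scratch file

        {   // save
            std::ofstream ofs(filename);
            boost::archive::text_oarchive oa(ofs);
            oa << original;
        }
        {   // load immediately, in the same executable
            std::ifstream ifs(filename);
            boost::archive::text_iarchive ia(ifs);
            std::string restored;
            ia >> restored;
            assert(restored == original);
        }
        std::remove(filename);   // nothing left behind for later runs to trip over
        return 0;
    }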

Doug Gregor wrote:
On Jul 28, 2005, at 10:09 AM, Robert Ramey wrote:
These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet. If one doesn't clean out $TMPDIR, these errors don't occur the next time. Using the markup to indicate "fails" sometimes resulted in a misleading marking of the passing tests (if I remember correctly), so I just had to leave it. That is, the "fails" are really artifacts of the bjam implementation.
Perhaps after the release you should consider combining the load & save tests for each archive/type combination into a single test. Just dump the file and then read it back immediately.
Do those tests really need to read from/write to a file then? Using a stringstream would save me the hassle of deleting hundreds of temporary files regularly.
Regards,
m

Do those tests really need to read from/write to a file then?
Those 10 tests do have to run in separate executables for the reasons stated above.
Using a stringstream would save me the hassle of deleting hundreds of temporary files regularly.
This is a separate issue. This would apply to all the other tests. I believe that the temporary file is only left if in fact the test fails - otherwise it is deleted. I have found that stringstream doesn't generally handle codecvt facets and locales as I would expect. In my limited testing with them it seemed to just ignore locale issues. I don't know if this is peculiar to the library implementation I was using or if it's a universal thing. Anyway - my confidence in using stringstream for testing was shaken, and I invested effort in deleting temporary files.
Robert Ramey
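(For reference, the stringstream variant Martin has in mind would look roughly like this for a narrow-character text archive - a sketch only, with an illustrative payload; it avoids the temporary file but, as Robert notes, it exercises no codecvt facets or locales the way a real file stream does:)

    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <cassert>
    #include <sstream>
    #include <string>

    int main()
    {
        const std::string original = "no temporary file involved";
        std::stringstream buffer;

        {   // serialize into the in-memory buffer
            boost::archive::text_oarchive oa(buffer);
            oa << original;
        }
        {   // deserialize straight back out of it
            boost::archive::text_iarchive ia(buffer);
            std::string restored;
            ia >> restored;
            assert(restored == original);
        }
        return 0;
    }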

Robert Ramey wrote:
I have found that stringstream doesn't generally handle codecvt facets and locales as I would expect. In my limited testing with them it seemed to just ignore locale issues. I don't know if this is peculiar to the library implementation I was using or if it's a universal thing. Anyway - my confidence in using stringstream for testing was shaken, and I invested effort in deleting temporary files.
The only standard streams which use codecvts are the file streams.
Jonathan
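(A small sketch of the distinction Jonathan is pointing at, assuming a wide stream and whatever default locale the environment provides - std::locale("") may not be available everywhere. The filebuf runs wchar_t through the imbued codecvt facet on its way to bytes on disk; the wide stringbuf just stores the wchar_t sequence and never consults the facet:)

    #include <fstream>
    #include <locale>
    #include <sstream>
    #include <string>

    int main()
    {
        const std::wstring text = L"wide characters";

        // The file stream's filebuf converts wchar_t to external bytes
        // through the codecvt facet of its imbued locale.
        std::wofstream file("codecvt_demo.txt");
        file.imbue(std::locale(""));   // the facet actually participates here
        file << text;

        // The wide stringstream stores wchar_t directly; its locale's
        // codecvt facet takes no part in the formatting.
        std::wstringstream memory;
        memory.imbue(std::locale(""));
        memory << text;

        return 0;
    }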

That's what all the other tests do. These tests verify that an archive saved under one version can be loaded when the class version is changed. Class version is a program-wide, compile-time serialization trait for a class, so one can't really have a single executable which saves under an old version and loads under a newer one. That is why these tests are separate. I had hoped that bjam would permit DEPENDS syntax like test_load_xml.run requires test_save_xml.run. In fact the bjam syntax does permit it, but it has no effect. I would hope that bjam v2 addresses this issue in some way, though I haven't checked.
Robert Ramey
Doug Gregor wrote:
Perhaps after the release you should consider combining the load & save tests for each archive/type combination into a single test. Just dump the file and then read it back immediately.
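(A rough illustration of the constraint Robert describes, with a made-up class and version number: the class version is set by a compile-time trait, so any one executable is built against exactly one value, which is why "save under the old version, load under the new one" needs two separately built executables:)

    #include <boost/serialization/version.hpp>
    #include <iostream>

    struct record
    {
        int value;
        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*file_version*/)
        {
            ar & value;
        }
    };

    // A program-wide, compile-time serialization trait: this executable can
    // only ever save/load record as version 2.
    BOOST_CLASS_VERSION(record, 2)

    int main()
    {
        std::cout << boost::serialization::version<record>::value << '\n'; // prints 2
        return 0;
    }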

"Robert Ramey" <ramey@rrsd.com> writes:
These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet. If one doesn't clean out $TMPDIR, these errors don't occur the next time.
I can't believe I'm only hearing about this now. Did I miss your request for help with this problem on the jamboost list?
Using the markup to indicate "fails" sometimes resulted in a misleading marking of the passing tests (if I remember correctly), so I just had to leave it. That is, the "fails" are really artifacts of the bjam implementation.
Seems to me it's an artifact of the fact that you didn't figure out how to make it do what you want.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet.
I can't believe I'm only hearing about this now. Did I miss your request for help with this problem on the jamboost list?
I remember having a conversation about it, don't know which list. But I also remember implementing a solution to the problem. So what happened to the solution?
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - Grafik/jabber.org

David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet. If one doesn't clean out $TMPDIR, these errors don't occur the next time.
I can't believe I'm only hearing about this now. Did I miss your request for help with this problem on the jamboost list?
Oh yeah - there were quite a few emails regarding trying to get bjam to do what I needed it to do. Fixing the order of the save/load tests was just one issue. Others were:
a) skipping tests depending on wide characters for libraries that didn't support them
b) skipping tests which depended on installation of spirit 1.6x for compilers which don't support spirit 1.8x
c) forcing tests with certain compilers which implemented locales only in static libraries
d) skipping tests for compilers which can't properly build a DLL
With much help from Rene, these all got sorted out (though not without bringing the whole test system to its knees at least once). Rene did submit a Jamfile patch to address the final problem. It was totally opaque to me in that it depended upon internal behavior of bjam. I really had spent a lot of time with bjam and was concerned that including this might interact with all the issues resolved above in a way that I could never figure out. That is, I felt the cure was worse than the disease. So I left out this last change for the sake of transparency. I did leave in "Depends" clauses to document the requirement that the load test be run after the corresponding save test. This doesn't work, but it does document my intention.
Using the markup to indicate "fails" sometimes resulted in a misleading marking of the passing tests (if I remember correctly), so I just had to leave it. That is, the "fails" are really artifacts of the bjam implementation.
Seems to me it's an artifact of the fact that you didn't figure out how to make it do what you want.
LOL
Robert Ramey

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
These particular ones are due to the fact that I've never been able to force the order of tests with bjam. Those tests attempt to load archives which haven't been created yet. If one doesn't clean out $TMPDIR, these errors don't occur the next time.
I can't believe I'm only hearing about this now. Did I miss your request for help with this problem on the jamboost list?
Oh yeah - there were quite a few emails regarding trying to get bjam to do what I needed it to do. Fixing the order of the save/load tests was just one issue.
<snip>
Rene did submit a Jamfile patch to address the final problem. It was totally opaque to me in that it depended upon internal behavior of bjam.
So? What can you put in a Jamfile that *doesn't* depend on the internal behavior of bjam?? Are you saying you understand the details of everything else bjam is doing?
I really had spent a lot of time with bjam and was concerned that including this might interact with all the issues resolved above in a way that I could never figure out. That is, I felt the cure was worse than the disease.
Maybe for you, but having regressions is much worse for everyone else than having a Jamfile that you don't understand.
So I left out this last change for the sake of transparency. I did leave in "Depends" clauses to document the requirement that the load test be run after the corresponding save test. This doesn't work, but it does document my intention.
What good is that to the rest of us? Bogus failure reports are worse than no failure reports at all, because everyone will ignore them *and* be annoyed by them. Also it will make users unnecessarily nervous. At least with no failure reports, you avoid the annoyance/nervousness factor.
Using the markup to indicate "fails" sometimes resulted in a misleading marking of the passing tests (if I remember correctly), so I just had to leave it. That is, the "fails" are really artifacts of the bjam implementation.
Seems to me it's an artifact of the fact that you didn't figure out how to make it do what you want.
LOL
Now that I read your description of history, it seems to be just an artifact of your unwillingness to use Rene's fix. Rene spent quite some effort on figuring out how to let you specify ordering and make it work. I think you should make sure his efforts don't go to waste. If you need to be more comfortable with the code, get him to explain it until you understand it.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
Now that I read your description of history, it seems to be just an artifact of your unwillingness to use Rene's fix.
Let's make it simple: just fix bjam so that one can use the DEPENDS clause to condition the invocation of one test upon the successful completion of another test. That's the way I would expect the DEPENDS clause to work from looking at the bjam documentation. I never suggested it before because I assumed that the problem would solve itself with bjam v2. But that's not here yet.
Rene spent quite some effort on figuring out how to let you specify ordering and make it work. I think you should make sure his efforts don't go to waste. If you need to be more comfortable with the code, get him to explain it until you understand it.
It's hard to include code that one can't understand in a project one is responsible for. The real source of the problem is that bjam is hard to use and has lots of quirky behavior. I haven't dwelled upon it because I know that it is being worked on. While we're on the issue of Jamfiles, I did change my local Jamfile for Comeau compilers to use static libraries and it seemed to get much better results. I could check in my change, but I seem to recall that a change was going to be made in the como ... vc-7_1 toolset to address this. So far the test results show this hasn't happened. So I'm confused as to whether to check in this change.
Robert Ramey

Robert Ramey wrote:
David Abrahams wrote:
Now that I read your description of history, it seems to be just an artifact of your unwillingness to use Rene's fix.
Let's make it simple: just fix bjam so that one can use the DEPENDS clause to condition the invocation of one test upon the successful completion of another test. That's the way I would expect the DEPENDS clause to work from looking at the bjam documentation. I never suggested it before because I assumed that the problem would solve itself with bjam v2. But that's not here yet.
I could certainly add something, like DEPENDS, that makes it seem easy. As long as Doug approves me messing with the build system at this time.
While we're on the issue of Jamfiles, I did change my local Jamfile for Comeau compilers to use static libraries and it seemed to get much better results. I could check in my change, but I seem to recall that a change was going to be made in the como ... vc-7_1 toolset to address this. So far the test results show this hasn't happened. So I'm confused as to whether to check in this change.
The changes were deeper than that. They involved changing the functionality of BBv1 itself. The changes I posted for this worked only as far as user-level building went. But they break test building. Doug said not to pursue spending time on that avenue, so I didn't. Yes, it is considerably easier to account for that particular issue in the project Jamfiles. So I'd say check in your changes.
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - Grafik/jabber.org

Rene Rivera wrote:
I could certainly add something, like DEPENDS, that makes it seem easy. As long as Doug approves me messing with the build system at this time.
I don't think any changes should be made at this point. This is not an urgent or critical problem. It's been around since the initial release of the serialization tests a year ago, and no one until now has even mentioned it - much less complained about it.
While we're on the issue of Jamfiles, I did change my local Jamfile for Comeau compilers to use static libraries and it seemed to get much better results. I could check in my change, but I seem to recall that a change was going to be made in the como ... vc-7_1 toolset to address this. So far the test results show this hasn't happened. So I'm confused as to whether to check in this change.
The changes were deeper than that. They involved changing the functionality of BBv1 itself. The changes I posted for this worked only as far as user-level building went. But they break test building. Doug said not to pursue spending time on that avenue, so I didn't. Yes, it is considerably easier to account for that particular issue in the project Jamfiles. So I'd say check in your changes.
I agree 100%. Any available resources should be invested in V2. The change is basically the same one we use to get CW-8-3 to work, and we've been using that for months satisfactorily, so I would expect it would be fine. I am confused as to where it should be checked in - in the normal place or in some branch? On the other hand, Comeau has been explicitly excluded from the list of "release" compilers. Based on this I was previously informed that we shouldn't worry about it as far as the release is concerned, and I believe we should stick to that view. That is, we're better off not changing the release requirements on the fly. I can check in the change to my Jamfile, but given the circumstances I think the best is to leave things as they are.
Robert Ramey

On Jul 29, 2005, at 10:56 PM, Rene Rivera wrote:
Robert Ramey wrote:
David Abrahams wrote:
Now that I read your description of history, it seems to be just an artifact of your unwillingness to use Rene's fix.
Let's make it simple: just fix bjam so that one can use the DEPENDS clause to condition the invocation of one test upon the successful completion of another test. That's the way I would expect the DEPENDS clause to work from looking at the bjam documentation. I never suggested it before because I assumed that the problem would solve itself with bjam v2. But that's not here yet.
I could certainly add something, like DEPENDS, that makes it seem easy. As long as Doug approves me messing with the build system at this time.
We shouldn't be changing the build system at this time. If there's a way to fix the serialization test ordering in the serialization Jamfile, we should do that. Otherwise, the tests should be fixed, marked-up, or removed: they make the library look less functional than it is.
The changes were deeper than that. They involved changing the functionality of BBv1 itself. The changes I posted for this worked only as far as user-level building went. But they break test building. Doug said not to pursue spending time on that avenue, so I didn't. Yes, it is considerably easier to account for that particular issue in the project Jamfiles. So I'd say check in your changes.
'tis not worth spending time on BBv1 for compilers that aren't release platforms.
Doug

"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
Now that I read your description of history, it seems to be just an artifact of your unwillingness to use Rene's fix.
Let's make it simple: just fix bjam so that one can use the DEPENDS clause to condition the invocation of one test upon the successful completion of another test.
bjam doesn't need to be fixed to allow that.
That's the way I would expect the DEPENDS clause to work from looking at the bjam documentation.
That _is_ how the DEPENDS rule works. However, you can't expect it to work the way *you* invoked it. The names you used are not the names of the actual Jam targets involved.
Rene spent quite some effort on figuring out how to let you specify ordering and make it work. I think you should make sure his efforts don't go to waste. If you need to be more comfortable with the code, get him to explain it until you understand it.
It's hard
...but not impossible...
to include code that one can't understand in a project one is responsible for. The real source of the problem is that bjam is hard to use and has lots of quirky behavior. I haven't dwelled upon it because I know that it is being worked on.
Then learn to understand it or remove the tests that depend on that behavior. The current situation is simply unacceptable.
While we're on the issue of Jamfiles, I did change my local Jamfile for Comeau compilers to use static libraries and it seemed to get much better results. I could check in my change, but I seem to recall that a change was going to be made in the como ... vc-7_1 toolset to address this. So far the test results show this hasn't happened. So I'm confused as to whether to check in this change.
I don't know the status of any planned como toolset changes. You should ask on the jamboost list.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

On Jul 28, 2005, at 3:30 AM, Aleksey Gurtovoy wrote:
Douglas Gregor writes:
The release is starting to shape up, and here is our final release schedule. We'll stick to it unless something goes horribly wrong.
Code freeze - 11pm EST on Sunday, July 24
Total freeze (including documentation) - 11pm EST on Wednesday, July 27
Final release - Monday, August 1st
What are we doing about a handful of the remaining regressions (17, http://engineering.meta-comm.com/boost-regression/CVS-HEAD/developer/summary_release.html)?
Nothing, at this point. Most of those regressions have been there for quite a while without being fixed, and most of those we know are harmless.
Doug

Douglas Gregor <doug.gregor@gmail.com> writes:
What are we doing about a handful of the remaining regressions (17, http://engineering.meta-comm.com/boost-regression/CVS-HEAD/developer/summary_release.html)?
Nothing, at this point. Most of those regressions have been there for quite a while without being fixed, and most of those we know are harmless.
We still don't know about the vc-7.1/Boost.Python regressions.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
We still don't know about the vc-7.1/Boost.Python regressions.
Don't we? http://tinyurl.com/8wo7r You didn't comment on this one...
Stefan

Stefan Slapeta <stefan@slapeta.com> writes:
David Abrahams wrote:
We still don't know about the vc-7.1/Boost.Python regressions.
Don't we?
I know, but until I have tried the hotfix myself and reproduced the problem it's hard to tell what's really going on.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

We still don't know about the vc-7.1/Boost.Python regressions.
The build that was in progress when I replied the other day seemed to succeed (or at least that's what I thought; maybe I didn't look closely enough). Now it's showing as failing again, which is very strange. I'll look into it and get back to you as soon as I can.
Martin

We still don't know about the vc-7.1/Boost.Python regressions.
I'm presuming this is related to the compiler hotfix that had been installed on that machine? If so it looks like a new compiler bug; however, given the dire warnings MS places on its web site about that fix, I don't think we should get too exercised about it. The real question is: is there any sense in which the compiler could be correct in its error? Are there any new DRs that could be affecting things here? I believe the answer to both is "no", and the location of the error message is particularly strange (it occurs at the declaration of a typedef that doesn't try to access any base classes, just define what the type of the base class actually is).
Not sure if this helps,
John.

John Maddock wrote:
We still don't know about the vc-7.1/Boost.Python regressions.
I'm presuming this is related to the compiler hotfix that had been installed on that machine? If so it looks like a new compiler bug; however, given the dire warnings MS places on its web site about that fix, ...
Sorry to interrupt, but which hotfix is that?

"Peter Dimov" <pdimov@mmltd.net> writes:
John Maddock wrote:
We still don't know about the vc-7.1/Boost.Python regressions.
I'm presuming this is related to the compiler hotfix that had been installed on that machine? If so it looks like a new compiler bug; however, given the dire warnings MS places on its web site about that fix, ...
Sorry to interrupt, but which hotfix is that?
http://tinyurl.com/8wo7r
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


I'm presuming this is related to the compiler hotfix that had been installed on that machine? If so it looks like a new compiler bug; however, given the dire warnings MS places on its web site about that fix, I don't think we should get too exercised about it. The real question is: is there any sense in which the compiler could be correct in its error? Are there any new DRs that could be affecting things here? I believe the answer to both is "no", and the location of the error message is particularly strange (it occurs at the declaration of a typedef that doesn't try to access any base classes, just define what the type of the base class actually is).
I have noticed that when vc 7.1 overruns its internal limits (as in the case the hotfix fixed for us) it does not always give an internal compiler error. Yesterday one of our boxes where the developer hadn't installed the hotfix tried compiling and was faced with similarly bizarre compiler errors claiming undefined variables in the very file we needed the fix to compile. These were actually defined, and installing the hotfix made this problem go away. So, in short, all very odd, and it seems the fix breaks even more obscure things than it fixes. Unfortunately, once installed it doesn't appear to be able to be uninstalled on its own, so I can't test right now whether this will make the problems go away.
Martin

Jason Shirk from the vc compiler team was kind enough to get in contact about this issue, and it's due to an unfortunate regression in the compiler caused by the hotfix. I've pulled vc7.1 from my testing for now until I get hold of another hotfix that fixes the regression.
Martin

Douglas Gregor writes:
On Jul 28, 2005, at 3:30 AM, Aleksey Gurtovoy wrote:
Douglas Gregor writes:
The release is starting to shape up, and here is our final release schedule. We'll stick to it unless something goes horribly wrong.
Code freeze - 11pm EST on Sunday, July 24
Total freeze (including documentation) - 11pm EST on Wednesday, July 27
Final release - Monday, August 1st
What are we doing about a handful of the remaining regressions (17, http://engineering.meta-comm.com/boost-regression/CVS-HEAD/developer/summary_release.html)?
Nothing, at this point. Most of those regressions have been there for quite a while without being fixed, and most of those we know are harmless.
This (the presence of regressions) needs to be made clear in the release notes, then.
--
Aleksey Gurtovoy
MetaCommunications Engineering
participants (12)
- Aleksey Gurtovoy
- David Abrahams
- Doug Gregor
- Douglas Gregor
- John Maddock
- Jonathan Turkanis
- Martin Slater
- Martin Wille
- Peter Dimov
- Rene Rivera
- Robert Ramey
- Stefan Slapeta