Re: [boost] Boost 1.36.0 release notice

Joe Gottman wrote:
Beman Dawes wrote:
Boost 1.36.0 has been released and is available from SourceForge. See http://sourceforge.net/projects/boost/
This release includes four new libraries:
I just downloaded version 1.36, and the release history is completely blank.
Grrr... One of my highest priority tasks for the next release is going to be to get a system in place that will automatically scan a release snapshot for the presence of certain files, and optionally verify size, date, and other properties. Thanks for the heads up. --Beman
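(For illustration only: such a scan can be little more than a walk over the snapshot against a manifest of expected files. A minimal sketch in C++ using Boost.Filesystem; the snapshot name and manifest entries below are made up, not the actual release layout.)

#include <boost/filesystem.hpp>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

namespace fs = boost::filesystem;

int main()
{
    // Root of the unpacked release snapshot (hypothetical name).
    fs::path snapshot("boost_1_36_0");

    // In practice the manifest would be generated from a known-good
    // release rather than hard-coded; these entries are illustrative.
    std::vector<std::string> expected;
    expected.push_back("index.htm");
    expected.push_back("doc/html/index.html");
    expected.push_back("libs/libraries.htm");

    int errors = 0;
    for (std::size_t i = 0; i < expected.size(); ++i)
    {
        fs::path p = snapshot / expected[i];
        if (!fs::exists(p))
        {
            std::cout << "missing: " << expected[i] << "\n";
            ++errors;
        }
        else if (fs::is_regular_file(p) && fs::file_size(p) == 0)
        {
            std::cout << "empty:   " << expected[i] << "\n";
            ++errors;
        }
    }
    return errors == 0 ? 0 : 1;
}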

Beman Dawes wrote:
Joe Gottman wrote:
Beman Dawes wrote:
Boost 1.36.0 has been released and is available from SourceForge. See http://sourceforge.net/projects/boost/
This release includes four new libraries:
I just downloaded version 1.36, and the release history is completely blank.
Grrr...
One of my highest priority tasks for the next release is going to be to get a system in place that will automatically scan a release snapshot for the presence of certain files, and optionally verify size, date, and other properties.
From the various posts I have read related to the release process, it sounds as if there is still a fair amount of manual work to be done. Not only does this make life hard for release managers, but it is also quite error-prone, as this example demonstrates. Why can't the packaging process be automated, making it fully reproducible? Trying to increase the number of tests run on the final package attacks the problem from the wrong end, IMO. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

on Fri Aug 15 2008, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
Why can't the packaging process be automated, making it fully reproducible? Trying to increase the number of tests run on the final package attacks the problem from the wrong end, IMO.
reproducibility != correctness. I agree that automation is a good idea, but it might also be a good idea to test the results of that automation, neh? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
on Fri Aug 15 2008, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
Why can't the packaging process be automated, making it fully reproducible? Trying to increase the number of tests run on the final package attacks the problem from the wrong end, IMO.
reproducibility != correctness.
I agree that automation is a good idea, but it might also be a good idea to test the results of that automation, neh?
Indeed. My point, however, was that I would *first* automate, *then* validate. The point of reproducibility is to cut down on the amount of (repeated) validation you have to do afterwards, no? :-) Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

on Fri Aug 15 2008, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
David Abrahams wrote:
on Fri Aug 15 2008, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
Why can't the packaging process be automated, making it fully reproducible? Trying to increase the number of tests run on the final package attacks the problem from the wrong end, IMO.
reproducibility != correctness.
I agree that automation is a good idea, but it might also be a good idea to test the results of that automation, neh?
Indeed. My point, however, was that I would *first* automate, *then* validate. The point of reproducibility is to cut down on the amount of (repeated) validation you have to do afterwards, no? :-)
Makes sense to me. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Stefan Seefeld wrote:
Beman Dawes wrote:
Joe Gottman wrote:
Beman Dawes wrote:
Boost 1.36.0 has been released and is available from SourceForge. See http://sourceforge.net/projects/boost/
This release includes four new libraries:
I just downloaded version 1.36, and the release history is completely blank.
Grrr...
One of my highest priority tasks for the next release is going to be to get a system in place that will automatically scan a release snapshot for the presence of certain files, and optionally verify size, date, and other properties.
From the various posts I have read related to the release process, it sounds as if there is still a fair amount of manual work to be done. Not only does this make life hard for release managers, but it is also quite error-prone, as this example demonstrates.
Yep.
Why can't the packaging process be automated, making it fully reproducible?
That's what I've been trying to do, with some success, since last October. But the lengthy tool chains involved keep breaking, forcing manual intervention. Case in point, the doc build tool chain broke for this release, forcing a really messy workaround. Joel is trying to fix it now, but it is slow going since the test cases that fail for others work for him. That's been a battle for some time now with doc builds; configurations that work for one person don't work for another. It is also hard to automate processes that have to switch back and forth between Windows and Linux. Again, the tool chain is unreliable.
Trying to increase the number of tests run on the final package attacks the problem from the wrong end, IMO.
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing. Thanks, --Beman
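(For illustration only: the "component silently disappears" failure mode can be caught mechanically by diffing the file list of a new snapshot against the previous release. A sketch, again using Boost.Filesystem; the directory names are assumptions, and on a real tree renamed files would also show up and need manual review.)

#include <boost/filesystem.hpp>
#include <iostream>
#include <set>
#include <string>

namespace fs = boost::filesystem;

// Collect the paths of all regular files under root, relative to root
// (assumes a single path separator follows the root name).
static std::set<std::string> list_tree(const fs::path& root)
{
    std::set<std::string> files;
    fs::recursive_directory_iterator it(root), end;
    for (; it != end; ++it)
        if (fs::is_regular_file(it->path()))
            files.insert(it->path().string().substr(root.string().size() + 1));
    return files;
}

int main()
{
    // Hypothetical names for the previous and new release snapshots.
    std::set<std::string> previous = list_tree("boost_1_35_0");
    std::set<std::string> current = list_tree("boost_1_36_0");

    // Anything present in the last release but absent now is a candidate
    // for "a tool change stopped producing it and nobody noticed".
    for (std::set<std::string>::const_iterator i = previous.begin();
         i != previous.end(); ++i)
        if (current.find(*i) == current.end())
            std::cout << "gone since last release: " << *i << "\n";
}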

Beman Dawes wrote:
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing.
How about subjecting the tool chain to the same process as libraries? That would be: a) proposal for boost tool - e.g. docbook. b) enough implementation, code, and test to request formal review c) normal formal review process d) normal acceptance process. e) normal test procedure. Test procedure would be similar to that of libraries: test files, expected results, etc. When all tests are passing, the release manager would have the option of permitting the trunk version to be rolled into the release-ready version. This would work well with other testing and release procedures. That is, testing and release would use the last released tool version, NOT the one being refined in the trunk. This would be in line with what I hope will be the future of boost, in that libraries are tested against the last release and rolled into the release-ready version as they prove that they are ready for prime time. Robert Ramey
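(For illustration only: the "test files, expected results" procedure for a doc tool is essentially a golden-file test -- run the tool on a fixed input and compare its output byte-for-byte against a checked-in expected file. A sketch in C++; the tool command line and file names are hypothetical.)

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

// Read a whole file into a string (binary mode, so comparison is exact).
static std::string read_file(const char* path)
{
    std::ifstream in(path, std::ios::binary);
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}

int main()
{
    // Invoke the tool under test; this command line is made up.
    if (std::system("doctool test1.xml > actual.html") != 0)
    {
        std::cerr << "tool failed to run\n";
        return 1;
    }

    // Compare against the checked-in golden output.
    if (read_file("actual.html") != read_file("expected/test1.html"))
    {
        std::cerr << "output differs from expected/test1.html\n";
        return 1;
    }
    std::cout << "ok\n";
    return 0;
}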

Robert Ramey wrote:
Beman Dawes wrote:
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing.
How about subjecting the tool chain to the same process as libraries? That would be:
a) proposal for boost tool - e.g. docbook. b) enough implementation, code, and test to request formal review c) normal formal review process d) normal acceptance process. e) normal test procedure.
Test procedure would be similar to that of libraries: test files, expected results, etc. When all tests are passing, the release manager would have the option of permitting the trunk version to be rolled into the release-ready version.
This would work well with other testing and release procedures. That is, testing and release would use the last released tool version, NOT the one being refined in the trunk. This would be in line with what I hope will be the future of boost, in that libraries are tested against the last release and rolled into the release-ready version as they prove that they are ready for prime time.
I second that. And this is something I've been moving some of the build and test tools towards. For example, bjam follows an abbreviated version of the above process. And I'm working on setting up some formal testing of bjam, and adding some of the test tools and doc tools along the way. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Robert Ramey wrote:
Beman Dawes wrote:
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing.
How about subjecting the tool chain to the same process as libraries? That would be:
a) proposal for boost tool - e.g. docbook. b) enough implementation, code, and test to request formal review c) normal formal review process d) normal acceptance process. e) normal test procedure.
Test procedure would be similar to that of libraries: test files, expected results, etc. When all tests are passing, the release manager would have the option of permitting the trunk version to be rolled into the release-ready version.
This would work well with other testing and release procedures. That is, testing and release would use the last released tool version, NOT the one being refined in the trunk. This would be in line with what I hope will be the future of boost, in that libraries are tested against the last release and rolled into the release-ready version as they prove that they are ready for prime time.
Yes, although the problems we've been having aren't so much with new tools as with existing tools that break unexpectedly (or, worse yet, silently). Also, the breaking change may be in a tool that Boosters don't maintain, such as Doxygen or xsltproc. --Beman

Beman Dawes wrote:
Robert Ramey wrote:
This would work well with other testing and release procedures. That is, testing and release would use the last released tool version, NOT the one being refined in the trunk. This would be in line with what I hope will be the future of boost, in that libraries are tested against the last release and rolled into the release-ready version as they prove that they are ready for prime time.
Yes, although the problems we've been having aren't so much with new tools as with existing tools that break unexpectedly (or, worse yet, silently).
Also, the breaking change may be in a tool that Boosters don't maintain, such as Doxygen or xsltproc. --Beman
The same could be said for libraries. Actually, to me, the most important part of the testing process is detecting when something that I depend upon changes. I complain about this all the time - but even if no one introduced breaking changes, there is always new stuff - new compilers, new versions of the STL, new OS variations, corrections in libraries which catch previously undetected errors of my own. My real point is that there is no reason that any tools that are used by boost should be treated any differently than any libraries used by boost.
--Beman

on Sun Aug 17 2008, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
My real point is that there is no reason that any tools that are used by boost should be treated any differently than any libraries used by boost.
Does that assertion have any practical implications for the issue being discussed? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
Beman Dawes wrote:
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing.
on Sun Aug 17 2008, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
My real point is that there is no reason that any tools that are used by boost should be treated any differently than any libraries used by boost.
Does that assertion have any practical implications for the issue being discussed?
LOL - of course it does. If regression testing were set up for boost tools, they would be demonstrated to be functioning as expected before they were used in the actual release process. Had this procedure been in place, the issue raised above would not have occurred. Robert Ramey

on Sun Aug 17 2008, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
David Abrahams wrote:
Beman Dawes wrote:
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing.
on Sun Aug 17 2008, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
My real point is that there is no reason that any tools that are used by boost should be treated any differently than any libraries used by boost.
Does that assertion have any practical implications for the issue being discussed?
LOL - of course it does. If regression testing were set up for boost tools, they would be demonstrated to be functioning as expected before they were used in the actual release process. Had this procedure been in place, the issue raised above would not have occurred.
Thanks for spelling that out; it wasn't obvious to me. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Robert Ramey wrote:
David Abrahams wrote:
Beman Dawes wrote:
Probably, but I don't know how to attack some of the tool problems otherwise. For example, the problem of people changing the tool chain without realizing it has an impact on the automated release tools, and the problem of the automated release tools no longer producing some component (like docs) because of a tool change, and no one noticing.
on Sun Aug 17 2008, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
My real point is that there is no reason that any tools that are used by boost should be treated any differently than any libraries used by boost.
Does that assertion have any practical implications for the issue being discussed?
LOL - of course it does.
Is there any reason the above exchange makes you laugh out loud?
If regression testing were set up for boost tools, they would be demonstrated to be functioning as expected before they were used in the actual release process. Had this procedure been in place, the issue raised above would not have occurred.
Last time I checked, boost.serialization had some automatic tests. Does that mean that no issue has ever occurred with boost.serialization, and no issue will ever occur in the future? I don't think so. Quoting Beman: Case in point, the doc build tool chain broke for this release, forcing a really messy workaround. Joel is trying to fix it now, but it is slow going since the test cases that fail for others work for him. So it appears that the problem is not that *testing infrastructure* is missing; it's that for some behaviours there are no tests, or those tests are not sufficiently reliable. Therefore, it does not seem like the generic advice of "we need tests" is going to help much -- there needs to be actual work on specific tests. And even if you add a new test for each issue encountered during the release process, it does not automatically mean that the resulting release packages need not be examined. For example, here at work we have very extensive automatic tests, but still, every user-visible package goes through manual QA -- and yes, it does check that documentation exists. - Volodya

Vladimir Prus wrote:
If regression testing were set up for boost tools, they would be demonstrated to be functioning as expected before they were used in the actual release process. Had this procedure been in place, the issue raised above would not have occurred.
Last time I checked, boost.serialization had some automatic tests. Does that mean that no issue has ever occurred with boost.serialization, and no issue will ever occur in the future? I don't think so.
Of course not, but I'm sure that the extensive testing of boost.serialization has avoided untold numbers of problems for users. And it has made the package much easier to develop. I believe that it would do the same for boost tools. It's well known that testing can't prove the absence of faults, only their existence. That doesn't make testing pointless.
Quoting Beman:
Case in point, the doc build tool chain broke for this release, forcing a really messy workaround. Joel is trying to fix it now, but it is slow going since the test cases that fail for others work for him.
So it appears that the problem is not that *testing infrastructure* is missing; it's that for some behaviours there are no tests, or those tests are not sufficiently reliable.
Take the case of boost book: as far as I know there is no regression testing to verify that it works when something it depends upon changes.
Therefore, it does not seem like the generic advice of "we need tests" is going to help much -- there needs to be actual work on specific tests.
My suggestion is a lot more than "we need tests". My suggestion is that boost tools should be subjected to the same procedures that boost libraries are. Using boost book as an example, I would like to see boost/tools/boostbook/test/Jamfile ... and see that boost book shows up in the test matrix just like any library does. I would like to see boost book "packaged" with documentation, examples, and tests so that users would feel confident using it for their own projects. If the "boost process" is good for libraries, I think it would be good for tools too. I would love to use boost book / quickbook for my own projects, but I don't feel it's polished enough to depend upon. The only argument I can see against this is that it seems to be a lot of extra work. I'm sympathetic to this; I used to think that. But the boost requirement for a separate test suite and the development of the serialization library made me realize I was wrong. I believe that extending the application of the "boost process" to the tools would save effort in the long run.
And even if you add a new test for each issue encountered during the release process, it does not automatically mean that the resulting release packages need not be examined. For example, here at work we have very extensive automatic tests, but still, every user-visible package goes through manual QA -- and yes, it does check that documentation exists.
No one has suggested that testing proves the absence of bugs. Only that unit testing is a cost-effective proposition. Robert Ramey

Robert Ramey wrote:
Quoting Beman:
Case in point, the doc build tool chain broke for this release, forcing a really messy workaround. Joel is trying to fix it now, but it is slow going since the test cases that fail for others work for him.
So it appears that the problem is not that *testing infrastructure* is missing; it's that for some behaviours there are no tests, or those tests are not sufficiently reliable.
Take the case of boost book: as far as I know there is no regression testing to verify that it works when something it depends upon changes.
I don't know, either. But a large part of the doc toolchain is outside of Boost -- e.g. DocBook and FOP. Those are huge, and it's not likely somebody will be able to create tests for those. So, manual inspection of results remains the only solution.
Therefore, it does not seem like the generic advice of "we need tests" is going to help much -- there needs to be actual work on specific tests.
My suggestion is a lot more than "we need tests". My suggestion is that boost tools should be subjected to the same procedures that boost libraries are. Using boost book as an example, I would like to see boost/tools/boostbook/test/Jamfile ... and see that boost book shows up in the test matrix just like any library does.
I think that the presence or lack of boost/tools/boostbook/test/Jamfile is an implementation detail.
I would like to see boost book "packaged" with documentation, examples, and tests so that users would feel confident using it for their own projects. If the "boost process" is good for libraries, I think it would be good for tools too. I would love to use boost book / quickbook for my own projects, but I don't feel it's polished enough to depend upon.
I'm positively sure I saw documentation for boost.book. Quickbook is also documented. There's tools/quickbook/test/Jamfile.v2, and some test files there. For Boost.Build, there's both documentation and an extensive test suite. So, it appears that your proposal boils down to: 1. Boost.Book should have tests 2. Tests for tools should be included in the main test matrix Have I left anything out?
The only argument I can see against this is that it seems to be a lot of extra work. I'm sympathetic to this; I used to think that. But the boost requirement for a separate test suite and the development of the serialization library made me realize I was wrong. I believe that extending the application of the "boost process" to the tools would save effort in the long run.
And even if you add a new test for each issue encountered during the release process, it does not automatically mean that the resulting release packages need not be examined. For example, here at work we have very extensive automatic tests, but still, every user-visible package goes through manual QA -- and yes, it does check that documentation exists.
No one has suggested that testing proves the absence of bugs.
I understood your prior statement ("the issue raised above would not have occurred") in exactly that way. I apologize if I misunderstood what you were saying. - Volodya

Vladimir Prus wrote:
Robert Ramey wrote:
Take the case of boost book: as far as I know there is no regression testing to verify that it works when something it depends upon changes.
I don't know, either. But a large part of the doc toolchain is outside of Boost -- e.g. DocBook and FOP. Those are huge, and it's not likely somebody will be able to create tests for those. So, manual inspection of results remains the only solution.
I'm not advocating that tests be created for those things outside of boost. I'm advocating tests for those things in boost which depend on things outside of boost, as well as for the things themselves. We do the same with our libraries: we write code which depends upon the STL, and in the course of testing we discover that we either made a mistake in using the STL, or that various versions of the STL have differing interpretations of ambiguous points in the standard, or have bugs of their own.
My suggestion is a lot more than "we need tests". My suggestion is that boost tools should be subjected to the same procedures that boost libraries are. Using boost book as an example, I would like to see boost/tools/boostbook/test/Jamfile ... and see that boost book shows up in the test matrix just like any library does.
I think that the presence or lack of boost/tools/boostbook/test/Jamfile is an implementation detail.
That is the essential difference in our viewpoints.
I would like to see boost book "packaged" with documentation, examples, and tests so that users would feel confident using it for their own projects. If the "boost process" is good for libraries, I think it would be good for tools too. I would love to use boost book / quickbook for my own projects, but I don't feel it's polished enough to depend upon.
I'm positively sure I saw documentation for boost.book. Quickbook is also documented. There's tools/quickbook/test/Jamfile.v2, and some test files there. For Boost.Build, there's both documentation and an extensive test suite.
So, it appears that your proposal boils down to: 1. Boost.Book should have tests 2. Tests for tools should be included in the main test matrix
Have I left anything out?
Not much. The only thing I might add is that I would like to see the directory structure (code/tests/headers, etc.) reflect the boost pattern for libraries. This would make it easy to verify that everything exists.
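(For illustration only, a tool laid out on the library pattern might look like the following; the exact names are hypothetical:

tools/boostbook/
    doc/        user documentation, built like any library's docs
    example/    small self-contained inputs users can copy
    src/        the stylesheets and code themselves
    test/       test inputs plus expected outputs, with a Jamfile
                so the tool shows up in the test matrix)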
No one has suggested that testing proves the absense of bugs.
I understood your prior statement ("the issue raised above would not have occurred") in exactly that way. I apologize if I misunderstood what you were saying.
Well, maybe I didn't say it right. The reason I chose this particular instance as an example was that I believe the application of the "boost process" would have made a difference here. I didn't mean that this would solve ALL problems. I believe that it would result in a net improvement in quality and productivity. Robert Ramey

Robert Ramey wrote:
LOL - of course it does. If regression testing were set up for boost tools, they would be demonstrated to be functioning as expected before they were used in the actual release process. Had this procedure been in place, the issue raised above would not have occurred.
This is a good point; in fact, is there any reason why the doc build shouldn't be part of the regression tests? Ah... but it requires external tools that might not be available (especially now that accumulators has introduced a dependency on LaTeX and Ghostscript) :-( John.
participants (7)
- Beman Dawes
- David Abrahams
- John Maddock
- Rene Rivera
- Robert Ramey
- Stefan Seefeld
- Vladimir Prus