[test] Trunk broken: What happened to test_exec_monitor?

Testing on Trunk is apparently broken as the test_exec_monitor target has been removed from Boost.Test Trunk as of revision #74642. A quick grep shows 22 Jamfiles and over 200 targets dependent upon this. What's happened? John.

Hi John, On Sunday, 2. October 2011 18:51:56 John Maddock wrote:
Testing on Trunk is apparently broken as the test_exec_monitor target has been removed from Boost.Test Trunk as of revision #74642.
The main issue seems to be that Gennadiy forgot to commit "decorators.cpp" in this commit:

    error: Unable to find file or target named
    error: 'decorators.cpp'
    error: referred from project at
    error: '/home/hunold/src/devel/boost/libs/test/build'
A quick grep shows 22 Jamfiles and over 200 targets dependent upon this.
Wow. That is right; those should probably link against "boost_unit_test_framework". Gennadiy?

Yours, Jürgen

--
Dipl.-Math. Jürgen Hunold          | IVE mbH
Software-Entwickler                | Lützerodestraße 10
Tel: +49 511 897668 33             | 30161 Hannover, Germany
Fax: +49 511 897668 29             | http://www.ivembh.de
juergen.hunold@ivembh.de           |
Geschäftsführer:                   | Sitz des Unternehmens: Hannover
Univ.-Prof. Dr.-Ing. Thomas Siefer | Amtsgericht Hannover, HRB 56965
PD Dr.-Ing. Alfons Radtke          |

Hi, On Sunday, 2. October 2011 19:43:03 Jürgen Hunold wrote:
Hi John,
On Sunday, 2. October 2011 18:51:56 John Maddock wrote:
Testing on Trunk is apparently broken as the test_exec_monitor target has been removed from Boost.Test Trunk as of revision #74642.
The main issue seems to be that Gennadiy forgot to commit "decorators.cpp" in this commit.
And gcc-4.6.2 fails to compile the changed execution_monitor.cpp:

    gcc.compile.c++ /home/hunold/src/devel/boost/bin.v2/libs/test/build/gcc-4.6/debug/execution_monitor.o
    In file included from /home/hunold/src/devel/boost/libs/test/src/execution_monitor.cpp:16:0:
    /home/hunold/src/devel/boost/boost/test/impl/execution_monitor.ipp: In member function ‘void boost::execution_monitor::vexecute(const boost::function<void()>&)’:
    /home/hunold/src/devel/boost/boost/test/impl/execution_monitor.ipp:1287:27: error: no matching function for call to ‘boost::execution_monitor::execute(boost::execution_monitor::vexecute(const boost::function<void()>&)::forward)’
    /home/hunold/src/devel/boost/boost/test/impl/execution_monitor.ipp:1287:27: note: candidate is:
    /home/hunold/src/devel/boost/boost/test/impl/execution_monitor.ipp:1151:1: note: int boost::execution_monitor::execute(const boost::function<int()>&)
    /home/hunold/src/devel/boost/boost/test/impl/execution_monitor.ipp:1151:1: note: no known conversion for argument 1 from ‘boost::execution_monitor::vexecute(const boost::function<void()>&)::forward’ to ‘const boost::function<int()>&’
Gennadiy?
Yours, Jürgen

Jürgen Hunold <juergen.hunold <at> ivembh.de> writes:
And gcc-4.6.2 fails to compile the changed execution_monitor.cpp
error: no matching function for call to ‘execution_monitor::execute(forward)’
Can you tell why it fails to perform implicit construction of boost::function<int ()>? Should I use an explicit one? Gennadiy

Hi Gennadiy, On Sunday, 2. October 2011 20:44:10 Gennadiy Rozental wrote:
Jürgen Hunold <juergen.hunold <at> ivembh.de> writes:
And gcc-4.6.2 fails to compile the changed execution_monitor.cpp
error: no matching function for call to ‘execution_monitor::execute(forward)’
Can you tell why it fails to perform implicit construction of boost::function<int ()>? Should I use explicit one?
It seems the nested struct is the problem. Clang trunk compiles, but warns:

    warning: template argument uses local type 'boost::execution_monitor::forward' [-Wlocal-type-template-args]

Moving the struct "forward" out of the function makes gcc compile this. Just putting it into the "detail" namespace works. Please find a quick patch attached (git diff from git svn).

Yours, Jürgen

John Maddock <boost.regex <at> virgin.net> writes:
Testing on Trunk is apparently broken as the test_exec_monitor target has been removed from Boost.Test Trunk as of revision #74642.
Test Execution Monitor has been deprecated for more than 5 years I believe (since 1.34). I do not believe it's being used anywhere but internally in boost.
A quick grep shows 22 Jamfiles and over 200 targets dependent upon this.
To switch to the test framework you really only need to change two lines:

#include <boost/test/included/test_execution_monitor.hpp> to #include <boost/test/included/unit_test.hpp>

and

int test_main() to BOOST_AUTO_TEST_CASE(test_main)

If no one minds I can go ahead and apply these. Gennadiy
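Concretely, the two-line change would look like this for a minimal test module. The header names are taken from the message above; the module name and the test body are illustrative, and the fragment needs Boost to build:

```cpp
// Before: deprecated Test Execution Monitor style.
// #include <boost/test/included/test_execution_monitor.hpp>
// int test_main(int argc, char* argv[]) {
//     BOOST_CHECK(2 + 2 == 4);
//     return 0;                         // zero means success under test_main
// }

// After: Unit Test Framework style.
#define BOOST_TEST_MODULE migrated_test   // module name is an assumption
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(test_main)           // keeps the old name, now a test case
{
    BOOST_CHECK(2 + 2 == 4);              // no return value needed
}
```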

Testing on Trunk is apparently broken as the test_exec_monitor target has been removed from Boost.Test Trunk as of revision #74642.
Test Execution Monitor has been deprecated for more than 5 years I believe (since 1.34). I do not believe it's being used anywhere but internally in boost.
A quick grep shows 22 Jamfiles and over 200 targets dependent upon this.
To switch to test framework you really only need to change 2 lines:
#include <boost/test/included/test_execution_monitor.hpp> to #include <boost/test/included/unit_test.hpp>
and
int test_main() to BOOST_AUTO_TEST_CASE(test_main)
If no one minds I can go ahead and apply these.
I'm not sure if it's that simple - a quick grep shows 815 files with a test_main. What should have happened is that:

* You would announce loud and clear that this feature was going to be removed, and then
* Work with library authors to remove all uses of this feature and verify that nothing is broken in the process.
* Merge the changes (and only these changes) to the release branch once everyone is happy.
* Only when all uses of the feature have been removed can the feature actually be removed from Trunk.

Any other procedure is sure to cause chaos, not only to Trunk, but all over again to the Release branch if we're not *very* careful.

So I think I'd like to see either:

1) This change is reverted, and the procedure above followed, or:
2) The test_exec_monitor is reinstated.

Option (2) might be easier - test_exec_monitor could be an alias for the unit test lib, and the headers could just declare an auto-unit-test case that calls test_main?

Whatever happens, the changes need to be verified as actually fixing the problem before being committed, and since the schedule for the next release has been announced we need to see this fixed ASAP. We simply can't afford a couple of weeks of thrashing around before Trunk is stable again.

Regards, John.

PS Even with current SVN I still get:

    compile-c-c++ ..\..\..\bin.v2\libs\test\build\msvc-10.0\debug\asynch-exceptions-on\threading-multi\decorators.obj
    decorators.cpp
    ..\src\decorators.cpp(16) : fatal error C1083: Cannot open include file: 'boost/test/impl/decorators.ipp': No such file or directory

:-(
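John's option (2) could be sketched as a compatibility header along these lines. This is a hypothetical shim, not actual Boost.Test code: it assumes the legacy source still defines test_main, that Boost is available, and the wrapper's name is invented:

```cpp
// Hypothetical test_exec_monitor compatibility header: turns the legacy
// test_main() entry point into a single auto-registered unit test case.
#define BOOST_TEST_MODULE test_exec_monitor_compat
#include <boost/test/included/unit_test.hpp>

// Supplied by the legacy test source that includes this header.
int test_main(int argc, char* argv[]);

BOOST_AUTO_TEST_CASE(test_main_forwarder)
{
    using boost::unit_test::framework::master_test_suite;
    // test_main reports success by returning zero.
    BOOST_CHECK(test_main(master_test_suite().argc,
                          master_test_suite().argv) == 0);
}
```

Combined with a Jamfile alias from test_exec_monitor to the unit test library, the 815 existing test_main sources could keep building unmodified.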

On Mon, Oct 3, 2011 at 12:03 PM, John Maddock <boost.regex@virgin.net> wrote:
Testing on Trunk is apparently broken as the test_exec_monitor target has been removed from Boost.Test Trunk as of revision #74642.
Test Execution Monitor has been deprecated for more than 5 years I believe (since 1.34). I do not believe it's being used anywhere but internally in boost.
A quick grep shows 22 Jamfiles and over 200 targets dependent upon this.
To switch to test framework you really only need to change 2 lines:
#include <boost/test/included/test_execution_monitor.hpp> to #include <boost/test/included/unit_test.hpp>
and
int test_main() to BOOST_AUTO_TEST_CASE(test_main)
If no one minds I can go ahead and apply these.
I'm not sure if it's that simple - a quick grep shows 815 files with a test_main. What should have happened is that:
* You would announce loud and clear that this feature was going to be removed, and then
* Work with library authors to remove all uses of this feature and verify that nothing is broken in the process.
* Merge the changes (and only these changes) to the release branch once everyone is happy.
* Only when all uses of the feature have been removed can the feature actually be removed from Trunk.
Any other procedure is sure to cause chaos, not only to Trunk, but all over again to the Release branch if we're not *very* careful.
So I think I'd like to see either:
1) This change is reverted, and the procedure above followed, or:
2) The test_exec_monitor is reinstated.
Option (2) might be easier - test_exec_monitor could be an alias for the unit test lib, and the headers could just declare an auto-unit-test case that calls test_main?
Whatever happens, the changes need to be verified as actually fixing the problem before being committed, and since the schedule for the next release has been announced we need to see this fixed ASAP. We simply can't afford a couple of weeks of thrashing around before Trunk is stable again.
Gennadiy, please revert all of your changes. This mess needs to be cleared up right away. Wholesale breakage of trunk isn't acceptable anytime, much less this late in a release cycle. --Beman

on Mon Oct 03 2011, Beman Dawes <bdawes-AT-acm.org> wrote:
On Mon, Oct 3, 2011 at 12:03 PM, John Maddock <boost.regex@virgin.net> wrote:
What should have happened is that:
* You would announce loud and clear that this feature was going to be removed, and then
* Work with library authors to remove all uses of this feature and verify that nothing is broken in the process.
* Merge the changes (and only these changes) to the release branch once everyone is happy.
* Only when all uses of the feature have been removed can the feature actually be removed from Trunk.
Any other procedure is sure to cause chaos, not only to Trunk, but all over again to the Release branch if we're not *very* careful.
Wholesale breakage of trunk isn't acceptable anytime, much less this late in a release cycle.
This is so obvious, and things like this have happened so many times, that I'm amazed they're still happening. Gennadiy, what do we have to do to get you to take appropriate care with respect to your dependent libraries' test results? Is there some philosophical disagreement with the expectations of the group that you just can't bring yourself to meet them? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
This is so obvious, and things like this have happened so many times, that I'm amazed they're still happening.
Not sure what you refer to. I have not made any major changes in many, many years.
Gennadiy, what do we have to do to get you to take appropriate care with respect to your dependent libraries' test results? Is there some philosophical disagreement with the expectations of the group that you just can't bring yourself to meet them?
Aside from the test_exec_monitor removal (which I'll reinstate for now), is there any other way in the current setup for me to check in and test my changes? There is always a chance that, due to compiler differences, trunk will be broken for a short period of time. As you said - this is so obvious. The only reason we are talking about this is that any changes I make are bound to have higher exposure (in comparison with other Boost libraries). Gennadiy

Hi Gennadiy, On Oct 3, 2011, at 2:09 PM, Gennadiy Rozental wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
This is so obvious, and things like this have happened so many times, that I'm amazed they're still happening.
Not sure what you refer to. I have not made any major changes in many, many years.
I wish that were the case Gennadiy. In actuality though, I sent you this email (partially reproduced below) following your large commit last May that caused, and continues to cause, significant problems for the MacOSX Intel toolset. I did copy you on this email. On May 18, 2010, at 3:12 PM, Belcourt, Kenneth wrote:
Hi,
The Sandia Darwin Intel testers have been stable for some time, but a recent change seems to have broken the tester. This test in Boost.Test seems to be the source of the problem (test_tools_test).
    brisc: kbelco$ pwd
    /Volumes/Scratch/kbelco/boost/results/boost/bin.v2/libs/test/test/test_tools_test.test/intel-darwin-11.0/debug
    brisc: kbelco$ more test_tools_test.output
    terminate called after throwing an instance of 'boost::system_error'
    Running 22 test cases...
    terminate called recursively
-- Noel

Belcourt, K. Noel <kbelco <at> sandia.gov> writes:
I wish that were the case Gennadiy. In actuality though, I sent you this email (partially reproduced below) following your large commit last May that caused, and continues to cause, significant problems for the MacOSX Intel toolset.
It was not a major change, though it might have touched a large number of files. And I do not believe this was a problem with the toolset. That said, I believe I have fixed the issue with this test module in the latest trunk. Gennadiy

On Oct 3, 2011, at 4:29 PM, Gennadiy Rozental wrote:
Belcourt, K. Noel <kbelco <at> sandia.gov> writes:
I wish that were the case Gennadiy. In actuality though, I sent you this email (partially reproduced below) following your large commit last May that caused, and continues to cause, significant problems for the MacOSX Intel toolset.
It was not a major change, though it might have touched a large number of files. And I do not believe this was a problem with the toolset.
That said, I believe I have fixed the issue with this test module in the latest trunk.
Thank you. -- Noel

on Mon Oct 03 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
This is so obvious, and things like this have happened so many times, that I'm amazed they're still happening.
Not sure what you refer to. I have not made any major changes in many, many years.
Call it my perception then, if you like. I don't have time right now to dig through history to prove it quantitatively. You have a reputation for committing changes that cause pervasive test failures in other libraries, and for doing so close to a release when it's more alarming and inconvenient than necessary. I know we've had situations like this on multiple occasions. I would like to think that the reactions you've received over the years would make you more cautious.
Gennadiy, what do we have to do to get you to take appropriate care with respect to your dependent libraries' test results? Is there some philosophical disagreement with the expectations of the group that you just can't bring yourself to meet them?
Aside from the test_exec_monitor removal (which I'll reinstate for now) is there any other way in current setup for me to check in and test my changes?
How about at least testing them before you check them in? Another thing you can do is test them on as many compilers as you can possibly get your hands on. That's what I do.
There is always a chance that, due to compiler differences, trunk will be broken for a short period of time.
Yes, but that isn't the case here, is it? I'm sure all compilers are pretty much equally unforgiving of the removal of a component that's in use.
As you said - this is so obvious. The only reason we are talking about this is that any changes I make are bound to have higher exposure (in comparison with other Boost libraries).
Yes, but luckily you also have at your disposal a suite of real-world usage tests that exactly checks for the particular breakages we're concerned with :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
libraries, and for doing so close to a release when it's more alarming and inconvenient than necessary. I know we've had situations like this
Frankly, I hoped that making changes on trunk would shield me from the need to keep an eye on the Boost release schedule. I can rarely sync my development with the Boost release schedule.
How about at least testing them before you check them in?
I did run my own test suite with the compiler I use for development. I can't realistically run the full regression test suite before every check-in. Not only because of time constraints, but also because I work on Boost development when I have a free window, on whatever computer I have at hand at the moment, and I need to check in changes to move them between different development environments.
Another thing you can do is test them on as many compilers as you can possibly get your hands on. That's what I do.
I usually test against msvc and one gcc version under cygwin (if available). Boost.Build is currently broken for me - I have to make quite a number of local changes to the trunk version to be able to run my own regression test suite, but some of the functionality is still broken (and somehow I am not planning to switch to something else). Many other NT compilers are not available to me. Linux setups I use only when I observe regressions - these are not trivial for me to get hold of, especially if I am not working in my usual setup. Gennadiy

On 10/04/2011 09:04 PM, Gennadiy Rozental wrote:
I did run my own test suite with the compiler I use for development. I can't realistically run the full regression test suite before every check-in. Not only because of time constraints, but also because I work on Boost development when I have a free window, on whatever computer I have at hand at the moment, and I need to check in changes to move them between different development environments.
The important bit is to compile the tests of all libraries affected by your library, that is to say all of them. If you can't do that, maybe you should ask for assistance to maintain your library.

Beman Dawes <bdawes <at> acm.org> writes:
Gennadiy, please revert all of your changes. This mess needs to be cleared up right away.
I'll reinstate the missing component for the time being.
Wholesale breakage of trunk isn't acceptable anytime, much less this late in a release cycle.
I thought the release is based on the release branch. Sometimes I do break trunk due to a missed commit or compiler differences, but these are quickly resolved. I need some way to make sure my changes are working. Gennadiy

on Mon Oct 03 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Beman Dawes <bdawes <at> acm.org> writes:
Gennadiy, please revert all of your changes. This mess needs to be cleared up right away.
I'll reinstate the missing component for the time being.
Wholesale breakage of trunk isn't acceptable anytime, much less this late in a release cycle.
I thought the release is based on the release branch.
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
Sometimes I do break trunk due to a missed commit or compiler differences, but these are quickly resolved. I need some way to make sure my changes are working.
But that can't have been the case here, can it? Surely if you'd run the whole boost regression suite on your local machine before and after your changes, you'd have seen the differences, no? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
That's why I always advocated independent library development. Until I make some kind of "release" of Boost.Test, I would prefer that only my own unit tests run against the trunk version.
Sometimes I do break trunk due to a missed commit or compiler differences, but these are quickly resolved. I need some way to make sure my changes are working.
But that can't have been the case here, can it? Surely if you'd run the whole boost regression suite on your local machine before and after your changes, you'd have seen the differences, no?
No, I can't. I have limited time I can spend working on Boost development. I cannot wait hours for the full regression test to finish even once, not to mention twice. I expect the regression test system to deal with this. I resolve issues once I observe them online. I do run my own regression tests and they pass. Gennadiy

On 10/4/2011 11:26 AM, Gennadiy Rozental wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Surely if you'd run the whole boost regression suite on your local machine before and after your changes, you'd have seen the differences, no?
No, I can't. I have limited time I can spend working on Boost development. I cannot wait hours for the full regression test to finish even once, not to mention twice. I expect the regression test system to deal with this. I resolve issues once I observe them online. I do run my own regression tests and they pass.
This is an unfortunate attitude, Gennadiy. As a maintainer of a critical piece of boost infrastructure, you have a greater responsibility than most to keep things in the green. Have you considered taking on an assistant maintainer who can help you test changes against trunk? -- Eric Niebler BoostPro Computing http://www.boostpro.com

Eric Niebler <eric <at> boostpro.com> writes:
On 10/4/2011 11:26 AM, Gennadiy Rozental wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Surely if you'd run the whole boost regression suite on your local machine before and after your changes, you'd have seen the differences, no?
No, I can't. I have limited time I can spend working on Boost development. I cannot wait hours for the full regression test to finish even once, not to mention twice. I expect the regression test system to deal with this. I resolve issues once I observe them online. I do run my own regression tests and they pass.
This is an unfortunate attitude, Gennadiy. As a maintainer of a critical piece of boost infrastructure, you have a greater responsibility than most to keep things in the green.
I make my changes when I have time to look at regression test reports and fix regressions, if any. Usually (unless I am doing some major changes) there are few and these go unnoticed. This time I wanted to remove some long-deprecated symbols. I might have handled that better, admittedly.
Have you considered taking on an assistant maintainer who can help you test changes against trunk?
I don't mind help, if anyone is volunteering, but I'm not sure how much it would help with new development. I would still need to run the full regression test on all platforms for every commit, wouldn't I? Gennadiy

On 10/4/2011 1:24 PM, Gennadiy Rozental wrote:
Eric Niebler <eric <at> boostpro.com> writes:
Have you considered taking on an assistant maintainer who can help you test changes against trunk?
I don't mind help, if anyone is volunteering, but I'm not sure how much it would help with new development. I would still need to run the full regression test on all platforms for every commit, wouldn't I?
Maybe not after *every* commit, but certainly after interface-breaking ones like this. If you don't have the time or the resources to do full Boost regression tests, then perhaps an assistant maintainer could help. -- Eric Niebler BoostPro Computing http://www.boostpro.com

on Tue Oct 04 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 10/4/2011 1:24 PM, Gennadiy Rozental wrote:
Eric Niebler <eric <at> boostpro.com> writes:
Have you considered taking on an assistant maintainer who can help you test changes against trunk?
I don't mind help, if anyone is volunteering, but I'm not sure how much it would help with new development. I would still need to run the full regression test on all platforms for every commit, wouldn't I?
Maybe not after *every* commit,
And certainly not on "all platforms" (whatever that means; aren't there essentially an infinite number?) In this case, one platform would have been enough. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
And certainly not on "all platforms" (whatever that means; aren't there essentially an infinite number?) In this case, one platform would have been enough.
An NT build wouldn't show up Linux crashes. And that is really the only big question mark now. Gennadiy

on Tue Oct 04 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
That's why I always advocated independent library development.
We all (well, many of us) want that. We're not there yet.
Sometimes I do break trunk due to a missed commit or compiler differences, but these are quickly resolved. I need some way to make sure my changes are working.
But that can't have been the case here, can it? Surely if you'd run the whole boost regression suite on your local machine before and after your changes, you'd have seen the differences, no?
No, I can't. I have limited time I can spend working on Boost development. I cannot wait hours for the full regression test to finish even once, not to mention twice. I expect the regression test system to deal with this. I resolve issues once I observe them online.
So you do have a philosophical disagreement with the expectations of the group. You think you ought to be able to use that development model, but everyone else expects library authors with boost dependents to test the dependents before they commit changes. So every time you do this and things break on trunk, it causes a big kerfuffle. At this point, the magnitude of the actual inconvenience to others is irrelevant; they're going to be upset because they've been through this with you over and over. Do you really think that after 5 years of waiting to remove this facility, kicking off a boost-wide test and looking for problems would have cost you more time than it's costing you to deal with all this fallout from the problems you've caused?
I do run my own regression tests and they pass.
The fact is that your (Boost) customers aren't happy when you develop this way. If you won't change your development model and your customers won't change their expectations, the only solution for them is to stop using Boost.Test... which I did long ago, for this very reason. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
So you do have a philosophical disagreement with the expectations of the group. You think you ought to be able to use that development model, but everyone else expects library authors with boost dependents to test the dependents before they commit changes.
I think that even the existing system can be easily improved by testing all components against the release branch version of their dependents. After all, this is all everyone cares about. Once this passes, the library author is free to push the changes into the release branch.
So every time you do this and things break on trunk, it causes a big kerfuffle. At this point, the magnitude of the actual inconvenience to others is irrelevant; they're going to be upset because they've been through this with you over and over.
I think you might be overstating the severity of the problem. Is there any way to see in the regression report which libraries are still affected by Boost.Test changes (since I reverted the test_exec_monitor removal)? If not, I guess I'll try to run a full test with msvc tonight and see what shows up. Gennadiy

on Tue Oct 04 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
So every time you do this and things break on trunk, it causes a big kerfuffle. At this point, the magnitude of the actual inconvenience to others is irrelevant; they're going to be upset because they've been through this with you over and over.
I think you might be overstating the severity of the problem.
I think you might be failing to read what I wrote. My point is that the actual severity of the problem has become irrelevant at this point. You're going to cause cultural friction even by creating a small problem because people are frustrated.
Is there any way to see in the regression report which libraries are still affected by Boost.Test changes (since I reverted the test_exec_monitor removal)?
I don't know any more about what regression reports can be seen than you do. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

So you do have a philosophical disagreement with the expectations of the group. You think you ought to be able to use that development model, but everyone else expects library authors with boost dependents to test the dependents before they commit changes.
I think that even the existing system can be easily improved by testing all components against the release branch version of their dependents. After all, this is all everyone cares about. Once this passes, the library author is free to push the changes into the release branch.
That's fine until you push a new version of Boost.Test to the release branch and then it's the release branch that breaks.... John.

John Maddock <boost.regex <at> virgin.net> writes:
So you do have a philosophical disagreement with the expectations of the group. You think you ought to be able to use that development model, but everyone else expects library authors with boost dependents to test the dependents before they commit changes.
I think that even the existing system can be easily improved by testing all components against the release branch version of their dependents. After all, this is all everyone cares about. Once this passes, the library author is free to push the changes into the release branch.
That's fine until you push a new version of Boost.Test to the release branch and then it's the release branch that breaks....
That will be my problem (even though the infrastructure in theory can help as well). Once I'm ready to do my release, I will have to make sure the release does not break. First I can make sure trunk builds fine (doing a full regression test on as many platforms as I can get my hands on). Next I need to push the necessary changes to dependent libraries into the release. It's a process, specifically called integration, and it's not simple, but thankfully I'll have to do it once, and not before every commit. Projects using DVCS go through this procedure routinely. Gennadiy

It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
That's why I always advocated independent library development.
We all (well, many of us) want that. We're not there yet.
Unless I'm missing something, that model doesn't entirely solve these problems; we'd have a situation where:

* Everyone tests independently against the last release, and everything on its own looks great.
* A change that removes features gets merged to release and.... oops, now the release branch is broken.

So we'd need some kind of integration testing as well.... Cheers, John.

On Wed, Oct 5, 2011 at 10:37 AM, John Maddock <boost.regex@virgin.net> wrote:
Unless I'm missing something, that model doesn't entirely solve these problems, we'd have a situation where:
* Everyone tests independently against the last release, and everything on its own looks great.
* A change that removes features gets merged to release and.... oops, now the release branch is broken.
So we'd need some kind of integration testing as well....
Some projects test an update with build farms before that update gets merged (into trunk or release). Can't Boost do the same? Olaf

on Wed Oct 05 2011, Olaf van der Spek <ml-AT-vdspek.org> wrote:
On Wed, Oct 5, 2011 at 10:37 AM, John Maddock <boost.regex@virgin.net> wrote:
Unless I'm missing something, that model doesn't entirely solve these
problems, we'd have a situation where:
* Everyone tests independently against the last release, and everything on its own looks great. * A change that removes features gets merged to release and.... oops, now the release branch is broken.
So we'd need some kind of integration testing as well....
Some projects test an update with build farms before that update gets merged (into trunk or release). Can't Boost do the same?
Yes -- Dave Abrahams BoostPro Computing http://www.boostpro.com

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Dave Abrahams Sent: Tuesday, October 04, 2011 8:16 PM To: boost@lists.boost.org Subject: Re: [boost] [test] Trunk broken: What happened to test_exec_monitor?
on Tue Oct 04 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
That's why I always advocated independent library development.
We all (well, many of us) want that. We're not there yet.
Sometimes I do break trunk due to a missed commit or compiler differences, but these are quickly resolved. I need some way to make sure my changes are working.
But that can't have been the case here, can it? Surely if you'd run the whole boost regression suite on your local machine before and after your changes, you'd have seen the differences, no?
No, I can't. I have limited time I can spend working on Boost development. I cannot wait hours for a full regression test to finish even once, not to mention twice. I expect the regression test system to deal with this. I resolve issues once I observe them online.
So you do have a philosophical disagreement with the expectations of the group. You think you ought to be able to use that development model, but everyone else expects library authors with boost dependents to test the dependents before they commit changes.
So every time you do this and things break on trunk, it causes a big kerfuffle. At this point, the magnitude of the actual inconvenience to others is irrelevant; they're going to be upset because they've been through this with you over and over.
Do you really think that after 5 years of waiting to remove this facility, kicking off a boost-wide test and looking for problems would have cost you more time than it's costing you to deal with all this fallout from the problems you've caused?
I do run my own regression tests and they pass.
The fact is that your (Boost) customers aren't happy when you develop this way. If you won't change your development model and your customers won't change their expectations, the only solution for them is to stop using Boost.Test... which I did long ago, for this very reason.
I'd be reluctant to stop using Boost.Test - it does what it says on the tin. And I think there are very big advantages for every Boost library to use it too - it's the devil that we know. But this makes it a special case that every library will be dependent on it, so deprecation needs to have a really good reason, and be really, really well tested. I've been annoyed to have to change hundreds of projects to accommodate the changes you want to make. Ok - the changes are small, but "if it ain't broke, don't fix it". Is there a really, really, really good case for making these changes? Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
I've been annoyed to have to change hundreds of projects to accommodate the changes you want to make. Ok - the changes are small, but "if it ain't broke, don't fix it".
I am going to make all these changes myself. It's really pretty much just a global replacement. Now that trunk is (getting) healthy, it should take me maybe a couple more days to finish the move. Gennadiy P.S. Some of the changes are already in place: BOOST_MESSAGE and unit_test_framework are gone.
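The "global replacement" Gennadiy mentions could be sketched as a pair of one-liners. This is purely illustrative (the variable name and default path are assumptions, not anything from the thread); it simply renames the deprecated macro and namespace across a source tree:

```shell
#!/bin/sh
# Hypothetical sketch of the global replacement: rename the deprecated
# Boost.Test macro and namespace across a checkout. BOOST_SRC is an
# illustrative variable; point it at a trunk checkout or test directory.
BOOST_SRC=${BOOST_SRC:-.}
grep -rl 'BOOST_MESSAGE' "$BOOST_SRC" | xargs -r sed -i 's/\bBOOST_MESSAGE\b/BOOST_TEST_MESSAGE/g'
grep -rl 'unit_test_framework' "$BOOST_SRC" | xargs -r sed -i 's/\bunit_test_framework\b/unit_test/g'
```

Note that `BOOST_TEST_MESSAGE` does not contain `BOOST_MESSAGE` as a substring, so the first replacement is safe to re-run; `\b` word boundaries guard the namespace rename.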

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental Sent: Wednesday, October 05, 2011 5:53 PM To: boost@lists.boost.org Subject: Re: [boost] [test] Trunk broken: What happened to test_exec_monitor?
Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
I've been annoyed to have to change hundreds of projects to accommodate the changes you want to make. Ok - the changes are small, but "if it ain't broke, don't fix it".
I am going to make all these changes myself. It's really pretty much just a global replacement. Now that trunk is (getting) healthy, it should take me maybe a couple more days to finish the move.
But what about all the hundreds of my personal projects that need changing - you can't change those! Paul

Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
But what about all the hundreds of my personal projects that need changing - you can't change those!
Which interface specifically are you concerned about? Gennadiy

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gennadiy Rozental Sent: Thursday, October 06, 2011 1:10 PM To: boost@lists.boost.org Subject: Re: [boost] [test] Trunk broken: What happened to test_exec_monitor?
Paul A. Bristow <pbristow <at> hetp.u-net.com> writes:
But what about all the hundreds of my personal projects that need changing - you can't change those!
Which interface specifically are you concerned about?
Well, just that my test projects that used to work now don't without some Boost.Test changes. BOOST_MESSAGE for one, and the include files are different, and the new (and more convenient) test structure is needed. But you've done it now. Next time please can we have at least a one year notice, a big 'skull and crossbones' notice telling us exactly what we need to *do* to get things changed, and reminders before it happens. Boost.Test is different from other libraries - similar to config.hpp. Best of all, don't change anything ;-) Paul

On 10/4/2011 10:04 AM, Dave Abrahams wrote:
on Mon Oct 03 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
I thought release is based on release branch.
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
It also interferes with those of us who use trunk health to know when it's safe to merge fixes to the release branch. We're basically flying blind right now. I'd like to second the call to revert these changes. -- Eric Niebler BoostPro Computing http://www.boostpro.com

Eric Niebler <eric <at> boostpro.com> writes:
On 10/4/2011 10:04 AM, Dave Abrahams wrote:
on Mon Oct 03 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
I thought release is based on release branch.
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
It also interferes with those of us who use trunk health to know when it's safe to merge fixes to the release branch. We're basically flying blind right now.
Do you have any particular library in mind?
I'd like to second the call to revert these changes.
And while we're at it, let's revert all the changes to Boost.Build, so I can run my own regression test against the trunk version of it. I already reverted test_exec_monitor for now. I can reinstate other deprecated interfaces, but let's see if there is actually a problem (aside from boost.math). Gennadiy

On 10/4/2011 1:29 PM, Gennadiy Rozental wrote:
Eric Niebler <eric <at> boostpro.com> writes:
On 10/4/2011 10:04 AM, Dave Abrahams wrote:
on Mon Oct 03 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
I thought release is based on release branch.
It is. But as you've seen over the years, it causes an unworkable amount of upset and alarm when large numbers of failures appear on the trunk all at once, and people who would otherwise be dealing with release issues now have trunk issues to worry about.
It also interferes with those of us who use trunk health to know when it's safe to merge fixes to the release branch. We're basically flying blind right now.
Do you have any particular library in mind?
All of them? I use trunk health to determine when to merge to release for all of my libraries. And that's the current suggested development model for all Boost libraries, AFAIK.
I'd like to second the call to revert these changes.
And while we're at it, let's revert all the changes to Boost.Build, so I can run my own regression test against the trunk version of it.
I wasn't aware there was a problem with Boost.Build on trunk. What are you referring to? Link pls.
I already reverted test_exec_monitor for now. I can reinstate other deprecated interfaces, but let's see if there is actually a problem (aside from boost.math).
I think there is. All of my tests that use Boost.Test are crashing at runtime with any version of gcc. I don't use any deprecated Boost.Test interfaces, AFAIK. And my tests also fail to compile with msvc 9 and 8 due to this: ..\..\..\boost/test/utils/lazy_ostream.hpp(61) : warning C4181: qualifier applied to reference type; ignored c:\boost\org\trunk\libs\xpressive\test\./test.hpp(104) : see reference to class template instantiation 'boost::unit_test::lazy_ostream_impl<Pr evType,T,StorageT>' being compiled with [ PrevType=const boost::unit_test::lazy_ostream &, T=std::string, StorageT=const std::string & ] Gennadiy, you've heard enough people complain now. I implore you: please revert all your changes, and seriously consider testing them exhaustively on a branch against *all* of Boost before re-committing them. And on more compilers than just msvc-10. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Oct 4, 2011, at 2:52 PM, Eric Niebler wrote:
Gennadiy, you've heard enough people complain now. I implore you: please revert all your changes, and seriously consider testing them exhaustively on a branch against *all* of Boost before re-committing them. And on more compilers than just msvc-10.
I agree with Eric, please revert all your changes and be much more careful. This is ridiculous (http://www.boost.org/development/tests/trunk/developer/summary.html). A suggestion: once you've reverted all your recent changes, wait for tests to cycle and concentrate on fixing ALL the broken Boost.Test testers before you apply any of your new changes. I'd like to first see a green Boost.Test for all testers before we start down this path again. -- Noel

Belcourt, Kenneth <kbelco <at> sandia.gov> writes:
A suggestion: once you've reverted all your recent changes, wait for tests to cycle and concentrate on fixing ALL the broken Boost.Test testers before you apply any of your new changes. I'd like to first see a green Boost.Test for all testers before we start down this path again.
Which testers do you mean? Unit tests? Are you suggesting they were broken before I checked in my changes? Gennadiy

On Oct 4, 2011, at 5:10 PM, Gennadiy Rozental wrote:
Belcourt, Kenneth <kbelco <at> sandia.gov> writes:
A suggestion: once you've reverted all your recent changes, wait for tests to cycle and concentrate on fixing ALL the broken Boost.Test testers before you apply any of your new changes. I'd like to first see a green Boost.Test for all testers before we start down this path again.
Which testers do you mean? Unit tests? Are you suggesting they were broken before I checked in my changes?
Sorry if I wasn't clear. Boost.Test in trunk was not green for several Sandia testers on Darwin and perhaps other platforms but I can't recall them all at the moment. But I've been most concerned about Boost.Test on Darwin as a number of developers have asked me to debug Darwin Intel problems with their libraries that, from what I can tell, originate in Boost.Test. I've replied to those developers that I think we need Boost.Test on Darwin Intel to work correctly (passing all tests) before I can invest much time trying to debug their code. I suspect many of the other library problems will vanish once your library is passing all of its tests. -- Noel

Belcourt, Kenneth <kbelco <at> sandia.gov> writes:
Sorry if I wasn't clear. Boost.Test in trunk was not green for several Sandia testers on Darwin and perhaps other platforms but I can't recall them all at the moment. But I've been most concerned about Boost.Test on Darwin
Unfortunately I do not have access to this platform, but I'll be happy to work with someone who has issues. Gennadiy

Gennadiy Rozental <rogeeff <at> gmail.com> writes:
Unfortunately I do not have access to this platform, but I'll be happy to work with someone who has issues.
Err. Meant to say: "... with someone who has *access to resolve* issues." Gennadiy

Hi Gennadiy, On Oct 4, 2011, at 8:10 PM, Gennadiy Rozental wrote:
Belcourt, Kenneth <kbelco <at> sandia.gov> writes:
Sorry if I wasn't clear. Boost.Test in trunk was not green for several Sandia testers on Darwin and perhaps other platforms but I can't recall them all at the moment. But I've been most concerned about Boost.Test on Darwin
Unfortunately I do not have access to this platform, but I'll be happy to work with someone who has issues.
<small rant> I have very limited time to debug code for Boost. Frankly what little time I have I'd rather be working on library submissions to Boost. I'd hoped that providing access to a broad range of Boost testers would permit Boost developers to figure out most problems without me, by and large that seems to be working pretty well. </small rant> Attached is a stack trace from one of the failing tests. From what I can tell, there's one serious problem that impacts all the failing Boost.Test tests as they all seem to fail with the same seg. fault. From my perspective it looks like stack corruption (not alignment issues as that would trigger a bus error). Perhaps conditional compilation specific to Darwin Intel is mismatched between header and source? Another idea, what's different about the seven Boost.Test results that pass for Darwin Intel from the rest of those that fail? -- Noel

Belcourt, Kenneth <kbelco <at> sandia.gov> writes:
I have very limited time to debug code for Boost. Frankly what little time I have I'd rather be working on library submissions to Boost.
And I on developing new features. Spending it on guesswork based on close to zero information is not productive either.
Attached is a stack trace from one of the failing tests. From what I can tell, there's one serious problem that impacts all the failing Boost.Test tests as they
Yes, but it's unclear from the stack you provided. The stack looks fine to me.
Perhaps conditional compilation specific to Darwin Intel is mismatched between header and source?
What would that look like?
Another idea, what's different about the seven Boost.Test results that pass for Darwin Intel from the rest of those that fail?
Six of them are not using UTF at all and one does not do anything with it. Gennadiy

on Thu Oct 06 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Belcourt, Kenneth <kbelco <at> sandia.gov> writes:
I have very limited time to debug code for Boost. Frankly what little time I have I'd rather be working on library submissions to Boost.
And I on developing new features. Spending it on guesswork based on close to zero information is not productive either.
Attached is a stack trace from one of the failing tests. From what I can tell, there's one serious problem that impacts all the failing Boost.Test tests as they
Yes, but it's unclear from the stack you provided. The stack looks fine to me.
Gennadiy, I can loan you an account on an Intel Mac if you will take it from there. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
Gennadiy, I can loan you an account on an Intel Mac if you will take it from there.
Yes. I think I can handle it. Will it have the compiler there (the one which fails)? Gennadiy

on Fri Oct 07 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Gennadiy, I can loan you an account on an Intel Mac if you will take it from there.
Yes. I think I can handle it. Will it have the compiler there (the one which fails)?
I don't know what's there. Get me the information about what you need and I'll try to set it up. Noel, maybe you have all the details? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Eric Niebler <eric <at> boostpro.com> writes:
I wasn't aware there was a problem with Boost.Build on trunk. What are you referring to? Link pls.
Just go into the libs/test/test directory and try to run the tests on Windows. Off the top of my head I've seen: the install rule not found, the boost home directory missing, gcc not found, ar not found, link not found. I introduced some hacks to be able to run, but this is really inconvenient, since I can't check them in.
I think there is. All of my tests that use Boost.Test are crashing at runtime with any version of gcc. I don't use any deprecated Boost.Test
Hmm. It worked for me with the latest Cygwin gcc. Can you see where it fails? Otherwise I'll try to figure it out tonight.
interfaces, AFAIK. And my tests also fail to compile with msvc 9 and 8
Ah, right. I did not have time to fix the double-reference last night, but this should be simple. If I can't figure out the crashes I'll revert the changes. Gennadiy

Le 04/10/2011 22:52, Eric Niebler a écrit :
I wasn't aware there was a problem with Boost.Build on trunk. What are you referring to? Link pls.
I have some concerns not directly related to Boost.Build, but to boost/tools/regression. It looks to me to be of the same severity as the problems in Boost.Test, since it effectively prevents generating test reports on one computer. It's mostly related to breaking interface changes in Boost.Filesystem (and also the breaking interface change in the build system from bjam to b2). And it's been broken for several releases. -- Loïc

On 06.10.2011 10:40, Loïc Joly wrote:
Le 04/10/2011 22:52, Eric Niebler a écrit :
I wasn't aware there was a problem with Boost.Build on trunk. What are you referring to? Link pls.
I have some concerns not directly related to Boost.Build, but to boost/tools/regression. It looks to me to be of the same severity as the problems in Boost.Test, since it effectively prevents generating test reports on one computer. It's mostly related to breaking interface changes in Boost.Filesystem (and also the breaking interface change in the build system from bjam to b2). And it's been broken for several releases.
What do you mean re bjam->b2 change? It is supposed to be entirely transparent, since 'bjam' binary is still created in all the places where it was created previously. - Volodya

Le 20/09/2012 14:16, Vladimir Prus a écrit :
On 06.10.2011 10:40, Loïc Joly wrote:
Le 04/10/2011 22:52, Eric Niebler a écrit :
I wasn't aware there was a problem with Boost.Build on trunk. What are you referring to? Link pls.
I have some concerns not directly related to Boost.Build, but to boost/tools/regression. It looks to me to be of the same severity as the problems in Boost.Test, since it effectively prevents generating test reports on one computer. It's mostly related to breaking interface changes in Boost.Filesystem (and also the breaking interface change in the build system from bjam to b2). And it's been broken for several releases.
What do you mean re bjam->b2 change? It is supposed to be entirely transparent, since 'bjam' binary is still created in all the places where it was created previously.
Sorry, I don't remember the exact situation such a long time after my previous post, and I no longer have access to the computer where I had those problems. It might have been a consequence of copying an old bjam binary into the folder before discovering that bjam was replaced by b2. Is the old binary replaced in this case? Is there a message somewhere when we bootstrap?

It also interferes with those of us who use trunk health to know when it's safe to merge fixes to the release branch. We're basically flying blind right now.
Do you have any particular library in mind?
I'd like to second the call to revert these changes.
And while we at it, let's revert all the changes to Boost.Build, so I can run my own regression test against trunk version of it.
I already reverted test_exec_monitor for now. I can reinstate other deprecated interfaces, but let's see if there is actually a problem (aside boost.math).
Gennadiy, I really think you need to be more pro-active in tracking down possible breakages. I used Boost.Math as an example because I have stalled changes right now that I need to complete testing. But 10 seconds with grep shows me that: BOOST_MESSAGE is used 189 times in several libraries. unit_test_framework is used in 48 files. I'm sure there's more.... but I must go off and do something productive now, John.

John Maddock <boost.regex <at> virgin.net> writes:
I'm not sure if it's that simple - a quick grep shows 815 files with a test_main. What should have happened is that:
* You would announce loud and clear that this feature was going to be removed, and then.
Ok. I'll post the notification (though I'd imagine this thread should be loud enough ;) )
* Work with library authors to remove all uses of this feature and verify that nothing is broken in the process.
It'll take forever to sync between 20+ people. The change is trivial; I can do this myself.
* Merge the changes (and only these changes) to the release branch once everyone is happy.
Why do we need this? Different libraries have different release schedules. What if a library author does not want to push the trunk version to the release branch? Boost.Test has not had a release for 3+ years, I believe. In theory I do want these changes to be pushed into release, since otherwise I cannot release myself, but that is a separate step.
* Only when all uses of the feature have been removed, can the feature actually be removed from Trunk.
I can remove all the usage of this component in trunk. Trunk will build after that, and any library pushed into the release branch will build as well. I need my changes in trunk for now so that I can check they compile.
1) This change is reverted, and the procedure above followed, or:
Ok. I'll reinstate this component for the time being (later today), but I do plan to remove it.
PS even with current SVN I still get: ..\src\decorators.cpp(16) : fatal error C1083: Cannot open include file: 'boost/test/impl/decorators.ipp': No such file or directory
Right. This was renamed. Something is not checked in. Gennadiy

Ok. I'll reinstate this component for the time being (later today), but I do plan to remove it.
This is done. I'll send an announcement next and start switching existing use cases to UTF.
'boost/test/impl/decorators.ipp': No such file or directory
Right. This was renamed. Something is not checked in.
And this is fixed in trunk. Gennadiy

Ok. I'll reinstate this component for the time being (later today), but I do plan to remove it.
This is done. I'll send an announcement next and start switching existing use cases to UTF.
'boost/test/impl/decorators.ipp': No such file or directory
Right. This was renamed. Something is not checked in.
And this is fixed in trunk.
There are still issues: special_functions_test.cpp special_functions_test.cpp(16) : fatal error C1083: Cannot open include file: 'boost/test/test_case_template.hpp': No such file or directory and also: msvc.link ..\..\..\bin.v2\libs\math\test\test_legacy_nonfinite.test\msvc-10.0\debug\asynch-exceptions-on\threading-multi\test_legacy_nonfinite.exe test_legacy_nonfinite.obj : error LNK2019: unresolved external symbol "public: __thiscall boost::unit_test::ut_detail::auto_test_unit_registrar::auto_test_unit_registrar(class boost::unit_test::test_case *,unsigned long)" (??0auto_test_unit_registrar@ut_detail@unit_test@boost@@QAE@PAVtest_case@23@K@Z) referenced in function "void __cdecl `anonymous namespace'::`dynamic initializer for 'legacy_test_registrar53''(void)" (??__Elegacy_test_registrar53@?A0x669780f0@@YAXXZ) test_legacy_nonfinite.obj : error LNK2019: unresolved external symbol "public: __thiscall boost::unit_test::test_case::test_case(class boost::unit_test::basic_cstring<char const >,class boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused> const &)" (??0test_case@unit_test@boost@@QAE@V?$basic_cstring@$$CBD@12@ABV?$callback0@Uunused@ut_detail@unit_test@boost@@@12@@Z) referenced in function "class boost::unit_test::test_case * __cdecl boost::unit_test::make_test_case(class boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused> const &,class boost::unit_test::basic_cstring<char const >)" (?make_test_case@unit_test@boost@@YAPAVtest_case@12@ABV?$callback0@Uunused@ut_detail@unit_test@boost@@@12@V?$basic_cstring@$$CBD@12@@Z) ..\..\..\bin.v2\libs\math\test\test_legacy_nonfinite.test\msvc-10.0\debug\asynch-exceptions-on\threading-multi\test_legacy_nonfinite.exe : fatal error LNK1120: 2 unresolved externals John.

John Maddock <boost.regex <at> virgin.net> writes:
special_functions_test.cpp special_functions_test.cpp(16) : fatal error C1083: Cannot open include file: 'boost/test/test_case_template.hpp': No such file or directory
This one was deprecated as well, a long time ago. I can put it back, but it's empty. I'll reinstate it for now.
and also:
msvc.link ..\..\..\bin.v2\libs\math\test\test_legacy_nonfinite.test\msvc-10.0\ debug\asynch-exceptions-on\threading-multi\test_legacy_nonfinite.exe test_legacy_nonfinite.obj : error LNK2019: unresolved external symbol "public: __thiscall ... boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused> const &)"
Something is not right. I eliminated my header callback.hpp in favor of boost::function. There should not be any reference to the callback0 template. Is it possible to run a clean build? Gennadiy

special_functions_test.cpp special_functions_test.cpp(16) : fatal error C1083: Cannot open include file: 'boost/test/test_case_template.hpp': No such file or directory
This one was deprecated as well, a long time ago. I can put it back, but it's empty. I'll reinstate it for now.
Assuming that it really is empty, I removed those includes from Boost.Math, but now get: compile-c-c++ ..\..\..\bin.v2\libs\math\test\special_functions_test.test\msvc-10.0\debug\asynch-exceptions-on\threading-multi\special_functions_test.obj special_functions_test.cpp m:\data\boost\trunk\libs\math\test\sinc_test.hpp(64) : error C3861: 'BOOST_MESSAGE': identifier not found m:\data\boost\trunk\libs\math\test\sinc_test.hpp(65) : error C3861: 'BOOST_MESSAGE': identifier not found + lots of other errors, then: compile-c-c++ ..\..\..\bin.v2\libs\math\test\quaternion_mult_incl_test.test\msvc-10.0\debug\asynch-exceptions-on\threading-multi\quaternion_mult_incl_test.obj quaternion_mult_incl_test.cpp ..\quaternion\quaternion_mult_incl_test.cpp(16) : error C3083: 'unit_test_framework': the symbol to the left of a '::' must be a type ..\quaternion\quaternion_mult_incl_test.cpp(16) : error C2039: 'test_suite' : is not a member of 'boost'
and also:
msvc.link ..\..\..\bin.v2\libs\math\test\test_legacy_nonfinite.test\msvc-10.0\ debug\asynch-exceptions-on\threading-multi\test_legacy_nonfinite.exe test_legacy_nonfinite.obj : error LNK2019: unresolved external symbol "public: __thiscall ... boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused> const &)"
Something is not right. I eliminated my header callback.hpp in favor of boost::function. There should not be any reference to the callback0 template. Is it possible to run a clean build?
A clean build still gives: msvc.link ..\..\..\bin.v2\libs\math\test\test_legacy_nonfinite.test\msvc-10.0\debug\asynch-exceptions-on\threading-multi\test_legacy_nonfinite.exe test_legacy_nonfinite.obj : error LNK2019: unresolved external symbol "public: __thiscall boost::unit_test::ut_detail::auto_test_unit_registrar::auto_test_unit_registrar(class boost::unit_test::test_case *,unsigned long)" (??0auto_test_unit_registrar@ut_detail@unit_test@boost@@QAE@PAVtest_case@23@K@Z) referenced in function "void __cdecl `anonymous namespace'::`dynamic initializer for 'legacy_test_registrar53''(void)" (??__Elegacy_test_registrar53@?A0x669780f0@@YAXXZ) test_legacy_nonfinite.obj : error LNK2019: unresolved external symbol "public: __thiscall boost::unit_test::test_case::test_case(class boost::unit_test::basic_cstring<char const >,class boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused> const &)" (??0test_case@unit_test@boost@@QAE@V?$basic_cstring@$$CBD@12@ABV?$callback0@Uunused@ut_detail@unit_test@boost@@@12@@Z) referenced in function "class boost::unit_test::test_case * __cdecl boost::unit_test::make_test_case(class boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused> const &,class boost::unit_test::basic_cstring<char const >)" (?make_test_case@unit_test@boost@@YAPAVtest_case@12@ABV?$callback0@Uunused@ut_detail@unit_test@boost@@@12@V?$basic_cstring@$$CBD@12@@Z) BTW you can easily test these issues for yourself by building the Boost.Math tests - frankly all of my "boost time" the last few days has been taken up with chasing down Boost.Test issues - and I'm sure I'm not alone. Last time this happened - not that long ago as it happens - I rewrote a bunch of code to not use Boost.Test anymore. I now see that as a very wise move, even though there should be much better uses for my time. Frustrated yours, John.

John Maddock <boost.regex <at> virgin.net> writes:
special_functions_test.cpp special_functions_test.cpp(16) : fatal error C1083: Cannot open include file: 'boost/test/test_case_template.hpp': No such file or directory
This one was deprecated as well, a long time ago. I can put it back, but it's empty. I'll reinstate it for now.
Assuming that it really is empty, I removed those includes from Boost.Math, but now get:
Most of these issues are due to these test modules using interfaces deprecated 5+ years ago, which I'd like to remove now. Since you never bothered to switch to the updated interfaces, they came up now.
'BOOST_MESSAGE': identifier not found
BOOST_MESSAGE is deprecated. The correct name is BOOST_TEST_MESSAGE.
'unit_test_framework': the symbol to the left of a '::' must be a type
unit_test_framework is deprecated. The correct name is unit_test.
A clean build still gives: boost::unit_test::callback0<struct boost::unit_test::ut_detail::unused>
That's ... strange. I removed the header which defines this template. I am not sure how it is possible for you to observe this.
BTW you can easily test these issues for yourself by building the Boost.Math
I'll do later tonight.
tests - frankly all of my "boost time" the last few days has been taken up with chasing down Boost.Test issues
You can easily switch your local trunk back to the version of Boost.Test from a couple of weeks ago.
- and I'm sure I'm not alone.
I actually hope that there are few instances of deprecated interfaces being used. I did not hear any mention of them for a long time.
Last time this happened - not that long ago as it happens - I rewrote a bunch of code to not use Boost.Test anymore. I now see that as a very wise move, even though there should be much better uses for my time.
We are in the same boat: we both need to do some development, and we can't always be in sync. For your development I'd recommend testing against the release branch version of Boost.Test. I will not push my changes into release until a full regression test run is clean. If you do not mind, I can make changes tonight to your test modules so that they compile, unless you insist on using the deprecated interfaces. Regards, Gennadiy

On Tue, Oct 04, 2011 at 06:43:51PM +0000, Gennadiy Rozental wrote:
Most of these issues are due to these test modules using interfaces that were deprecated 5+ years ago, which I'd like to remove now. Since you never switched to the updated interfaces, the breakage is surfacing now.
BOOST_MESSAGE is deprecated. The correct name is BOOST_TEST_MESSAGE.
unit_test_framework is deprecated. The correct name is unit_test.
I actually hope that there are only a few instances of the deprecated interfaces still in use. I had not heard any mention of them for a long time.
How are these things "deprecated"? Shouldn't using deprecated functionality result in rather violent and blatantly obvious compiler noise? It seems like all users of these interfaces have been completely unaware of the deprecation. Was it announced only via docs/comments? To me it seems that the underlying cause isn't really the removal of deprecated things (which one might consider a maintainer's right, eventually), but that the deprecated things were not widely enough known to be deprecated. -- Lars Viklund | zao@acc.umu.se

Lars Viklund <zao <at> acc.umu.se> writes:
How are these things "deprecated"? Shouldn't using deprecated functionality result in rather violent and blatantly obvious compiler noise?
Ummm. How can I do this for a deprecated macro or namespace?
It seems like all users of them have been completely unaware of their deprecation. Was this only via docs/comments?
I can probably dig this out, but there was an announcement on the list here, and at some point it was part of the release notes (those disappeared once I reworked the docs to the new format). In the new documentation these names are not even mentioned.
To me it seems that the underlying cause isn't really the removal of deprecated things (which one might consider a maintainer's right, eventually), but that the deprecated things were not widely enough known to be deprecated.
It's possible. Even I forgot about some of them already ;) Gennadiy

on Tue Oct 04 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Lars Viklund <zao <at> acc.umu.se> writes:
How are these things "deprecated"? Shouldn't using deprecated functionality result in rather violent and blatantly obvious compiler noise?
Ummm. How can I do this for a deprecated macro or namespace?
There are ways to generate warnings. Not 100% reliable or portable ways, but still... -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 10/05/2011 12:59 AM, Gennadiy Rozental wrote:
I can probably dig this out, but there was an announcement on the list here, and at some point it was part of the release notes (those disappeared once I reworked the docs to the new format). In the new documentation these names are not even mentioned.
So no one is aware of anything being deprecated. Great. I hope you can see this is a problem.

This one was deprecated a long time ago as well. I can put it back, but it's empty. I'll reinstate it for now.
Assuming that it really is empty, I removed those includes from Boost.Math, but now get:
Most of these issues are due to these test modules using interfaces that were deprecated 5+ years ago, which I'd like to remove now. Since you never switched to the updated interfaces, the breakage is surfacing now.
I understand the desire to grandfather/remove deprecated interfaces. However:

1) We're not all following Boost.Test development, just trying our best to maintain old stuff. I suspect that, like most Boosters, I have no idea what is and isn't deprecated, and I certainly don't want to spend time rewriting old tests against a new interface.

2) It would have taken you 20 seconds at most to run a grep and discover whether any of these old interfaces were still in use.

3) I'll just point out that you won't see *any* online regression test results if you break the bjam build.

John.
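The 20-second check John describes is an ordinary recursive grep over the tree. The sketch below runs it against a throwaway scratch directory (an assumption of this example) rather than a real Boost checkout, so it is self-contained; against a checkout one would grep `libs/` instead.

```shell
# Build a scratch tree with one legacy and one migrated call site.
tmp=$(mktemp -d)
printf 'BOOST_MESSAGE("old");\n'      > "$tmp/legacy_test.cpp"
printf 'BOOST_TEST_MESSAGE("new");\n' > "$tmp/modern_test.cpp"

# -r: recurse, -l: list matching file names only. The old name is not a
# substring of the new one, so only legacy call sites match.
grep -rl 'BOOST_MESSAGE' "$tmp"

rm -rf "$tmp"
```

Running this prints only the path of legacy_test.cpp, i.e. the files that would break when the deprecated name is removed.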

John Maddock <boost.regex <at> virgin.net> writes:
1) We're not all following Boost.Test development, just trying our best to maintain old stuff. I suspect like most Boosters I have no idea what is and isn't deprecated, and I certainly don't want to have to spend time rewriting old tests to a new interface.
That's understandable. Too much time has passed, and even if it was prominently announced back then, no one remembers it now. The good thing is that it does not take much to change, and in fact I volunteer to do it myself. Gennadiy
participants (14)
- Belcourt, K. Noel
- Belcourt, Kenneth
- Beman Dawes
- Dave Abrahams
- Eric Niebler
- Gennadiy Rozental
- John Maddock
- Jürgen Hunold
- Lars Viklund
- Loïc Joly
- Mathias Gaunard
- Olaf van der Spek
- Paul A. Bristow
- Vladimir Prus