All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions?
Thanks, Aparna
All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions?
boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
if the boost.test developers push some changes, it would be highly appreciated if people could actually check that the change doesn't introduce any issues for other libraries. i've had a similar issue with boost.test in one of my boost libraries in the past, where all tests on the master branch failed, while tests on the develop branch passed. as the tests of many boost libraries depend on boost.test, i'd suggest running the complete tests of *all* boost libraries before pushing a change ...
---
fwiw, maybe one of the boost.test developers has an understanding of what's going on there? unfortunately i don't really have a lot of time to look into this atm ...
thanks a lot, tim
Le 03/10/15 16:21, Tim Blechmann a écrit :
All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions?
boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
The current master on boost.test is pointing to the released version 1.59. Were those problems already there for 1.59?
if the boost.test developers push some changes, it would be highly appreciated if people could actually check that the change doesn't introduce any issues for other libraries. i've had a similar issue with boost.test in one of my boost libraries in the past, where all tests on the master branch failed, while tests on the develop branch passed. as the tests of many boost libraries depend on boost.test, i'd suggest running the complete tests of *all* boost libraries before pushing a change ...
Right. OTOH, the runners are there on develop precisely so that such changes can be tested. Right now I do not know what the best strategy would be: resetting develop to the commit before the problems started, or fixing the issues. This does not help boost.test very much, since:
- we cannot take full benefit of the runners: as soon as there is an issue with boost.test, it breaks a lot of other libraries, which makes the current testing procedure of boost.test quite fragile. We need to deploy our own runners somehow.
- as a central library, any change in the design of boost.test spans a lot of libraries, including the not-so-maintained ones, which is a waste of effort.
It looks like the best way to cope with those issues is to have a boost.test2 (test3 in fact), but that is also a bit confusing for end users.
---
fwiw, maybe one of the boost.test developers has an understanding of what's going on there? unfortunately i don't really have a lot of time to look into this atm ...
To come back to your issue, I think the Jamfile needs some fixes. I will give it a try tonight and probably make a PR, if that sounds OK to you. Best, Raffi
Le 03/10/15 16:21, Tim Blechmann a écrit :
All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions?
boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
[snip]
---
fwiw, maybe one of the boost.test developers has an understanding of what's going on there? unfortunately i don't really have a lot of time to look into this atm ...
thanks a lot, tim
Me again,
So, if I change lockfree/test/Jamfile.v2:
- <library>../../test/build//boost_test_exec_monitor
+ <library>../../test/build//boost_unit_test_framework
I can compile it with C++11 support without any issue:
../../../b2 -j8 toolset=clang cxxflags="-stdlib=libc++ -std=c++11" linkflags="-stdlib=libc++"
I cannot compile it without C++11 support, though, for two reasons:
- lockfree commit 9f52c24 unconditionally uses <atomic>, but this header is available only with C++11 support
- boost.test itself references C++11 constructs.
For boost.test, Gennadiy and I have to come up with a solution.
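For the <atomic> part, a guard along the following lines would let lockfree fall back to Boost.Atomic in C++03 mode. This is only a sketch built on the Boost.Config feature macro BOOST_NO_CXX11_HDR_ATOMIC, not a tested patch, and the lockfree_detail namespace is made up for illustration:
#include <boost/config.hpp>

#ifdef BOOST_NO_CXX11_HDR_ATOMIC
// no C++11 <atomic> available: fall back to Boost.Atomic
#include <boost/atomic.hpp>
namespace lockfree_detail { using boost::atomic; }
#else
#include <atomic>
namespace lockfree_detail { using std::atomic; }
#endif
Best, Raffi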
On 10/3/2015 3:15 PM, Raffi Enficiaud wrote:
Le 03/10/15 16:21, Tim Blechmann a écrit :
All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions?
boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
[snip]
---
fwiw, maybe one of the boost.test developers has an understanding of what's going on there? unfortunately i don't really have a lot of time to look into this atm ...
thanks a lot, tim
Me again,
So, if I change lockfree/test/Jamfile.v2:
- <library>../../test/build//boost_test_exec_monitor
+ <library>../../test/build//boost_unit_test_framework
I can compile it with C++11 support without any issue:
../../../b2 -j8 toolset=clang cxxflags="-stdlib=libc++ -std=c++11" linkflags="-stdlib=libc++"
I cannot compile it without C++11 support, though, for two reasons:
- lockfree commit 9f52c24 unconditionally uses <atomic>, but this header is available only with C++11 support
- boost.test itself references C++11 constructs.
For boost.test, Gennadiy and I have to come up with a solution.
First, Boost Test cannot require C++11 support. If you want to create a Boost Test which does require C++11 support, make a Boost Test2 or whatever you want to call your new library that requires C++11 support. Others have said the same thing. It is beyond me how you or Gennadiy arbitrarily decided that libraries using Boost Test must run with C++11 support, when you both know that there are many Boost libraries that do not require or need C++11, and these libraries use Boost Test.
Second, if lockfree requires C++11 support and it tries to compile without it, then that is lockfree's problem and not Boost Test's problem.
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Edward Diener
Sent: 03 October 2015 22:11
To: boost@lists.boost.org
Subject: Re: [boost] boost.test regression or behavior change (was Re: Boost.lockfree)
On 10/3/2015 3:15 PM, Raffi Enficiaud wrote:
Le 03/10/15 16:21, Tim Blechmann a écrit :
All tests for lockfree in both master and develop branch seem to fail.
<snip>
For boost.test, Gennadiy and I have to come up with a solution.
First, Boost Test cannot require C++11 support. If you want to create a Boost Test which does require C++11 support, make a Boost Test2 or whatever you want to call your new library that requires C++11 support. Others have said the same thing. It is beyond me how you or Gennadiy arbitrarily decided that libraries using Boost Test must run with C++11 support, when you both know that there are many Boost libraries that do not require or need C++11, and these libraries use Boost Test.
+1
This is a major decision that requires discussion and agreement, and needs prior announcement at least a year ahead. Springing a major change like this without any notice or announcement is just not acceptable.
Paul
---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830
This is a major decision that requires discussion and agreement, and needs prior announcement at least a year ahead.
Springing a major change like this without any notice or announcement is just not acceptable.
We are not springing any major changes just yet. Nothing is in master. This appeared in develop partially because I forgot we need these workarounds, and partially because I no longer have access to a C++03 setup, so everything just worked for me.
Sooner rather than later we should have this discussion and set up a timeline. IMO it makes very little sense to continue to maintain C++03 workarounds. Boost code should be an example of how modern C++ libraries should look. And C++03 compatibility is directly in the way of this goal. Gennadiy
On 10/04/2015 12:18 PM, Gennadiy Rozental wrote:
Sooner rather than later we should have this discussion and set up a timeline. IMO it makes very little sense to continue to maintain C++03 workarounds. Boost code should be an example of how modern C++ libraries should look. And C++03 compatibility is directly in the way of this goal.
You appear to have missed the many discussions on this topic.
While Boost started out to design cutting-edge libraries, it has been caught by its own success. Today there is a large user base that still uses C++03, and that is unlikely to upgrade in the foreseeable future.
Therefore, the current consensus is that existing libraries should not increase their standards requirements. New libraries are free to decide their standards requirements (although it will probably be questioned during a formal review.)
That means that you have two options:
1. Add a new C++11-only test library (like Boost.Coroutine2.)
2. Maintain both a C++03 and a C++11 API within the same library (see the sketch below).
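Option 2 usually comes down to feature detection with the Boost.Config macros. A minimal sketch of the idea (the submit() function is hypothetical, purely for illustration; only the BOOST_NO_CXX11_RVALUE_REFERENCES macro is real):
#include <boost/config.hpp>
#include <string>

// C++03 path: always available
void submit(const std::string& name);

#ifndef BOOST_NO_CXX11_RVALUE_REFERENCES
// C++11 path: move-enabled overload, compiled only when the
// compiler supports rvalue references
void submit(std::string&& name);
#endif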
On 04/10/2015 12:09, Bjorn Reese wrote:
On 10/04/2015 12:18 PM, Gennadiy Rozental wrote:
Sooner rather than later we should have this discussion and set up a timeline. IMO it makes very little sense to continue to maintain C++03 workarounds. Boost code should be an example of how modern C++ libraries should look. And C++03 compatibility is directly in the way of this goal.
You appear to have missed the many discussions on this topic.
While Boost started out to design cutting-edge libraries, it has been caught by its own success. Today there is a large user base that still uses C++03, and that is unlikely to upgrade in the foreseeable future.
Therefore, the current consensus is that existing libraries should not increase their standards requirements. New libraries are free to decide their standards requirements (although it will probably be questioned during a formal review.)
That means that you have two options:
1. Add a new C++11-only test library (like Boost.Coroutine2.)
2. Maintain both a C++03 and a C++11 API within the same library.
+1.
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
As for testing in C++03 mode - that's easy, just use GCC's default compiler mode ;-)
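(Concretely: g++ up to and including 5.x compiles in its default gnu++98 mode when no -std flag is given, so a plain invocation along the lines below, with no cxxflags, exercises the C++03 path. The -j8 is just an example.)
b2 -j8 toolset=gcc
John.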
Le 04/10/15 13:38, John Maddock a écrit :
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
Also special in the sense that boost.test cannot take full benefit from the current test dashboard setup: we have to test all libraries before being able to push to develop, which means hours and hours of testing and infrastructure deployment/maintenance for a single push to a branch that is supposed to help us develop boost.test. To be frank, I do not think that this requirement on boost.test makes sense.
As for testing in C++03 mode - that's easy, just use GCC's default compiler mode ;-)
I also have a similar setup on OSX, but this does not prevent us from making mistakes, and catching those mistakes before they go to master is the very purpose of the develop branch. Raffi
On 10/04/2015 08:49 AM, Raffi Enficiaud wrote:
Le 04/10/15 13:38, John Maddock a écrit :
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
Also special in the sense that boost.test cannot take full benefit from the current test dashboard setup: we have to test all libraries before being able to push to develop
Ideally yes, but in practice you should be able to determine whether or not a change to Boost Test is working properly by testing only a very few libraries which you know use Boost Test's facilities extensively. Furthermore, this situation will make absolutely no difference whether you use C++03 or C++11.
, which means hours and hours of testing and infrastructure deployment/maintenance for a single push to a branch that is supposed to help us develop boost.test. To be frank, I do not think that this requirement on boost.test makes sense.
First you claim a completely unreasonable practical requirement and then you say it makes no sense.
As for testing in C++03 mode - that's easy, just use GCC's default compiler mode ;-)
I also have a similar setup on OSX, but this does not prevent us from making mistakes, and catching those mistakes before they go to master is the very purpose of the develop branch.
What does what John suggested have to do with the 'develop' branch versus the 'master' branch ?
Le 04/10/15 15:37, Edward Diener a écrit :
On 10/04/2015 08:49 AM, Raffi Enficiaud wrote:
Le 04/10/15 13:38, John Maddock a écrit :
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
Also special in the sense that boost.test cannot take full benefit from the current test dashboard setup: we have to test all libraries before being able to push to develop
Ideally yes, but in practice you should be able to determine whether or not a change to Boost Test is working properly by testing only a very few libraries which you know use Boost Test's facilities extensively. Furthermore, this situation will make absolutely no difference whether you use C++03 or C++11.
, which means hours and hours of testing and infrastructure deployment/maintenance for a single push to a branch that is supposed to help us develop boost.test. To be frank, I do not think that this requirement on boost.test makes sense.
First you claim a completely unreasonable practical requirement and then you say it makes no sense.
I thought it was your claim, but apparently you're saying that our test setup is not good enough, and that compiling a subset of boost with our changes to develop should be part of the pre-push tests. How easy is it to add tests from other libraries directly into boost.test's tests? I am far from being a Bjam expert.
As for testing in C++03 mode - that's easy, just use GCC's default compiler mode ;-)
I also have a similar setup on OSX, but this does not prevent us from making mistakes, and catching those mistakes before they go to master is the very purpose of the develop branch.
What does what John suggested have to do with the 'develop' branch versus the 'master' branch ?
The problems that are arising on the ML are /only/ about the develop branch, mainly because sometimes we do not have the proper setup to catch some of the problems (MSVC8 for instance). According to the whole discussion thread here, boost.test is not supposed to make mistakes in the develop branch. OTOH, develop goes to master if and only if the tests are ok, and only master is eligible for a release. I do not see anything there that is not related to develop vs. master, or to the development workflow in general. Since, according to you, I am missing something, please tell me why "develop vs. master" is off-topic. Best, Raffi
On 10/05/2015 03:54 AM, Raffi Enficiaud wrote:
Le 04/10/15 15:37, Edward Diener a écrit :
On 10/04/2015 08:49 AM, Raffi Enficiaud wrote:
Le 04/10/15 13:38, John Maddock a écrit :
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
Also special in the sense that boost.test cannot take full benefit from the current test dashboard setup: we have to test all libraries before being able to push to develop
Ideally yes, but in practice you should be able to determine whether or not a change to Boost Test is working properly by testing only a very few libraries which you know use Boost Test's facilities extensively. Furthermore, this situation will make absolutely no difference whether you use C++03 or C++11.
, which means hours and hours of testing and infrastructure deployment/maintenance for a single push to a branch that is supposed to help us develop boost.test. To be frank, I do not think that this requirement on boost.test makes sense.
First you claim a completely unreasonable practical requirement and then you say it makes no sense.
I thought it was your claim, but apparently you're saying that our test setup is not good enough, and that compiling a subset of boost with our changes to develop should be part of the pre-push tests.
I am saying that your claim that Boost Test has to test all libraries before you push to 'develop' is an unreasonable practical requirement, and then you follow up by saying 'I do not think that this requirement on boost.test makes sense'. You yourself establish the requirement and then you complain that it takes up too much time.
How easy is it to add tests from other libraries directly into boost.test's tests? I am far from being a Bjam expert.
As for testing in C++03 mode - that's easy, just use GCC's default compiler mode ;-)
I also have a similar setup on OSX, but this does not prevent us from making mistakes, and catching those mistakes before they go to master is the very purpose of the develop branch.
What does what John suggested have to do with the 'develop' branch versus the 'master' branch ?
The problems that are arising on the ML are /only/ about the develop branch, mainly because sometimes we do not have the proper setup to catch some of the problems (MSVC8 for instance). According to the whole discussion thread here, boost.test is not supposed to make mistakes in the develop branch. OTOH, develop goes to master if and only if the tests are ok, and only master is eligible for a release.
Nobody is arguing that mistakes in the 'develop' branch do not occur. Gennadiy's response, however, was not that this was a mistake but a chosen decision to drop support for testing in anything other than C++11 mode. In other words, he was saying that the change in 'develop' was not done by accident but knowingly, on purpose. Everybody is asking Boost Test to desist, in the 'develop' branch, from requiring libraries which use Boost Test to be compiled with C++11 support. Doing so can easily break the 'develop' test matrix for libraries which compile their tests in C++03 mode.
John Maddock's comment about using gcc in its default compiler mode of C++03 support was in response to your complaint that testing Boost Test using C++03 was a resource burden for Boost Test.
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode, so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
I do not see anything there that is not related to develop vs. master, or to the development workflow in general. Since, according to you, I am missing something, please tell me why "develop vs. master" is off-topic.
Edward Diener wrote:
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
Sorry if someone answered this already, but I'm curious:
1) Why not let Boost.Test define its own requirements? I thought that was a maintainer decision only. I thought that was a core value of Boost?
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
Thanks, Steve.
On 10/5/2015 1:51 PM, Stephen Kelly wrote:
Edward Diener wrote:
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
Sorry if someone answered this already, but I'm curious:
1) Why not let Boost.Test define its own requirements? I thought that was a maintainer decision only. I thought that was a core value of Boost?
If your library is depended on by umpteen other Boost libraries, plus who knows how many other end-users, many of whose uses will be broken by your change, don't you think it behooves you to consider that your change may not be the best thing to do ?
If CMake were changed to only support builds where C++11 mode was being used, don't you think you might here about from your end-users ? I know that would be a ridiculous change, but I hope I have made my point.
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
On 10/5/2015 7:18 PM, Edward Diener wrote:
On 10/5/2015 1:51 PM, Stephen Kelly wrote:
Edward Diener wrote:
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
Sorry if someone answered this already, but I'm curious:
1) Why not let Boost.Test define its own requirements? I thought that was a maintainer decision only. I thought that was a core value of Boost?
If your library is depended on by umpteen other Boost libraries, plus who knows how many other end-users, many of whose uses will be broken by your change, don't you think it behooves you to consider that your change may not be the best thing to do ?
If CMake were changed to only support builds where C++11 mode was being used, don't you think you might here about from your end-users ?
Corrected: "If CMake were changed to only support builds where C++11 mode was being used, don't you think you might hear about it from your end-users ?"
I know that would be a ridiculous change, but I hope I have made my point.
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
Edward Diener wrote:
On 10/5/2015 1:51 PM, Stephen Kelly wrote:
Edward Diener wrote:
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
Sorry if someone answered this already, but I'm curious:
1) Why not let Boost.Test define its own requirements? I thought that was a maintainer decision only. I thought that was a core value of Boost?
If your library is depended on by umpteen other Boost libraries, plus who knows how many other end-users, many of whose uses will be broken by your change, don't you think it behooves you to consider that your change may not be the best thing to do ?
If CMake were changed to only support builds where C++11 mode was being used, don't you think you might here about from your end-users ? I know that would be a ridiculous change, but I hope I have made my point.
Thanks for sharing your perspective on that first question! Steve.
On October 5, 2015 1:51:57 PM EDT, Stephen Kelly wrote:
Edward Diener wrote:
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
Sorry if someone answered this already, but I'm curious:
1) Why not let Boost.Test define its own requirements? I thought that was a maintainer decision only. I thought that was a core value of Boost?
That is within the maintainers' rights. The argument is that they are making an ill-informed decision and should reconsider it. There has been much controversy over Boost.Test over the years. It is a much-used library within Boost. Disturbances like this aren't helpful.
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
Forcing all other projects to make changes is more work than forking the one project.
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
___
Rob
(Sent from my portable computation engine)
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
This is a clear example of the drawbacks of a monolithic boost distribution.
On Tue, Oct 6, 2015 at 11:56 AM, M.A. van den Berg wrote:
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
This is a clear example of the drawbacks of a monolithic boost distribution.
What, exactly, and how is it related to the monolithic structure? Opting in for such breaking changes is the only sensible way, IMHO.
On 6 Oct 2015, at 11:17, Andrey Semashev wrote:
On Tue, Oct 6, 2015 at 11:56 AM, M.A. van den Berg wrote:
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
This is a clear example of the drawbacks of a monolithic boost distribution.
What, exactly, and how is it related to the monolithic structure? Opting in for such breaking changes is the only sensible way, IMHO.
The way I see it is that Test2, Test3 is poor man's versioning, effected by creating new libraries with version numbers added to the name, and then shipping all three of them in a boost release. This solution gives very limited version dependency capabilities.
When boost moved to git there was an effort to reduce dependencies between libraries. One wish - by some - was to have a future for boost where individual libraries and their version-tagged dependencies would all be separately downloadable. This is not the current situation, or even a goal that's on the agenda, but I wish it was. It would solve a lot of scalability issues IMO. Having the current monolithic boost releases means that adding version numbers to libraries seems to be the best way forward.
On Tue, Oct 6, 2015 at 12:31 PM, M.A. van den Berg wrote:
On 6 Oct 2015, at 11:17, Andrey Semashev wrote:
On Tue, Oct 6, 2015 at 11:56 AM, M.A. van den Berg wrote:
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
This is a clear example of the drawbacks of a monolithic boost distribution.
What, exactly, and how is it related to the monolithic structure? Opting in for such breaking changes is the only sensible way, IMHO.
The way I see it is that Test2, Test3 is poor man's versioning, effected by creating new libraries with version numbers added to the name, and then shipping all three of them in a boost release. This solution gives very limited version dependency capabilities.
No, it's more than just poor-man's versioning. The key point is that Test, Test2 and Test3 are different libraries that are maintained separately (from the user's perspective) and can be used side by side. If a library or user's code sticks to C++03 for whatever reason, it can keep using Test and receive updates for it in a timely manner. This is not achieved by simply having different versions of Test available for download, even if multiple versions could be somehow used together in a single build of Boost or user's application.
On 6 Oct 2015, at 11:38, Andrey Semashev wrote:
On Tue, Oct 6, 2015 at 12:31 PM, M.A. van den Berg wrote:
On 6 Oct 2015, at 11:17, Andrey Semashev wrote:
On Tue, Oct 6, 2015 at 11:56 AM, M.A. van den Berg wrote:
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
This is a clear example of the drawbacks of a monolithic boost distribution.
What, exactly, and how is it related to the monolithic structure? Opting in for such breaking changes is the only sensible way, IMHO.
The way I see it is that Test2, Test3 is poor man's versioning, effected by creating new libraries with version numbers added to the name, and then shipping all three of them in a boost release. This solution gives very limited version dependency capabilities.
No, it's more than just poor-man's versioning. The key point is that Test, Test2 and Test3 are different libraries that are maintained separately (from the user's perspective) and can be used side by side. If a library or user's code sticks to C++03 for whatever reason, it can keep using Test and receive updates for it in a timely manner. This is not achieved by simply having different versions of Test available for download, even if multiple versions could be somehow used together in a single build of Boost or user's application.
Yes, I agree; that's orthogonal to versioning then. If a library was published with an implicit prerequisite/promise that it compiles on C++03, then that's something you shouldn't break. You can decide to no longer maintain it, but actively breaking backwards compatibility without the users having the ability to downgrade (a subset of the libraries in boost that break backwards compatibility for them) is horrible for users.
On 06-Oct-15 12:31 PM, M.A. van den Berg wrote:
On 6 Oct 2015, at 11:17, Andrey Semashev wrote:
On Tue, Oct 6, 2015 at 11:56 AM, M.A. van den Berg wrote:
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
This is a clear example of the drawbacks of a monolithic boost distribution.
What, exactly, and how is it related to the monolithic structure? Opting in for such breaking changes is the only sensible way, IMHO.
The way I see it is that Test2, Test3 is poor man's versioning, effected by creating new libraries with version numbers added to the name, and then shipping all three of them in a boost release. This solution gives very limited version dependency capabilities.
When boost moved to git there was an effort to reduce dependencies between libraries. One wish - by some - was to have a future for boost where individual libraries and their version-tagged dependencies would all be separately downloadable. This is not the current situation, or even a goal that's on the agenda, but I wish it was. It would solve a lot of scalability issues IMO.
I think that's a little bit too abstract. If there are dozens of libraries that semantically depend on Boost.Test, it does not matter in practice whether it's a monolithic distribution or a fully modular one put together on demand by supernatural powers - if end users actively use Boost libraries with C++03 and want to run tests, they need Boost.Test.C++03. If there were 2 or 3 niche libraries with such a dependency, the situation would be different, but as it is now, this is a painful breaking change regardless of distribution mechanics.
- Volodya
Rob Stewart wrote:
On October 5, 2015 1:51:57 PM EDT, Stephen Kelly wrote:
Edward Diener wrote:
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
Sorry if someone answered this already, but I'm curious:
1) Why not let Boost.Test define its own requirements? I thought that was a maintainer decision only. I thought that was a core value of Boost?
That is within the maintainers' rights. The argument is that they are making an ill-informed decision and should reconsider it. There has been much controversy over Boost.Test over the years. It is a much-used library within Boost. Disturbances like this aren't helpful.
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
Forcing all other projects to make changes is more work than forking the one project.
Thanks for sharing your perspective! Can you qualify what 'all other projects' means?
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
You seem to prefer to punish those people who have already moved with the times :). Or would they otherwise have to do something too? Anyway, I consider my curiosity satisfied on that one :). Thanks, Steve.
On October 6, 2015 2:36:05 PM EDT, Stephen Kelly wrote:
Rob Stewart wrote:
On October 5, 2015 1:51:57 PM EDT, Stephen Kelly wrote:
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
Forcing all other projects to make changes is more work than forking the one project.
Can you qualify what 'all other projects' means?
I don't have a specific number at hand, but I'm referring to all of the Boost projects that rely on Boost.Test for their tests. The tests for every one of those projects would have to be modified to reference a new library. That means changing include directives and link information. Furthermore, those maintainers would have to find someone to create the legacy fork to even make that possible.
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
You seem to prefer to punish those people who have already moved with the times :). Or would they otherwise have to do something too?
How is wanting the Boost.Test maintainers to create a fork, make all the breaking changes they like to form a new library, and then offer that library, punishment? The appropriate alternative is to announce that breaking changes are coming, use conditional compilation to opt in to the changes for several releases, then make the changes the default and use conditional compilation to opt out for several more releases, and finally drop the original. That is how numerous other libraries manage the issue. In many ways that's harder on the maintainers, but it does preserve the library name and avoids the likely need for a review of a fork.
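That staging could look something like the sketch below. The macro names are hypothetical (Boost.Test defines no such macros); the sketch only illustrates the opt-in/opt-out phases:
// Hypothetical staging macros, purely illustrative.
// Releases N..N+k: new behaviour off by default, users opt in with BOOST_TEST_OPT_IN_V2.
// Releases N+k..N+m: new behaviour on by default, users opt out with BOOST_TEST_OPT_OUT_V2.
// After N+m: the legacy branch is removed.
#if defined(BOOST_TEST_OPT_IN_V2)
#  define BOOST_TEST_V2_BEHAVIOUR 1
#elif defined(BOOST_TEST_OPT_OUT_V2)
#  define BOOST_TEST_V2_BEHAVIOUR 0
#else
#  define BOOST_TEST_V2_BEHAVIOUR 0  /* flipped to 1 when the opt-out phase starts */
#endif

#if BOOST_TEST_V2_BEHAVIOUR
// new, possibly C++11-only, implementation
#else
// legacy C++03 implementation
#endif
___
Rob
(Sent from my portable computation engine)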
Rob Stewart wrote:
On October 6, 2015 2:36:05 PM EDT, Stephen Kelly wrote:
Rob Stewart wrote:
On October 5, 2015 1:51:57 PM EDT, Stephen Kelly wrote:
2) Why not let people fork it to Boost.TestLegacyVersion if they want legacy compatibility? Why suggest that the new version be 'the fork'? Why not fork for legacy and drop the legacy when the time for doing that comes?
Forcing all other projects to make changes is more work than forking the one project.
Can you qualify what 'all other projects' means?
I don't have a specific number at hand, but I'm referring to all of the Boost projects that rely on Boost.Test for their tests.
Ok, thanks! You're only thinking about in-tree consumers of the library!
The tests for every one of those projects would have to be modified to reference a new library. That means changing include directives and link information.
I don't know anything about the boost build system, but I am surprised that it is difficult.
Furthermore, those maintainers would have to find someone to create the legacy fork to even make that possible.
I'm surprised that is difficult too. Seems like something mostly mechanical. It also seems kind of reasonable that the people who want a legacy library could create it... It is clear that whoever changed Boost.Test has already moved with the times :).
3) Why make users change their code to use 'Test2' instead of 'Test', and then to 'Test3' in the future?
That allows users to opt in to the changes.
You seem to prefer to punish those people who have already moved with the times :). Or would they otherwise have to do something too?
How is wanting the Boost.Test maintainers to create a fork, make all the breaking changes they like to form a new library, and then offer that library, punishment?
External (not in-tree) consumers of boost have moved with the times. On this mailing list that is not clear, as everyone here apparently likes to use GCC 4.1 and MSVC 7.1 :). But yes, the reality is that many, many projects not in the boost tree use GCC 4.8 and later and MSVC 2012 and later. They can use Boost.Foo today, which might conditionally use modern C++ features.
If Boost.Foo some day increases compiler requirements, then it is apparently a requirement to create Boost.Foo2. (Rob, note that you are responding to my question to Edward here: http://thread.gmane.org/gmane.comp.lib.boost.devel/263519/focus=263572 )
If the requirement is that a fork is created, then some group is 'punished' with having to follow the rename. You want to punish the people who have already moved with the times.
The appropriate alternative is to announce that breaking changes are coming, use conditional compilation to opt in to the changes for several releases, then make the changes the default and use conditional compilation to opt out for several more releases, and finally drop the original.
That seems like a reasonable thing to do, but it is not what we are talking about :). Note that we are talking about Edward's suggestion to create a fork library for the new compiler requirements: http://thread.gmane.org/gmane.comp.lib.boost.devel/263519/focus=263572
That is what we are talking about. The suggestion is to fork with a new name when updating compiler requirements. That has come up before: http://thread.gmane.org/gmane.comp.lib.boost.devel/257194/focus=257295
That is how numerous other libraries manage the issue.
Yes, it seems like one of many reasonable approaches.
In many ways, that's harder on the maintainers, but it does preserve the library name and avoids the likely need for a review of a fork.
Yes, preserving the library name is good :). That's what we are discussing :). Thanks, Steve.
Edward Diener wrote:
Nobody is arguing that mistakes in the 'develop' branch do not occur. Gennadiy's response, however, was not that this was a mistake but a chosen decision
This was not a decision at the point I checked in the code, but I am trying to defend the notion of dropping C++03 in general. Who wants to admit a mistake - let's make a political statement out of it ;o). Seriously though, this *is* a subject worth discussing.
But let's just move on. No one is seeking to lay blame on anyone for anything. Lots of libraries use Boost Test which need to be tested in C++03 mode so if Boost Test wants to move forward with a version which only supports testing in C++11 mode in order to use C++11 facilities, which is perfectly reasonable, it should do so as a separate library forked from the current version of Boost Test.
I wish we could have an established procedure. Introducing a new Boost.Test3 or Boost.Test-c++11 does not look appealing. Gennadiy
On 04 Oct 2015, at 14:49, Raffi Enficiaud wrote:
Le 04/10/15 13:38, John Maddock a écrit :
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
This sort of problem has been discussed before on this list without any real progress. I think a solution is needed that gives boost tools maintainers (boost.test is also a tool) services similar to those that library maintainers enjoy. A solution may also provide better test services for all boost developers and possibly other projects. An idea for a possible way forward, providing a test_request service at boost.org/test_request, is outlined below.
I would like thoughts on how useful or feasible such a service would be. These are some questions I would like to have answered:
- Will library maintainers use a boost.org/test_request service?
- How valuable would it be, compared to merging to develop and waiting for the current test reports?
- How much of a challenge would it be to get test runners (new and old) on board?
- How feasible is it to set up a service as outlined below based on modification of the current system for regression testing in boost?
- What alternatives exist providing the same kind of, or better, value to the community, hopefully with less effort? E.g. can Jenkins or other such test dashboards/frameworks easily be configured to provide the flexibility and features needed here?
First, a bit of motivation. When changes are made in source code that is intended to work on multiple tool chains and target platforms, the testing challenge is vastly more complicated than just testing on the compiler and operating system (host platform) you use for development. Conceptually it does not need to be much harder, if a single host-platform action, even before a local commit of the changes, caused compilation and testing to be staged and executed on any number of remote build hosts and target platforms, and timely results were made available in a suitable form on the host initiating it all. The test_request service outlined below is an attempt to achieve this.
A test request service is a mechanism that would allow library maintainers to post a test request indicating the version of the sources to build and test. The intention is to give library maintainers a way of testing their changes on specified targets against a specified, known-to-be-stable baseline of the other libraries, defined as part of the request. A method of selecting test runners, or of indicating properties of the test runners the test should be performed on, is needed. It should also be possible to specify which libraries to only compile and which to test. The resulting test output needs to be managed in the context of the test request, not the overall Boost develop or master test results.
Test runners should probably be able to control the extent to which they are willing to pick up and process test requests, as opposed to only running the regular boost regression tests. Some sort of scheduling may be desirable or needed to automate well while preserving the precedence of the main boost regression tests and not exhausting test runner resources. This may be achieved by deliberately starving test requests that are resource hungry and often requested, to allow leaner, quicker, or less often requested test requests to be processed first. Such smart scheduling is probably not trivial, so the best thing would be to ignore it if it is not needed, but I have a feeling it may be needed to throttle the load on test runner hardware and to ensure that the tests most critical to the overall community's success are serviced.
Beyond the needs of Boost.Test in this topic, I think test requests will allow all maintainers to test on all relevant targets, given that test runners are available, and to perform these tests before merging to develop. This allows more issues to be resolved before the merge, a more stable develop branch, and fewer disruptions in the main develop test reports. For many libraries, test requests will require a very small amount of test runner hardware resources compared to the full boost regression tests, which more or less blindly run all tests. This opens the prospect of quick responses to test requests and thus a more interactive work-flow. Such quick tests could run specific test cases of a specific library on specified testers. It seems possible that such test requests could be serviced in seconds, piping failures back into the requester's development environment or even into an IDE issues list. But those are details that can be dealt with later; a simple response back to the submitter, with a URL to use for fetching progress information and results, is a good start. The OGC WPS protocol uses this approach, and that sort of protocol may be a good fit for test requests. If the test request gets a web URL, a web version of the results could be available there for a given number of days before it is purged or archived. As there will no longer be only a couple of specific places to find boost test results, RSS, AtomPub or similar protocols may be useful to allow users to subscribe to test results for a given library or even for a specific test request.
One likely desirable feature, which is a challenge, would be to allow testing of changes before they go into a commit that is pushed to a public git repository. That could be achieved by specifying a public commit and using git to create, in the client, a patch that is part of the test request. That way the test runners servicing the request can use git to apply the patch onto the specified commit before performing the tests.
If there is no way of doing this with existing available tools and new tools are needed, the following is what I could envision as one proposal for a solution.
1. A client command line tool to make the test request is needed. Tighter API-based integration into IDEs and other GUI environments may be possible, but is not essential, as the command line tool can be used. A successful local build of boost is a logical prerequisite for posting a test request to a service, hence the client tool itself can depend on boost build and possibly other parts of boost, such as asio for networking. It can also be assumed that you have boost sources checked out locally, with git available to check status and logs and to extract patches against the last public commit on github. The tool may allow the user to invoke it in the same fashion as b2 to specify what to test, or it may require a user-defined profile configuration for the test request specification; a combination of the two invocation methods could possibly be supported as well. A user may define more than one profile in a local configuration file; one is specified as the default, or the first listed becomes the default. Based on the specified or default test request profile, the tool creates and posts a test request with the respective git commit IDs and patches from the current local boost working directories whenever source code is changed.
The client tool should allow special parameters cancelling further processing of the last posted or a specific request, or similarly superseding it with a new request. Think of it as stopping the compiler locally, changing some code, and compiling again; in that case we do not want the old test requests to remain in effect at the service.
2. A service at a well known address, e.g. www.boost.org/test_request, receives the request and gives it priority according to the current administrator policies and possibly some scheduling scheme. Policies may, if needed, be changed in different phases of the boost release cycle. The test request is either rejected, or a test ID is assigned and the specification and status are made available to testers and other clients. The client is provided a response accordingly, with a URL to the status data or the reason for rejection, and possibly a second URL with ownership privileges over the request, e.g. the ability to cancel the test request, renew it, or supersede it with a new one. The service maintains a table of outstanding test requests that is fetched on demand by testers.
3. Modify the existing test runner scripts such that, when a test runner starts, or when it has more time available for boost testing, the table of currently outstanding test requests is fetched from boost.org and a suitable job is picked based on some simple rule using the data in the table, the tester properties, and the remaining time available for test requests. The test request details are fetched from boost.org and a message is posted to the service signalling the start of processing of the request at the test runner. At regular intervals the tester script should post progress to the service and check whether the request has been cancelled or superseded, in which case further processing can be stopped. Finally, when processing is completed, the tester script needs to provide the results to the service.
4. The boost.org/test_request service will maintain a table of active requests; after the time duration specified in the request, the request is deactivated and removed from the table by the service, so that test runners do not continue to pick up the test request. A sensible default and a maximum duration are defined by the service. The table may be made viewable at a well known location, e.g. boost.org/test_request/active, as html and in simpler machine-readable forms for use by the test runner scripts.
5. The boost.org/test_request service may have a table of recent requests as well, keeping URLs available for test requests that still have request and result data available on the service. After a configurable number of days, a test request's data should be purged to clean up resource usage at the service host. Before that, it should be possible for clients to download and archive the request data, both the request and the results. The owner of a test request may be allowed to renew an active request to prevent it from being deactivated, or even to re-activate deactivated requests. This way it should be possible to wait out a low priority on your request without always failing with no results; scheduling should allow any accepted request to eventually get priority regardless of the active policy, or otherwise it would be better to reject the request with a reason stating that a higher priority is required or a smaller scope must be selected for the test.
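To make the work-flow concrete, a posted request might look something like the invocation below. The tool name, the options, and the response are all hypothetical; this is a sketch of the idea, not a design:
boost_test_request post libs/lockfree/test baseline=master runners="toolset=clang,cxx=03" expires=7d
# the service would reply with a status/result URL for this request, e.g.
# http://www.boost.org/test_request/status/12345
— Bjørn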
On 10/8/2015 1:46 PM, Bjørn Roald wrote:
On 04 Oct 2015, at 14:49, Raffi Enficiaud wrote:
Le 04/10/15 13:38, John Maddock a écrit :
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
This sort of problem has been discussed before on this list without any real progress. I think a solution to this is needed to allow boost tools maintainers (boost.test is also a tool), similar services that library maintainers enjoy. A solution may also provide better test services for all boost developers and possibly other projects. An idea of a possible way forward providing a test_request service at boost.org/test_request is outlined below.
I would like thoughts on how useful or feasible such a service would be, these are some questions I would like to have answered;
- Will library maintainers use a boost.org/test_request service? - How valuable would it be, as compared to merging to develop and waiting for current test reports? - How much of a challenge would it be to get test runners (new and old) onboard? - How feasible is it to set up a service as outlined below based on modification of the current system for regression testing in boost? - What alternatives exist providing same kind of, or better value to the community, hopefully with less effort? E.g.: can Jenkins or other such test dashboards / frameworks easily be configured to provide the flexibility and features needed here?
First a bit of motivation. When source changes are made in source code that is intended to work on multiple tool chains and target platforms, the testing challenge is vastly more complicated that just testing the compiler and operating system (host platform) you use for development. Conceptually it does not need to be that much harder if a single host platform action, even before local commit of the changes, caused compilation and testing to be staged and executed on any number of remote build hosts and target platforms, and timely results where made available in a suitable form on the host initiating it all. The suggested test_request service outlined below is an attempt to achieve this.
A test request service is a mechanism that would allow library maintainers to post a test request indicating version of sources to build and test. The intention would be allowing the library maintainers a way of testing their changes on specified targets against a specified known to be stable baseline of other libraries that is defined as part of the request. A method of selecting test runners or indicating properties of the test runners you request the test to be performed for is needed. Also it should be possible to specify which libraries to only compile and which to test. The output test results need to be managed in the context of the test request, not the overall Boost develop or master test results.
Test runners should probably be able to control the extent they are willing to pick up and process test requests, as opposed to only running the regular boost regression tests. Some sort of scheduling may be desirable or needed to automate well while preserving the precedence of the main boost regression tests and not exhausting test runner resources. This may be achieved by deliberately starving test requests that are resource hungry and often requested to allow leaner, quicker, or less often requested test requests to be processed first. Such smart scheduling is probably not trivial, so the best thing would be to ignore it if it is not needed, but I have a feeling it may be needed to throttle load on test runner hardware and to ensure the more critical tests for the overall community success is serviced.
Beyond the needs of Boost.Test in this topic, I think test requests will allow all maintainers to test on all relevant targets, given that test runners are available, and to perform these tests before merging to develop. This allows more issues to be resolved before merge, a more stable develop branch, and fewer disruptions in the main develop test reports. For many libraries, test requests will require a very small amount of test runner hardware resources compared to full boost regression tests, which more or less blindly run all tests. This opens the prospect of quick responses to test requests and thus a more interactive work-flow. Such quick tests could possibly run specific test cases in a specific library on specified testers. It seems possible that such test requests could be serviced in seconds, piping failures back into the requester's development environment or even into an IDE issues list. But those are details that can be dealt with later; a simple response back to the submitter, with a URL to use for fetching progress information and results, is a good start. The OGC WPS protocol uses this approach, and that sort of protocol may be a good fit for test requests. If the test request gets a web URL, a web version of the results could be available there for a given number of days before it is purged or archived. As there will no longer be only a couple of specific places to find boost test results, RSS, AtomPub or similar protocols may be useful to allow users to subscribe to test results for a given library or even for a specific test request.
One likely desirable, though challenging, feature would be to allow testing of changes before they go into a commit that is pushed to a public git repository. That could be achieved by specifying a public commit and using git on the client to create a patch that is shipped as part of the test request. That way the test runners servicing the request can use git to apply the patch onto the specified commit before performing the tests.
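The git side of that round-trip needs nothing beyond standard commands. A minimal sketch, assuming the last public commit is on origin/develop and with an invented patch file name:

    # Client side: capture local changes as a patch against the last public commit.
    git diff origin/develop > pending.patch          # uncommitted working-tree changes
    # or, for commits made locally but not yet pushed:
    git format-patch origin/develop --stdout > pending.patch

    # Runner side: check out the public commit named in the request, apply the patch.
    git checkout <public-commit-sha>                 # placeholder for the requested commit
    git apply pending.patch                          # use 'git am' instead for format-patch output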
If there is no way of doing this with existing available tools and new tools are needed, the following is what I could envision as one proposal for a solution.
1. A client command-line tool to make the test request is needed. Tighter API-based integration into IDEs and other GUI environments may be possible, but is not essential as the command-line tool can be used. A successful local build of boost is a logical prerequisite for posting a test request to the service, which means the client tool itself can depend on boost build and possibly other parts of boost such as asio for networking. It can also be assumed that you have boost sources checked out locally, with git available to check status and logs, and to extract patches against the last public commit on github. The tool may allow the user to invoke it in the same fashion as b2 to specify what to test, or it may require a user-defined profile configuration for the test request specification; a combination of the two invocation methods could also be supported. A user may define more than one profile in a local configuration file; one is specified as the default, or the first listed becomes the default. Based on the specified or default test request profile, the tool creates and posts a test request with the respective git commit IDs and patches from the current local boost working directories whenever source code is changed. The client tool should allow special parameters for canceling further processing of the last posted or a specific request, or similarly superseding it with a new request. Think of it as stopping the compiler locally, changing some code, and compiling again; in that case we do not want the old test requests to remain in effect at the service.
2. A service at a well-known address, e.g. www.boost.org/test_request, receives the request and gives it priority according to current administrator policies and possibly some scheduling scheme. Policies may, if needed, be changed in different phases of the boost release cycle. The test request is either rejected, or a test ID is assigned and the specification and status are made available to testers and other clients. The client is provided a response accordingly, with a URL to the status data or a reason for the rejection. Possibly also a second URL with ownership privileges to the request, e.g. the ability to cancel the test request, renew it, or supersede it with a new one. The service maintains a table of outstanding test requests that is fetched on demand by testers.
3. Modify the existing test runner scripts such that, when a test runner starts, or when it has more time available for boost testing, the table of currently outstanding test requests is fetched from boost.org and a suitable job is picked based on some simple rule using data in the table, tester properties, and remaining available time for test requests. The test request details are fetched from boost.org and a message is posted to the service signalling the start of processing of the request at the test runner. At regular intervals the tester script should post progress to the service and check if the request is cancelled or superseded, in which case further processing can be stopped. Finally, when processing is completed, the tester script needs to provide the results to the service. (A rough sketch of this runner loop is shown after this list.)
4. The boost.org/test_request service will maintain a table of active requests; after a time duration specified in the request, the request is deactivated and removed from the table by the service, to prevent test runners from continuing to pick up the test request. A sensible default and maximum duration are defined by the service. The table may be made viewable at a well-known location, e.g. boost.org/test_request/active, as HTML and in simpler machine-readable forms as used by the test runner scripts.
5. The boost.org/test_request service may have a table of recent requests as well, keeping URLs available for test requests that still have request and result data available on the service. After a configurable number of days a test request's data should be purged to clean up resource usage at the service host. Before that, it should be possible for clients to download and archive the request data, both request and results. The owner of a test request may be allowed to renew an active request to prevent it from being deactivated, or even re-activate deactivated requests. This way it should be possible to wait out a low priority on your request without always failing with no results; scheduling should allow any accepted request to eventually get priority regardless of the active policy. Otherwise it would be better to reject the request with a reason stating that a higher priority is required or a smaller scope must be selected for the test.
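The runner loop sketched below fills in point 3 above. The endpoints and the two helper commands are hypothetical, invented only to show the shape of the protocol:

    # Hypothetical runner loop: poll every five minutes, pick a job, report back.
    # None of these URLs exist; pick_request and run_request stand in for logic
    # the runner script would supply.
    while sleep 300; do
        curl -s http://www.boost.org/test_request/active > requests.json
        id=$(pick_request requests.json)   # hypothetical: match tester properties, time budget
        [ -z "$id" ] && continue
        curl -X POST "http://www.boost.org/test_request/$id/start"
        run_request "$id"                  # hypothetical: checkout, apply patch, run b2
        # a real script would also post progress and honour cancellation here
        curl -X POST "http://www.boost.org/test_request/$id/results" --data-binary @results.xml
    done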
I think that what you have written is extremely valuable, but I think you may underrate the need for individual developers to be able to test their library, or any other Boost library, on their local machine using a testing environment that is part of Boost. Currently that testing tool is usually either Boost.Test or lightweight test. I am not against, in general, testing tools which are outside of Boost, but that would have to be co-ordinated in such a way that any end-user of a Boost library would be able to have easy access to some other testing tool outside of Boost.
I don't think we can reduce testing Boost libraries solely to test runners which are part of some online service. Nonetheless I would welcome online testing services which would automate the regression testing of Boost libraries on the appropriate branches (currently 'develop' and 'master'). This would remove the onus of testing and resources from individual testers and would provide for a much wider range of operating systems/compilers/versions than we currently have.
On 08 Oct 2015, at 22:06, Edward Diener wrote:
On 10/8/2015 1:46 PM, Bjørn Roald wrote:
On 04 Oct 2015, at 14:49, Raffi Enficiaud wrote:
On 04/10/15 13:38, John Maddock wrote:
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
This sort of problem has been discussed before on this list without any real progress. I think a solution is needed that gives Boost tool maintainers (boost.test is also a tool) services similar to those that library maintainers enjoy. A solution may also provide better test services for all boost developers and possibly other projects. An idea of a possible way forward, providing a test_request service at boost.org/test_request, is outlined below.
I would like thoughts on how useful or feasible such a service would be. These are some questions I would like to have answered:
- Will library maintainers use a boost.org/test_request service?
- How valuable would it be, as compared to merging to develop and waiting for current test reports?
- How much of a challenge would it be to get test runners (new and old) on board?
- How feasible is it to set up a service as outlined below by modifying the current system for regression testing in boost?
- What alternatives exist that provide the same kind of, or better, value to the community, hopefully with less effort? E.g.: can Jenkins or other such test dashboards / frameworks easily be configured to provide the flexibility and features needed here?
removed most of message, see original post.
I think that what you have written is extremely valuable but I think you may underrate the need for individual developers to be able to test their library, or any other Boost library, on their local machine using a testing environment that is part of Boost. Currently that testing tool is usually either Boost.Test or lightweight test.
Thanks. I am not trying to underrate local testing. Local testing must be simple and should in general be performed prior to using a test request service or other remote testing, as before. The practical question is on how many target platforms it is feasible for boost developers to test locally at a reasonable cost in hardware, software licenses and time. Local testing should clearly be performed at least on the development platform, preferably with more than one compiler. Ideally, tools for local testing, whether test libraries, frameworks, dashboards, reporting, or virtualisation, should be improved to the point where remote testing would not be needed, but that is probably not feasible. Thus the need to support remote testing in a flexible and efficient way for the developers. It is there to fill the holes local testing cannot or will not fill, not to replace local testing.
I am not against, in general, testing tools which are outside of Boost, but that would have to be co-ordinated in such a way that any end-user of a Boost library would be able to have easy access to some other testing tool outside of Boost.
I am not sure I follow, but I see this sort of service as much, or as little, a part of boost as the current regression test runners and report generators are. Some sort of access control limiting this to changes in boost.org repositories and boost developers is likely needed, if for nothing else than to ease the security concerns of the test runners. Clearly this could be expanded to something more general outside the scope of boost.org repositories, but the main issues with that are computing resources at the test runners and the willingness of test runners to serve some more general cause. So that complicates things too much, I think, at least as a goal to start with. It is probably for some other organisation, possibly with the same or similar tools. Boost users would use the local tools at their disposal to test that boost works on their target platform, like today. Nothing changes there; they likely have the computing resources to do that. If they do not, it is not a Boost mission to fix that. The test programs that come with boost are available to them to make this simple. As before, hopefully these are improved as well, since they will be tested well with test requests.
I don't think we can reduce testing Boost libraries solely to test runners which are part of some online service.
agreed
Nonetheless I would welcome online testing services which would automate the regression testing of Boost libraries on the appropriate branches (currently 'develop' and 'master'). This would remove the onus of testing and resources from individual testers and would provide for a much wider range of operating systems/compilers/versions than we currently have.
That is the idea: basically putting the control of "what to test" remotely in the hands of the individual developer, not only the bot managing the boost.org develop and master branches. The challenge, I believe, is that the added flexibility will cause less structure and easily exhaust the test runner resources unless it is under some sort of control. It may be too easy to post test requests that invoke compilation and testing of parts of boost you do not need to test, and on more runners than you need. — Bjørn
On 10/9/2015 1:38 AM, Bjørn Roald wrote:
On 08 Oct 2015, at 22:06, Edward Diener wrote:
On 10/8/2015 1:46 PM, Bjørn Roald wrote:
On 04 Oct 2015, at 14:49, Raffi Enficiaud wrote:
On 04/10/15 13:38, John Maddock wrote:
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
This sort of problem has been discussed before on this list without any real progress. I think a solution is needed that gives Boost tool maintainers (boost.test is also a tool) services similar to those that library maintainers enjoy. A solution may also provide better test services for all boost developers and possibly other projects. An idea of a possible way forward, providing a test_request service at boost.org/test_request, is outlined below.
I would like thoughts on how useful or feasible such a service would be. These are some questions I would like to have answered:
- Will library maintainers use a boost.org/test_request service?
- How valuable would it be, as compared to merging to develop and waiting for current test reports?
- How much of a challenge would it be to get test runners (new and old) on board?
- How feasible is it to set up a service as outlined below by modifying the current system for regression testing in boost?
- What alternatives exist that provide the same kind of, or better, value to the community, hopefully with less effort? E.g.: can Jenkins or other such test dashboards / frameworks easily be configured to provide the flexibility and features needed here?
removed most of message, see original post.
I think that what you have written is extremely valuable but I think you may underrate the need for individual developers to be able to test their library, or any other Boost library, on their local machine using a testing environment that is part of Boost. Currently that testing tool is usually either Boost.Test or lightweight test.
snipped...
That is the idea: basically putting the control of "what to test" remotely in the hands of the individual developer, not only the bot managing the boost.org develop and master branches. The challenge, I believe, is that the added flexibility will cause less structure and easily exhaust the test runner resources unless it is under some sort of control. It may be too easy to post test requests that invoke compilation and testing of parts of boost you do not need to test, and on more runners than you need.
Could you please try to break up your messages/responses into manageable lines for viewing by others?
The bot managing the boost.org 'master' and 'develop' tests does not currently determine what tests are there. What does determine which regression tests appear are the individual testers who have the resources and the time on their computers to run a regression test. If there were some sort of test runners on the Internet it should be fairly easy to set up regression tests for the major compilers on various platforms, as long as the testing apparatus supported a number of platforms and a wide variety of compilers and their versions.
The biggest problem as I see it is co-ordination between changes to Boost 'develop' and 'master' and the testing apparatus accessing the Boost git repository and periodically running regression tests. Because Boost as a whole is a very large number of individual libraries, it would be wasteful to run regression tests on all libraries every time a change was made to any individual library on whatever branches, presumably 'master' and 'develop' for now, for which we want to automate regression testing. So some other scheme would have to be created to determine how often regression testing would be run, for a particular environment, on Boost as a whole. Also, when an automatic test runner actually runs regression tests, it would need to take a snapshot of the Boost libraries at a given time to prevent the regression test from running its tests against a changing source tree.
On 08/10/15 19:46, Bjørn Roald wrote:
On 04 Oct 2015, at 14:49, Raffi Enficiaud wrote:
On 04/10/15 13:38, John Maddock wrote:
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
This sort of problem has been discussed before on this list without any real progress. I think a solution is needed that gives Boost tool maintainers (boost.test is also a tool) services similar to those that library maintainers enjoy. A solution may also provide better test services for all boost developers and possibly other projects. An idea of a possible way forward, providing a test_request service at boost.org/test_request, is outlined below.
I think the problems are simple:
- the "develop" branch is currently a soup.
- the regression dashboard should be improved.
I will detail those two bullets.
I would like thoughts on how useful or feasible such a service would be. These are some questions I would like to have answered:
- Will library maintainers use a boost.org/test_request service?
- How valuable would it be, as compared to merging to develop and waiting for current test reports?
- How much of a challenge would it be to get test runners (new and old) on board?
As far as I can see, some libraries have testing alternatives. Some are building on Travis. Yesterday, I created a build plan on my local Atlassian Bamboo instance, running the tests on all branches of boost.test against develop, on several platforms. Obviously, "several" platforms/compilers (5) is not on the same scale as the current regression dashboard, but it is a good start. What I need now is a way to publish this information in a public place, because my Bamboo CI is on an internal network.
- How feasible is it to set up a service as outlined below by modifying the current system for regression testing in boost?
I think if we want to reuse or build upon the current system, it is hard and limiting.
- What alternatives exist that provide the same kind of, or better, value to the community, hopefully with less effort? E.g.: can Jenkins or other such test dashboards / frameworks easily be configured to provide the flexibility and features needed here?
I think that what you propose is well covered by already existing tools in the industry. For instance, having a look at Atlassian Bamboo might be a good start:
- it's **free for open source projects**
- it's compiling/testing **one** specific version across many runners, so we have a clear status on one version. The dashboard is currently showing many different versions.
- builds can be manually triggered or triggered on events: e.g. a change on core libraries, a change on one specific library, or scheduled (nightly)
- it's trivial to set up, and we can also have many different targets (continuous, stable, release candidate, etc). It has an extensive way of expressing a build in small jobs (which can be just a script).
- it understands git and submodules: one version is checked out on the central server and dispatched to all runners. Runners can fully cache the git repository locally to lower the traffic and update time.
- it provides metrics on the tests/compilations: these could then be used by release managers to make appropriate decisions on what would be the next stable version to build/test against.
- it understands branches, and can automatically fork the build on new branches: it is then easy to test topic branches on several runners.
- it maintains a history of the build/test sessions (configurable) that allows us to go back in time readily to check what happened.
- it has a very nice interface
- it can dispatch builds/tests based on requirements on the runners: instead of making a run on all available runners, you express the build as having requirements such as Windows+VS2008, Clang6+OSX10.9, etc. The load is also dispatched across runners.
- it's Java based, available as soon as there is a Java VM for a platform.
- etc etc.
The only thing I do not think it addresses today is the asynchronism of the current runner setup: in the current setup, the runners may or may not be available and provide complementary information (some of them are running once a month or so), without being strongly synchronized on the version of the superproject. In the Bamboo setup, the version is the same on all runners, so if runners are not available, this blocks the completion of the build. It's easy to address this issue by having lots of runners providing overlapping requirements, though.
The way I see it is:
1-/ some "continuous" frequent compilation and test is running, using a synchronized version on several runners.
2-/ based on the results (e.g. increased stability, bad commit disaster, unplanned breaking change), a branch on the superproject, e.g. develop-stable, is moved forward to point to a new, tested/confirmed revision of the previous stage (a git sketch of this branch move follows after this message)
3-/ the current runners test against "develop-stable", and provide information on the existing dashboard
4-/ metrics are deployed on the dashboard to see what is happening with boost during development (number of compilation or test failures, etc).
5-/ a general policy/convention is used for master and develop: master is a public candidate, stable and tested. Develop is isolating every module/component and building against master or develop-stable (or both). For instance, boost.test[develop] builds against master (last known public version), except for boost.test itself, which is on develop (next version).
The advantages would be the following:
- develop-stable moves by increments in a stable manner, less frequently and more surely than the current develop
- develop-stable is already tested on several mainstream configurations, so it is an already viable test candidate for the runners. It avoids wasting resources (mostly checkout/compilation/test time, but also human time: interpreting the results, this time with fewer results to parse)
- with "develop-stable", we have real increments of functionality: every step in develop-stable is an improvement on the overall boost, according to universally accepted metrics (yet to be defined).
- having this scheme with bullet 5-/ on master/develop/develop-stable allows testing the changes wrt. what was provided to the end-user (building against master) and wrt. the future release of boost (building against develop-stable). It also decouples the different, potentially unstable states of the different components.
- if we have a candidate on develop-stable or master that is missing some important runners, we can synchronize (humanly) with the runner maintainers to make them available for a specific version. Again, less resource waste and better responsiveness.
The shortcomings are:
- having a develop-stable does not prevent the runners from running on different versions.
- someone/something has the power/decision of moving develop-stable to a new version.
- it triggers more builds (this has to be tempered, though; a build of e.g. boost.test would happen only if boost.test[develop] changes).
What is lacking now:
- a clear stable development branch at the superproject level. The superproject is an **integration** project of many components, and should be used to test the integration of versions of its components (whether they are playing well together). As I said, the current develop branch is a soup, where all the coupling we want to avoid is happening.
- a way to have quick feedback on each of the components, against a stable state. Quick also means fewer runners, available 95% of the time.
- a dashboard summarizing the information much better, keeping a history based on versions, and providing good metrics for evaluating the quality of the integration
As a side note, I created a build plan with Bamboo for boost.test, testing all the branches of boost.test against boost[develop]. This is quite easy to do. An example of a log is here: http://pastebin.com/raw.php?i=4aGPnD1a
Build+test of boost.test took 12min on a windows runner, including checkout, b2 construction and b2 headers.
Raffi
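The branch mechanics behind step 2-/ could be as simple as fast-forwarding a branch pointer in the superproject once a revision has passed the synchronized builds. A minimal git sketch, where develop-stable is the branch named above and <tested-sha> is a placeholder:

    # Advance develop-stable to a revision that passed the continuous stage.
    git checkout develop-stable
    git merge --ff-only <tested-sha>    # refuse anything but a clean fast-forward
    git push origin develop-stable      # existing runners now test against this revision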
On 10/4/15 4:38 AM, John Maddock wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
LOL - I've said this before and I'll say it for the umpteenth time. This is very easy to address.
a) set your local boost super project to the "master" branch. Make sure all the subprojects are set to master.
b) select the library you're working on and set its branch to "develop" or some feature branch
c) make and test your changes. You're now isolated from any transitory issues, experiments or whatever from other boost libraries on the develop branch. When you're done - merge to develop and push to the repo.
This works very, very, very well. It's the way Git was designed to work - I presume to address exactly this problem. Try it, you'll like it.
Of course this doesn't address misleading results in the develop test matrix, which doesn't use this system - but that's not my problem.
Robert Ramey
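In plain git terms, Robert's a)-c) recipe is roughly the following. A sketch assuming a modular Boost clone with submodules, with serialization standing in as the example library:

    # a) put the superproject and every submodule on master
    git checkout master
    git submodule update --init         # submodules move to the recorded master SHAs
    # b) switch only the library being worked on to develop (or a feature branch)
    cd libs/serialization
    git checkout develop
    # c) build and test in isolation from everyone else's develop churn
    cd ../..
    ./b2 libs/serialization/test        # assumes b2 was already bootstrapped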
On 04/10/2015 18:46, Robert Ramey wrote:
On 10/4/15 4:38 AM, John Maddock wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
LOL - I've said this before and I'll say for the umpteenth time.
This is very easy to address.
a) set your local boost super project to "master" branch. Make sure all the subprojects are set to master.
b) select the library you're working on and set the branch to "develop" or some feature branch
c) make and test your changes. You're now isolated from any transitory issues, experiments or whatever from other boost libraries on the develop branch. When you're done - merge to develop and push to the repo.
This works very, very, very well. It's the way Git was designed to work - I presume to address exactly this problem.
Try it, you'll like it.
of course this doesn't address misleading results in the develop test matrix which doesn't use this system - but that's not my problem.
I can't speak for you, but I nearly always find issues in the online testing matrix that are simply not exposed by local testing (and for the record I test locally with MSVC (various versions), GCC (various versions), Intel, clang, and Oracle). Perhaps if I had local access to hardware that wasn't Intel based that might change.... but then I'd be running my own testing matrix! John.
On 10/4/15 11:35 AM, John Maddock wrote:
I can't speak for you, but I nearly always find issues in the online testing matrix that are simply not exposed by local testing (and for the record I test locally with MSVC (various versions), GCC (various versions), Intel, clang, and Oracle). Perhaps if I had local access to hardware that wasn't Intel based that might change.... but then I'd be running my own testing matrix!
Hmmm - I DO run my own testing matrix, made with library_status. It's much simpler than the official one and doesn't require python or anything else. You just need to compile b2, process_jam_log and library_test. Very simple. I run it every time I make any kind of change. It almost always fails somewhere - the serialization library has a lot of tests - but it's easy to fix. And with Clang/Xcode (for which I use CMake/CTest) it's very, very fast - a few seconds for all the serialization library tests. GCC is slower, but still fine.
John.
John Maddock writes:
I can't speak for you, but I nearly always find issues in the online testing matrix that are simply not exposed by local testing
Indeed. Yet another reason why local testing of the whole of boost is not practical. Gennadiy
On 10/4/15 11:35 AM, John Maddock wrote:
I can't speak for you, but I nearly always find issues in the online testing matrix that are simply not exposed by local testing
I'm not disputing this. I'm a) suggesting a method which improves one's local testing, and b) suggesting changes in the online testing which would make it better. Finally, it's not the online testing itself which is valuable/necessary. It's testing on platforms/configurations other than one's own that is important. Whether this happens on some "official" tester's site or on someone else's local machine is not really relevant.
(and for the record I test locally with MSVC (various versions), GCC (various versions), Intel, clang, and Oracle). Perhaps if I had local access to hardware that wasn't Intel based that might change.... but then I'd be running my own testing matrix!
Bjorn Reese writes:
On 10/04/2015 12:18 PM, Gennadiy Rozental wrote:
IMO it makes very little sense to continue to maintain C++03 workarounds. Boost code should be an example of how modern C++ libraries should look. And C++03 compatibility is directly in the way of this goal. Sooner rather than later we should have this discussion and set up a timeline.
You appear to have missed the many discussions on this topic.
Can you please give me some references?
While Boost started out to design cutting-edge libraries, it has been caught by its own success. Today there is a large user-base that still uses C++03, and that are unlikely to upgrade in the foreseeable future.
1. Without data backing this fact, this statement is as good as "Most of our users already moved to c++11". If we measure by the compilers used by our test runners, 80% of them are running c++11-enabled compilers.
2. Those who are not ready to upgrade to a new version of the compiler are very likely not going to upgrade to a new version of boost, so this discussion is irrelevant for them.
Therefore, the current consensus is that existing libraries should not increase their standards requirements. New libraries are free to decide their standards requirements (although it will probably be questioned during a formal review.)
1. There are also libraries which are actively maintained and extended, and those which are not.
2. If new libraries have a c++11 requirement, what is the reason for anyone restricted to c++03 to upgrade to a new version of boost?
3. In general, what are the formal criteria for changing the decision? At which point will we be ready to say: no - we do not test against c++03 anymore? The presence of at least one c++03 test runner can't be a criterion.
Realistically your concern only applies to users, restricted to c++03, who found an issue in an old release (let's say 1.55) in a specific library boost.abc. I am pretty sure (I was in a similar position a few years ago) they would be much happier with a patch release for the specific library, or at the very least a patch release for boost 1.55, instead of being required to upgrade to 1.6x, which brings who knows how many changes. I wish we had some formal regulations for these decisions, instead of some hand waving. Our personal backgrounds can't play into this. Gennadiy
On 6/10/2015 08:53, Gennadiy Rozental wrote:
2. If new libraries have a c++11 requirement, what is the reason for anyone restricted to c++03 to upgrade to a new version of boost?
Bugfixes (and new features, to a more limited extent) don't get backported. If someone reports a bug in older Boost, generally the first response is "use the latest Boost". Which is not an unreasonable response, but it illustrates the problem with assuming that those using C++03 can just stick with an older version forever.
On Mon, Oct 5, 2015 at 2:53 PM, Gennadiy Rozental wrote:
Bjorn Reese writes:
While Boost started out to design cutting-edge libraries, it has been caught by its own success. Today there is a large user-base that still uses C++03, and that is unlikely to upgrade in the foreseeable future.
1. Without data backing this fact, this statement is as good as "Most of our users already moved to c++11". If we measure by the compilers used by our test runners, 80% of them are running c++11-enabled compilers.
I think this is more because 1) compiler versions have been coming out much more rapidly in recent years and 2) the people who run testers have the wherewithal to upgrade the tester. As one of the people who run testers, I look forward to hearing that a new compiler has been released, so that I can go get it into the matrix and keep being complete. However, I would generally consider the old testers (msvc-8.0, gcc-4.6) the more important ones, as that is what a lot of people are still stuck with. These people aren't really represented well in the boost developer community, as developers are much more likely to make the jump to a new toolset, but I still think it is important that they are supported by our project.
2. Those who are not ready to upgrade to a new version of the compiler are very likely not going to upgrade to a new version of boost, so this discussion is irrelevant for them.
From my experience in multiple organizations, this is true about 50% of the time. The other half of the time, someone stuck with an old compiler (there are lots of these people still doing active development of new features) wants to get a bug fix or new feature from boost. I think it is important that we support them for the foreseeable future. Tom
> 2. Those who are not ready to upgrade to new version of the compiler, are very likely not going to upgrade to new version of boost, so this discussion is irrelevant for them.
I disagree. Folks don't upgrade just because they have a new compiler; they upgrade Boost to get the latest bug fixes.
>> Therefore, the current consensus is that existing libraries should not increase their standards requirements. New libraries are free to decide their standards requirements (although it will probably be questioned during a formal review.)
> 1. There also libraries which are actively maintained and extended and those which are not.
The point is, your changes break Boost.Test on innumerable older compilers. I just had a look at the test matrix for the Math lib and there's so much stuff failing now from Boost.Test that I can simply no longer tell what needs fixing. That at least is a maintained library; if I can find the time I might do something about it by removing all Boost.Test dependencies - though heaven only knows there are way better uses for my time.
The situation for older, less well maintained libraries is frankly pretty dire: do you suppose that the community maintenance team is going to rewrite the test suites for all the unmaintained libraries? How about libraries where the maintainer is only occasionally around here? I think you greatly underestimate how much this hurts (again). I sincerely hope I'm wrong, but for authors who are only just finding time to maintain their Boost stuff, this could easily cause them to walk.
Regards, John.
On October 5, 2015 3:53:16 PM EDT, Gennadiy Rozental wrote:
Bjorn Reese writes:
Today there is a large user-base that still uses C++03, and that is unlikely to upgrade in the foreseeable future.
1. Without data backing this fact, this statement is as good as "Most of our users already moved to c++11". If we measure by the compilers used by our test runners, 80% of them are running c++11-enabled compilers.
Here's more anecdotal evidence: we deploy software on multiple Linux and Windows versions from the same code base. In some cases, we reference different versions of Boost on the various platforms, but in others we use the same or a restricted set of recent versions to get desired fixes or features. That said, only some of those platforms offer support for C++11, so we can't use it yet in that code.
2. Those who are not ready to upgrade to a new version of the compiler are very likely not going to upgrade to a new version of boost, so this discussion is irrelevant for them.
That is contrary to my experience.
3. In general, what are the formal criteria for changing the decision? At which point will we be ready to say: no - we do not test against c++03 anymore? The presence of at least one c++03 test runner can't be a criterion.
Boost will not make that decision. Each library maintainer will. You have to decide whether and how Boost.Test will support the Boost libraries that support C++03. ___ Rob (Sent from my portable computation engine)
On 03/10/15 23:10, Edward Diener wrote:
On 10/3/2015 3:15 PM, Raffi Enficiaud wrote:
I cannot compile it without C++11 support, though, for 2 reasons:
- lockfree commit 9f52c24 unconditionally uses <atomic>, but that header is available only with C++11 support
- boost.test contains references to C++11 constructs.
For boost.test, Gennadiy and I have to come up with a solution.
First, Boost Test cannot require C++11 support. If you want to create a Boost Test which does require C++11 support, make a Boost Test2 or whatever you want to call your new library that requires C++11 support. Others have said the same thing. It is beyond me how you or Gennadiy arbitrarily decided that libraries using Boost Test must run with C++11 support when you both know that there are many Boost libraries that do not require or need C++11, and these libraries use Boost Test.
This is more or less what I said.
Second, if lockfree requires C++11 support and it tries to compile without it, then that is lockfree's problem and not Boost Test's problem.
And this is more or less what I suggested.
Tim Blechmann writes:
All tests for lockfree in both master and develop branch seem to fail.
Are you saying that master worked in 1.59, but fails now? Raffi, did we push any changes to master past 1.59 already? I didn't think so
as the tests of many boost libraries depend on boost.test, i'd suggest to run the complete tests of *all* boost libraries before pushing a change
Do you mean pushing the change in develop or master? In either case, how do you practically suggest one can do it? Which configurations? Which changes? Gennadiy
On 04/10/15 12:09, Gennadiy Rozental wrote:
Tim Blechmann writes:
All tests for lockfree in both master and develop branch seem to fail.
Are you saying that master worked in 1.59, but fails now? Raffi, did we push any changes to master past 1.59 already? I didn't think so
Nothing has been pushed to master since 1.59. As I said, some errors are due to lockfree, some others to boost.test. Raffi
All tests for lockfree in both master and develop branch seem to fail.
Are you saying that master worked in 1.59, but fails now? Raffi, did we push any changes to master past 1.59 already? I didn't think so
Nothing has been pushed to master since 1.59. As I said, some errors are due to lockfree, some others to boost.test.
well, the bug for c++03 support of boost.lockfree was only present on clang, while almost all tests on all platforms were failing ... so something must have changed in master, as at one point the tests did compile. even if this was caused by a user bug, it must have been exposed by some change to boost.test, which would most likely have been caught by running the complete tests of boost before and after a merge from develop to master. if the results are different, it might indicate a bug (or user bug). if it shows a bug, please don't push the merge. if it shows a user bug, please notify the corresponding library maintainers. thnx, tim who never experienced this situation with any other testing framework, while it's the second time for boost.test ...
On 04/10/15 15:58, Tim Blechmann wrote:
All tests for lockfree in both master and develop branch seem to fail.
Are you saying that master worked in 1.59, but fails now? Raffi, did we push any changes to master past 1.59 already? I didn't think so
Nothing has been pushed to master since 1.59. As I said, some errors are due to lockfree, some others to boost.test.
well, the bug for c++03 support of boost.lockfree was only present on clang, while almost all tests on all platforms were failing ... so something must have changed in master, as at one point the tests did compile.
I do not think we are talking about the same problem. Just to make sure: I am referring to the fact that lockfree is now including <atomic> relying solely on the version of clang. However, code can be compiled with clang with or without C++11 support, and in the latter case <atomic> is not available. So the compiler version alone is not enough. But it seems you have addressed this issue today, with commit a2bbf2c.
even if this was caused by a user bug, it must have been exposed by some change to boost.test, which would most likely have been caught by running the complete tests of boost before and after a merge from develop to master. if the results are different, it might indicate a bug (or user bug). if it shows a bug, please don't push the merge. if it shows a user bug, please notify the corresponding library maintainers.
As we said, we haven't pushed any change to master since 1.59. OTOH, the website encountered some problems updating the test results (which might explain why some of those went unnoticed), and the problem I mentioned above concerning lockfree appeared at commit 9f52c24 (master), which is post 1.59.
I believe:
- lockfree[master] vs boost.test[master] is failing, because of the <atomic> issues
- lockfree[develop] vs boost.test[master] is ok with your latest changes
- lockfree[develop] vs boost.test[develop] is failing because of the boost.test[develop] C++11 issues.
So you *should* (in theory) see your tests ok once you merge lockfree[develop] into master. Lockfree tests in develop will fail until we fix the issue in develop. Raffi
Tim Blechmann writes:
All tests for lockfree in both master and develop branch seem to fail.
Are you saying that master worked in 1.59, but fails now? Raffi, did we push any changes to master past 1.59 already? I didn't think so
Nothing has been pushed to master since 1.59. As I said, some errors are due to lockfree, some others to boost.test.
well, the bug for c++03 support of boost.lockfree was only present on clang, while almost all tests on all platforms were failing ... so something must have changed in master, as at one point the tests did compile.
Are you saying this bug is present in 1.59? As far as I know we did not make any change to master since then.
thnx, tim who never experienced this situation with any other testing framework, while it's the second time for boost.test ...
What situation do you mean? A library you depend upon which is being developed? Gennadiy
On 03/10/2015 15:21, Tim Blechmann wrote:
All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions? boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
Actually this *may* be a Boost.Build issue, test_exec_monitor has as a usage requirement:

    lib boost_test_exec_monitor
        : # sources
        $(TEST_EXEC_MON_SOURCES).cpp
        : # requirements
        <link>static
        : # default build
        : # usage-requirements
        <link>shared:<define>BOOST_TEST_DYN_LINK=1
        ;

And it's that <link>static that's causing the issue; if I build lockfree with:

    bjam link=static

then everything is fine. But otherwise there are errors inside Boost.System as both BOOST_SYSTEM_STATIC_LINK and BOOST_SYSTEM_DYN_LINK are getting defined. Anyone understand this? John.
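For anyone wanting to reproduce what John describes, the difference is just the link property on the b2 command line. A sketch assuming a bootstrapped source tree, run from the boost root:

    # Default build: dependents link shared, the usage requirement injects
    # BOOST_TEST_DYN_LINK, and Boost.System ends up with both
    # BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK defined.
    b2 libs/lockfree/test               # trips the #error in boost/system/config.hpp
    # Forcing static linking throughout sidesteps the conflict, per John's observation.
    b2 libs/lockfree/test link=static   # builds cleanly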
On 04/10/15 19:09, John Maddock wrote:
On 03/10/2015 15:21, Tim Blechmann wrote:
All tests for lockfree in both master and develop branch seem to fail. Error message is
"../boost/system/config.hpp", line 34: Error: #error Must not define both BOOST_SYSTEM_DYN_LINK and BOOST_SYSTEM_STATIC_LINK.
See develop branch: http://www.boost.org/development/tests/develop/developer/lockfree.html
See master branch: http://www.boost.org/development/tests/master/developer/lockfree.html
I looked at lockfree/test/Jamfile.v2 but am not sure what change needs to be made.
Any suggestions? boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
Actually this *may* be a Boost.Build issue, test_exec_monitor has as a usage requirement:
lib boost_test_exec_monitor : # sources $(TEST_EXEC_MON_SOURCES).cpp : # requirements <link>static : # default build : # usage-requirements <link>shared:<define>BOOST_TEST_DYN_LINK=1 ;
And it's that <link>static that's causing the issue, if I build lockfree with:
bjam link=static
Then everything is fine. But otherwise there are errors inside Boost.System as both BOOST_SYSTEM_STATIC_LINK and BOOST_SYSTEM_DYN_LINK are getting defined.
Anyone understand this?
We had such an issue with Boost.Thread when Boost.Test was using (or attempting to use) the new timer library, bringing boost.chrono into the dependency graph. The dependency chain of the final test modules was: test_module <- boost.thread <- boost.chrono <- boost.system <- boost.chrono <- boost.system. The problem arose because those two subchains were using different compilation options for boost.chrono <- boost.system. Maybe this is related. Raffi
John Maddock writes:
boost lockfree's testsuite hasn't changed for a long time. i have no idea why the tests are failing, so something must have changed in boost.test.
Actually this *may* be a Boost.Build issue, test_exec_monitor has as a usage requirement:
lib boost_test_exec_monitor
Test Exec Monitor was deprecated about... 7 or 8 *years* ago. Very publicly. Not that it is the cause of the issue here, but still... Gennadiy
participants (17)
- Andrey Semashev
- Aparna Kumta
- Bjorn Reese
- Bjørn Roald
- Edward Diener
- Gavin Lambert
- Gennadiy Rozental
- John Maddock
- M.A. van den Berg
- Paul A. Bristow
- Raffi Enficiaud
- Rob Stewart
- Robert Ramey
- Stephen Kelly
- Tim Blechmann
- Tom Kent
- Vladimir Prus