How to make tests build faster?

Hi,

I'm trying to figure out if there is anything I could do to improve compile/link time for the tests of the Boost.Geometry library. Currently, the tests follow the fairly canonical approach in Boost:

1) Each .cpp file defines a single test program, and all local test routines are executed from test_main():

// test_feature_x.cpp
void test1() {} // makes use of BOOST_CHECK_* macros
void test2() {}

int test_main(int, char* [])
{
    test1();
    test2();
    return 0;
}

2) A Jamfile per testing project builds and runs each program separately:

# features/Jamfile.v2
test-suite test-features
    : [ run test_feature_x.cpp ]
      [ run test_feature_y.cpp ]
    ;

3) The root mylib/test/Jamfile.v2 defines the testing projects to build and run:

import testing ;
project boost-mylib-test : ;
build-project features ;
build-project algorithms ;
build-project abc ;

In total, there are nearly 170 test programs in Boost.Geometry. Obviously, running the whole set of tests (the b2 command issued in libs\geometry\test) is a time-consuming process. I'm wondering if there is any way to reorganise the tests to cut down the build (linking) time.

My first idea is to decrease the number of run entries (e.g. [ run test_feature_y.cpp ]) in the Jamfiles by building related tests as a single test-suite program. Conceptually:

[ run test_feature_x.cpp test_feature_y.cpp ]

Currently, there is no use of Boost.Test features for test organisation like BOOST_AUTO_TEST_CASE, BOOST_TEST_SUITE, etc. I wonder if use of any of the above would help in restructuring the tests and decreasing the number of programs to build.

Another aspect of the Boost.Geometry tests I'd like to improve is the test report output. Currently, each test is reported only as *Passed* or *Failed*. I presume that using Boost.Test to group tests into suites with test cases would improve the report by providing details about the location of a failure (which suite, which case, or even better).

I'm looking for any piece of advice on the issues discussed above:
- How to cut down the build time of the tests?
- How to improve the test output report (and keep it suitable for the Boost regression testing framework)?
- Is it advised to switch to Boost.Test features to manage suites and test cases? (I tried to figure it out by browsing the tests of other libs, but they don't indicate any preferred practice.)

I'd be thankful for any insights.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net

on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Loskot <mateusz-AT-loskot.net> wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Hmmm - I wouldn't be crazy about this idea. The test matrix reports pass/fail often with little other information, so putting a lot of tests into the same executable will lose information. In general I like the idea of one compilation per test. I would think that, longer term, the approach would be to permit the tests to run simultaneously on a multi-core system. Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure. Robert Ramey

AMDG

On 12/20/2011 08:47 AM, Robert Ramey wrote:
Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Loskot <mateusz-AT-loskot.net> wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Hmmm - I wouldn't be crazy about this idea. The test matrix reports pass/fail often with little other information, so putting a lot of tests into the same executable will lose information. In general I like the idea of one compilation per test.
For failures, the output of the test is shown. As long as you make sure that the test program logs all failures, it should be fine.
I would think that, longer term, the approach would be to permit the tests to run simultaneously on a multi-core system.
This has been allowed forever with bjam -jXX.
Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure.
Boost.Build has support for precompiled headers. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
For failures, the output of the test is shown. As long as you make sure that the test program logs all failures, it should be fine.
The problem is that some failures are unanticipated. For example, some standard library function might not be working on some platform. The test just bails.
I would think that, longer term, the approach would be to permit the tests to run simultaneously on a multi-core system.
This has been allowed forever with bjam -jXX.
Hmmm - then what is the problem? Is this not being used? Or is there some other reason it's not being used? Does it really work with no surprises?
Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure.
Boost.Build has support for precompiled headers.
Hmmm - wouldn't this require making the bjam files even more elaborate than they already are? I'm reluctant to start messing with things once they start working, as I've found Jamfiles hard to understand and debug - especially when they're running on some test system far removed from me. I know there's no solution to this - I'm just mentioning that I don't think this is practical for "the rest of us". Robert Ramey
In Christ, Steven Watanabe
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Tue, Dec 20, 2011 at 12:56 PM, Robert Ramey <ramey@rrsd.com> wrote:
Steven Watanabe wrote:
I would think that, longer term, the approach would be to permit the tests to run simultaneously on a multi-core system.
This has been allowed forever with bjam -jXX.
Hmmm - then what is the problem? Is this not being used? Or is there some other reason it's not being used?
It is entirely up to the individual testers whether they choose to devote multiple CPUs to testing. And this is intentional, as they are the ones who know how much they can afford to devote.
Does it really work with no surprises?
Yes.
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo

Boost.Build has support for precompiled headers.
Hmmm - wouldn't this require making the bjam files even more elaborate than they already are? I'm reluctant to start messing with things once they start working, as I've found Jamfiles hard to understand and debug - especially when they're running on some test system far removed from me. I know there's no solution to this - I'm just mentioning that I don't think this is practical for "the rest of us"
Robert, adding PCH support to your build script really isn't that hard:

* Create your pch header file.
* Add a "cpp-pch" target to the Jamfile:

    cpp-pch my_target_name : my_pch_header.hpp ;

* Add "my_target_name" as a source dependency of all the tests that use it.
* Add my_pch_header.hpp as the first include in all the tests that use it.

And that's it, job done. Just feel the speed ;-)

HTH, John.
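[Editor's note: putting John's four steps together, a minimal Jamfile sketch might look like the following; my_pch_header.hpp and the target/file names are placeholders from John's outline, not actual Boost.Geometry files.]

```jam
# features/Jamfile.v2 -- PCH sketch; names are illustrative
import testing ;

# Build the precompiled header once...
cpp-pch my_pch : my_pch_header.hpp ;

# ...and list it as a source of every test whose first include is
# my_pch_header.hpp.
test-suite test-features
    : [ run test_feature_x.cpp my_pch ]
      [ run test_feature_y.cpp my_pch ]
    ;
```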

On 20 December 2011 18:20, Steven Watanabe <watanabesj@gmail.com> wrote:
On 12/20/2011 08:47 AM, Robert Ramey wrote:
Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Loskot <mateusz-AT-loskot.net> wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Hmmm - I wouldn't be crazy about this idea. The test matrix reports pass/fail often with little other information, so putting a lot of tests into the same executable will lose information. In general I like the idea of one compilation per test.
For failures, the output of the test is shown. As long as you make sure that the test program logs all failures, it should be fine.
Steven, could you explain what "logs all failures" means? AFAIU, currently, test_main() in the geometry tests simply returns zero unless any of the BOOST_CHECK_* checks fail. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure.
Boost.Build supports precompiled headers already: Boost.Math uses it to significantly speed up compile times. The only drawback is if you change one header, then all the tests get rebuilt 'cos the precompiled header pulls in the lot :-( Another option, if the same template instances are used in multiple tests, would be to move template instantiation into separate files, although that's not easy to achieve portably unfortunately. HTH, John.

On 20 December 2011 18:55, John Maddock <boost.regex@virgin.net> wrote:
Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure.
Boost.Build supports precompiled headers already: Boost.Math uses it to significantly speed up compile times.
I've skimmed the Jamfile.v2 and it looks fairly straightforward to configure. I'll try to test this approach. Looking at pch.hpp, I wonder how you decided which headers to put there? It looks like a carefully selected set. Best regards -- Mateusz Loskot, http://mateusz.loskot.net

On 20 December 2011 18:55, John Maddock <boost.regex@virgin.net> wrote:
Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure.
Boost.Build supports precompiled headers already: Boost.Math uses it to significantly speed up compile times.
I have done a quick test in libs/geometry/test/algorithms only. I followed Boost.Math and patched the relevant files: http://mateusz.loskot.net/tmp/boost/geometry/boost-geometry-test-algorithm-u... There is room for improvement (adding more headers to pch.hpp, etc.), but even as a rough test the results are promising. Using Visual Studio 2010, the b2 command in libs/geometry/test/algorithms builds 37 programs:
1) Without PCH: 6:19 (min:sec)
2) With PCH: 2:36
I noticed a compile error for the string_from_type<T> specialisations in the utility header libs/geometry/test/geometry_test_common.hpp, so I didn't put this header into pch.hpp, but it should be feasible to solve that later. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

On 20 December 2011 16:47, Robert Ramey <ramey@rrsd.com> wrote:
Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Loskot <mateusz-AT-loskot.net> wrote:
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Hmmm - I wouldn't be crazy about this idea. The test matrix reports pass/fail often with little other information, so putting a lot of tests into the same executable will lose information. In general I like the idea of one compilation per test.
The regression matrix issue you are pointing here is something I'm worried about, indeed.
I would think that, longer term, the approach would be to permit the tests to run simultaneously on a multi-core system.
Sounds like a job for Boost.Build, doesn't it?
Another idea would be to make better use of pre-compiled headers. These are supported by both gcc and msvc. Again, this would require non-trivial changes in the build/test infrastructure.
I have been contemplating the idea of leaving the source code of the tests structured as it is now, but having a tool which generates some sort of "unity build" - all tests in a single translation unit. Perhaps, having all tests wrapped with Boost.Test macros for test cases, it would be possible to have such a feature configured at compile time by Boost.Test, with Boost.Build support to switch between the many-runners and single-runner variants. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org Member of ACCU, http://accu.org

AMDG

On 12/20/2011 04:01 PM, Mateusz Łoskot wrote:
I have been contemplating the idea of leaving the source code of the tests structured as it is now, but having a tool which generates some sort of "unity build" - all tests in a single translation unit.
Perhaps, having all tests wrapped with Boost.Test macros for test cases, it would be possible to have such a feature configured at compile time by Boost.Test, with Boost.Build support to switch between the many-runners and single-runner variants.
Best regards,
The easiest way to do it with Boost.Test is to use automatic registration and #include all the source files in a single file. In Christ, Steven Watanabe

On 21 December 2011 01:44, Steven Watanabe <watanabesj@gmail.com> wrote:
On 12/20/2011 04:01 PM, Mateusz Łoskot wrote:
I have been contemplating the idea of leaving the source code of the tests structured as it is now, but having a tool which generates some sort of "unity build" - all tests in a single translation unit.
Perhaps, having all tests wrapped with Boost.Test macros for test cases, it would be possible to have such a feature configured at compile time by Boost.Test, with Boost.Build support to switch between the many-runners and single-runner variants.
Best regards,
The easiest way to do it with Boost.Test is to use automatic registration and #include all the source files in a single file.
Steven, That's what I want. Another reason to switch to Boost.Test and automatic registration. Thanks! Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

AMDG

On 12/20/2011 08:23 AM, Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
That works as long as you don't have a lot of compile-fail tests, which must be in separate translation units. In Christ, Steven Watanabe
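[Editor's note: in Jamfile terms this means the compile-fail subset keeps the one-file-per-target layout even if the run tests are merged; the file names below are invented for illustration.]

```jam
# compile-fail tests stay one per translation unit
test-suite compile-failures
    : [ compile-fail fail_mismatched_dimensions.cpp ]
      [ compile-fail fail_invalid_geometry_tag.cpp ]
    ;
```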

On 20 December 2011 18:23, Steven Watanabe <watanabesj@gmail.com> wrote:
On 12/20/2011 08:23 AM, Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
That works as long as you don't have a lot of compile-fail tests, which must be in separate translation units.
I assumed it is obvious that compile-fail tests are in separate translation units. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

on Tue Dec 20 2011, Steven Watanabe <watanabesj-AT-gmail.com> wrote:
AMDG
On 12/20/2011 08:23 AM, Dave Abrahams wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there. The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
That works as long as you don't have a lot of compile-fail tests, which must be in separate translation units.
...which is why I was interested in testing expressions for invalidity per http://web.archiveorange.com/archive/v/NDiIbkPWvtVaoQn5eGSH -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 20 December 2011 16:23, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there.
Yes, I'm aware that's a weak link.
The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Interesting. Would you use Boost.Test for your next library? Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
On 20 December 2011 16:23, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there.
Yes, I'm aware that's a weak link.
The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Interesting. Would you use Boost.Test for your next library?
I might. Historically, I have not needed what Boost.Test provides, and issues with the stability of Boost.Test, especially close to release times, have made me wary. However, IIUC, it is well-suited to the many-tests-in-one-executable model. So I'd probably take another look at it. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 23 December 2011 18:19, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
On 20 December 2011 16:23, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Dec 20 2011, Mateusz Łoskot <mateusz-AT-loskot.net> wrote:
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library. Currently, the tests follow fairly canonical approach in Boost:
1) Each .cpp file defines a single test program and all local test routines are executed from test_main()
That's your problem right there.
Yes, I'm aware that's a weak link.
The canonical organization is unfriendly to fast test times and I would not use it for my next library. It's better to put more tests together in the same executable, and more in the same translation unit.
Interesting. Would you use Boost.Test for your next library?
I might. Historically, I have not needed what Boost.Test provides, and issues with the stability of Boost.Test, especially close to release times, have made me wary. However, IIUC, it is well-suited to the many-tests-in-one-executable model. So I'd probably take another look at it.
Dave, Thanks for sharing your insights. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

On 20/12/11 14:39, Mateusz Łoskot wrote:
Hi,
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of the Boost.Geometry library. In total, there are nearly 170 test programs in Boost.Geometry. Obviously, running the whole set of tests (the b2 command issued in libs\geometry\test) is a time-consuming process.
Could you give some figures?
I'm wondering if there is any way to reorganise the tests to cut down the build (linking) time. My first idea is to decrease the number of run entries (e.g. [ run test_feature_y.cpp ]) in Jamfiles, by building related tests as a single test-suite program. Conceptually:
[ run test_feature_x.cpp test_feature_y.cpp ]
I guess you will need to reorganize your tests so this combination works, but it should surely reduce the time to run all the tests. One possible problem is that incremental build times could increase. The other is the reporting: the combined run test passes or fails globally (unless you use a transformation that gives results for specific tests).
Currently, there is no use of Boost.Test features for tests organisation like BOOST_AUTO_TEST_CASE BOOST_TEST_SUITE etc.
I wonder if use of any of the above would help in restructuring tests and decreasing number of programs to build.
Yes, BOOST_AUTO_TEST_CASE should help to combine several .cpp files into one executable.
Another aspect of Boost.Geometry tests I'd like to improve is to improve tests report output. Currently, it is reported as *Passed* or *Failed*.
I would like to have the possibility of a "Not applicable" result for configurations on which the test makes no sense.
I presume that using Boost.Test to group tests in suites with test cases would improve the test report, providing details about the location of a failure (which suite, which case, or even better).
I have never used it, but there is a possibility to get XML output with Boost.Test. I don't know if this output could be adapted to the input the regression tests are expecting.
I'm looking for any piece of advice on the issues discussed above:
- How to cut down build time of tests?
- How to improve test output report (and keep it suitable for Boost regression testing framework)?
- Is it advised to switch to use Boost.Test features to manage suites and test cases? (I tried to figure it out browsing tests of other libs, but it doesn't indicate any preferred practice.)
I started using Boost.Test and I abandoned it because Boost.Test was broken on cygwin since I don't remember which version, and the reports of individual tests don't appear in the regression tests. Of course, I would use it if Boost.Test were supported on this platform and the regression test report took care of individual tests.
I'd be thankful for any insights. Best, Vicente

Vicente J. Botet Escriba wrote:
On 20/12/11 14:39, Mateusz Loskot wrote:
I'm looking for any piece of advice on the issues discussed above: - How to cut down build time of tests? - How to improve test output report (and keep it suitable for Boost regression testing framework)? - Is it advised to switch to use Boost.Test features to manage suites and test cases?
I started using Boost.Test and I abandoned it because Boost.Test was broken on cygwin since I don't remember which version, and the reports of individual tests don't appear in the regression tests. Of course, I would use it if Boost.Test were supported on this platform and the regression test report took care of individual tests.
(I tried to figure it out browsing tests of other libs, but it doesn't indicate any preferred practice).
I'd be thankful for any insights.
Here's an idea - TEST LESS. Really - I've organized my testing strategy to do just this (the serialization library runs 200+ tests). Here is what I do:

a) On my local system I run my current version of the serialization library against the Boost release branch. I do this by switching the serialization directories on my local system to the trunk, while leaving the rest of boost on my local system as the release branch. This effectively tests my next version of the serialization library against the "next" release. This has a bunch of advantages:

1) I control when the other boost components change by updating the release branch on my local machine only occasionally - like before a check-in. So usually, not all of my tests have to be rebuilt. Normally it's only the one that's required to test the header I actually changed. If I make an improvement on one test, only that test gets rebuilt.

2) I'm testing against "known good" components - the next release branch. This means that if Boost.Test or some other component (and I depend on a lot of them) misbehaves, I'm not stuck trying to track down some artifact that is really due to a temporary condition in some other library. Does it make any sense at all to test one set of code (my library) with a tool which is in a state of development (any code in the trunk)? Of course it doesn't. Doing things this way lets one use Boost.Test without overloading the Boost.Test developer with the responsibility to have his version in the trunk bug-free and rock solid all the time - which is not what the trunk is for.

This saves HUGE amounts of machine time on my local system and, even more important, HUGE amounts of my personal time waiting for tests and tracking down test artifacts. Try this out - it will help a lot.

Robert Ramey
Best, Vicente

On 20 December 2011 19:22, Robert Ramey <ramey@rrsd.com> wrote:
Vicente J. Botet Escriba wrote:
On 20/12/11 14:39, Mateusz Loskot wrote:
I'm looking for any piece of advice on the issues discussed above: - How to cut down build time of tests? - How to improve test output report (and keep it suitable for Boost regression testing framework)? - Is it advised to switch to use Boost.Test features to manage suites and test cases?
I started using Boost.Test and I abandoned it because Boost.Test was broken on cygwin since I don't remember which version, and the reports of individual tests don't appear in the regression tests. Of course, I would use it if Boost.Test were supported on this platform and the regression test report took care of individual tests.
(I tried to figure it out browsing tests of other libs, but it doesn't indicate any preferred practice).
I'd be thankful for any insights.
Here's an idea - TEST LESS.
Sounds familiar :)
Here is what I do:
a) on my local system I run my current version of the serialization library against the Boost Release Branch. I do this by changing the directory of the serialization on my local system to the trunk while leaving the rest of boost on my local system as the release branch. This effectively tests my next version of the serialization library against the "next" release. This has a bunch of advantages:
I'm interested in your practice here. How do you do the "changing the directory of the serialization"? Do you copy and overwrite, use symlinks?
1) I control when the other boost components change by updating the release branch on my local machine only occasionally - like before a check-in. So usually, not all of my tests have to be rebuilt. Normally it's only the one that's required to test the header I actually changed. If I make an improvement on one test, only that test gets rebuilt.
Sounds reasonable.
2) I'm testing against "known good" components - the next release branch. [...] Doing things this way lets one use Boost.Test without overloading the Boost.Test developer with the responsibility to have his version in the trunk bug-free and rock solid all the time - which is not what the trunk is for.
I have to admit, I have been too lazy to use branches/release that way. Anyhow, this approach sounds very reasonable, and I've got a new, bigger hard drive now, so no excuse.
Try this out - it will help a lot.
I will. Thanks! Your approach combined with John's suggestion to use PCH should be a nice time saver. Plus, I'll use the bjam jobs switch, even if I have only two cores. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

Mateusz Loskot wrote:
I'm interested in your practice here. How do you do the "changing the directory of the serialization"? Do you copy and overwrite, use symlinks?
My whole boost/release tree is connected to the boost SVN system. There is an SVN command which lets you switch a directory to another SVN branch. I use the TortoiseSVN GUI, so I don't remember the exact command - it might be switch. Hint: make sure you have everything checked in before you do this.
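[Editor's note: the command is svn switch. For the layout Robert describes later in the thread, a session might look roughly like this; the paths and repository-relative URLs are illustrative, not his exact setup.]

```
cd c:\BoostRelease
svn switch ^/trunk/boost/archive boost\archive
svn switch ^/trunk/boost/serialization boost\serialization
svn switch ^/trunk/libs/serialization libs\serialization
```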
Your approach combined with John's suggestion to use PCH should be a nice time saver.
It is a time saver. But it's much more than that. It changes the whole game. Now I'm not coupled to any quirks/experiments in the trunk. And I can be almost 100% sure that when I check into the trunk - then move my changes to release - there will almost never be a problem. This isn't hype. This has saved me tons of aggravation.
Plus, I'll use the bjam jobs switch, even if I have only two cores.
Best regards,

On 22 December 2011 01:49, Robert Ramey <ramey@rrsd.com> wrote:
Mateusz Loskot wrote:
I'm interested in your practice here. How do you do the "changing the directory of the serialization"? Do you copy and overwrite, use symlinks?
My whole boost/release tree is connected to the boost SVN system. There is an SVN command which lets you switch a directory to another SVN branch. I use the TortoiseSVN GUI, so I don't remember the exact command - it might be switch. Hint: make sure you have everything checked in before you do this.
I presume you use svn switch. Neat.
Your approach combined with John's suggestion to use PCH should be a nice time saver.
It is a time saver. But it's much more than that.
Yes, I do see advantages. I'll try to adopt this idea myself. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

On 22 December 2011 01:49, Robert Ramey <ramey@rrsd.com> wrote:
Mateusz Loskot wrote:
I'm interested in your practice here. How do you do the "changing the directory of the serialization"? Do you copy and overwrite, use symlinks?
My whole boost/release tree is connected to the boost SVN system. There is an SVN command which lets you change a directory to another SVN branch. I use the TortoiseSVN GUI so I don't remember the exact command - it might be switch. Hint: make sure you have everything checked in before you do this.
Robert, I have an additional question about the testing against release branch approach. I assume you work with trunk and branches/release on the same machine/system. How do you switch between Boost.Build from trunk and from branches/release? On Windows, I build and install BBv2 this way: bootstrap.bat .\b2 --prefix=C:\usr install If C:\usr contains BBv2 from trunk (C:\usr\bin is in PATH) and I try to run the b2 command inside branches/release, I sometimes get BBv2 errors (e.g. from .jam files). Then I have to wipe out C:\usr, go to branches/release/tools/build/v2 and run the BBv2 installation again: bootstrap.bat .\b2 --prefix=C:\usr install It is quite a hassle. Is there any convenient way to have different versions of BBv2 installed? Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

Mateusz Loskot wrote:
On 22 December 2011 01:49, Robert Ramey <ramey@rrsd.com> wrote: I have an additional question about the testing against release branch approach.
I assume you work with trunk and branches/release in the same machine/system. How do you switch between Boost.Build from trunk and from branches/release?
On Windows, I build and install BBv2 this way:
bootstrap.bat .\b2 --prefix=C:\usr install
If C:\usr contains BBv2 from trunk (C:\usr\bin is in PATH) and I try to run b2 command inside branches/release, I sometimes get BBv2 errors (e.g. from .jam files).
Then I have to wipe out C:\usr, go to branches/release/tools/build/v2 and run the BBv2 installation again:
bootstrap.bat .\b2 --prefix=C:\usr install
It is quite a hassle. Is there any convenient way to have different versions of BBv2 installed?
hmmm - I'm not aware of this issue. Here is what I do:
a) I have svn checked out from the release branch to a directory. On my windows system it's c:\BoostRelease.
b) Then I use svn switch to switch the serialization directories to the trunk. For the serialization library this is three directories: c:\BoostRelease\boost\archive, c:\BoostRelease\boost\serialization, c:\BoostRelease\libs\serialization.
c) If necessary, one can build using boost build v2 - I forget the procedure. But basically boost build is indifferent to whether it's the release or trunk.
d) Then I make sure that the tool executables are in the current path.
e) Then cd to boost/libs/serialization/test.
f) Invoke the shell script batch file ..\..\..\tools\library_test (or library_status, I don't remember) and the system will then:
i) build all the prerequisite libraries
ii) build the serialization libraries
iii) run the serialization library tests
iv) create an html table with all the results.
I'm sure there are other ways to do this but that's how I do it. As I said before, boost build and other tools work the same regardless of the branch.
Finally, I also have a c:\boosttrunk directory which is a checkout from the trunk. I used to have it there just "in case". Also it was sometimes handy for comparing trunk/release etc. But since it was always out of date, I don't ever use it any more. I just use SVN diff. I think I'll just delete my local copy of the trunk.
Robert Ramey
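Steps a) and b) might look like this on the command line (a sketch only: Robert uses the TortoiseSVN GUI, and the repository URLs are assumptions based on the standard Boost SVN layout):

```shell
# Start from a release-branch working copy (c:\BoostRelease in Robert's setup),
# then switch the three serialization directories to their trunk counterparts.
# The URLs below are assumed, not taken from Robert's message.
cd /c/BoostRelease
svn switch https://svn.boost.org/svn/boost/trunk/boost/archive boost/archive
svn switch https://svn.boost.org/svn/boost/trunk/boost/serialization boost/serialization
svn switch https://svn.boost.org/svn/boost/trunk/libs/serialization libs/serialization
svn status    # the switched directories now track trunk; everything else stays on release
```

As Robert hints, commit or shelve any local changes first, since svn switch merges the new branch into the working copy.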

On 6 January 2012 01:58, Robert Ramey <ramey@rrsd.com> wrote:
Mateusz Loskot wrote:
On 22 December 2011 01:49, Robert Ramey <ramey@rrsd.com> wrote: I have an additional question about the testing against release branch approach.
I assume you work with trunk and branches/release in the same machine/system. How do you switch between Boost.Build from trunk and from branches/release?
On Windows, I build and install BBv2 this way:
bootstrap.bat .\b2 --prefix=C:\usr install
If C:\usr contains BBv2 from trunk (C:\usr\bin is in PATH) and I try to run b2 command inside branches/release, I sometimes get BBv2 errors (e.g. from .jam files).
Then I have to wipe out C:\usr, go to branches/release/tools/build/v2 and run the BBv2 installation again:
bootstrap.bat .\b2 --prefix=C:\usr install
It is quite a hassle. Is there any convenient way to have different versions of BBv2 installed?
hmmm - I'm not aware of this issue. Here is what I do:
a) I have svn checked out from the release branch to a directory. On my windows system it's c:\BoostRelease. b) Then I use svn switch to switch the serialization directories to the trunk. For the serialization library this is three directories: c:\BoostRelease\boost\archive, c:\BoostRelease\boost\serialization, c:\BoostRelease\libs\serialization.
Yes, this is what I now do too, following your instructions: http://mateusz.loskot.net/?p=2887
c) if necessary, one can build using boost build v2 - I forget the procedure. But basically boost build is indifferent to whether it's the release or trunk.
When I have boost build v2 installed from trunk and I try to use it to test (any) libraries from branches/release, then I'm getting this:
d:\dev\boost\_svn\branches\release\libs\serialization\test>b2
D:/dev/boost/_svn/branches/release/tools/build/v2/build\project.jam:266: unbalanced parentheses
So, I have to reinstall boost build v2 from branches/release, and then everything works.
I'm sure there are other ways to do this but that's how I do it. As I said before boost build and other tools work the same regardless of the branch
Perhaps there is a temporary issue in the trunk, or it is simply my environment's fault. Thanks for all your comments Robert, very helpful. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

AMDG On 01/06/2012 06:03 PM, Mateusz Loskot wrote:
c) if necessary, one can build using boost build v2 - I forget the procedure. But basically boost build is indifferent to whether it's the release or trunk.
When I have boost build v2 installed from trunk and I try to use it to test (any) libraries from branches/release, then I'm getting this:
d:\dev\boost\_svn\branches\release\libs\serialization\test>b2 D:/dev/boost/_svn/branches/release/tools/build/v2/build\project.jam:266: unbalanced parentheses
So, I have to reinstall boost build v2 from branches/release, then everything works.
I'm sure there are other ways to do this but that's how I do it. As I said before boost build and other tools work the same regardless of the branch
Perhaps there is a temporary issue in the trunk, or it is simply my environment's fault.
I made the syntax checking in trunk b2 stricter, exposing a few bugs in Boost.Build. I'll downgrade this to a warning.
Thanks for all your comments Robert, very helpful.
Best regards,
In Christ, Steven Watanabe

Hi! On 06.01.12 00:06, Mateusz Loskot wrote:
Then I have to wipe out C:\usr, go to branches/release/tools/build/v2 and run the BBv2 installation again:
I don't even "install" BBv2. I just run the engine/build.sh or .bat to build the bjam executable. Running this anywhere inside boost will automatically find the boost-build.jam file on top level. This way it will find the current BBv2 files. When building my stuff against one or the other boost version I simply switch the BOOST_ROOT environment variable to the respective boost folder. The bjam executables may reside in the same path using a different name like "bjam_1_48". This should reduce the hassle of repeatedly installing BBv2. Frank
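On a Unix-like system, Frank's workflow might be sketched like this (all paths, and the bjam_1_48 name, are assumptions; he describes the approach in general terms only):

```shell
# Build the bjam engine in-tree; no BBv2 "install" step needed (paths assumed).
cd ~/src/boost-release/tools/build/v2/engine
./build.sh                        # leaves the executable in bin.<platform>/bjam
cp bin.*/bjam ~/bin/bjam_1_48     # keep one executable per Boost version

# Point builds at a particular tree by switching BOOST_ROOT.
export BOOST_ROOT=~/src/boost-release
cd ~/src/boost-release/libs/serialization/test
~/bin/bjam_1_48                   # picks up boost-build.jam at the tree's top level
```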

On 9 February 2012 19:32, Frank Birbacher <bloodymir.crap@gmx.net> wrote:
Am 06.01.12 00:06, schrieb Mateusz Loskot:
Then I have to wipe out C:\usr, go to branches/release/tools/build/v2 and run the BBv2 installation again:
I don't even "install" BBv2. I just run the engine/build.sh or .bat to build the bjam executable. Running this anywhere inside boost will automatically find the boost-build.jam file on top level. This way it will find the current BBv2 files.
When building my stuff against one or the other boost version I simply switch the BOOST_ROOT environment variable to the respective boost folder. The bjam executables may reside in the same path using a different name like "bjam_1_48". This should reduce the hassle of repeatedly installing BBv2.
Sounds like a good idea too. The only minor hassle is to make the bjam/b2 executables findable in the PATH, or one has to locate them using relative/absolute paths. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

The only minor hassle is to make the bjam/b2 executables findable in the PATH, or one has to locate them using relative/absolute paths.
If you use bash, then you could do as I do, which is set an alias in .bashrc that points "bjam" to the actual executable - that way when I rebuild bjam I don't need to copy or install it anywhere, because the alias already points to the build location. HTH, John.
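A sketch of the alias John describes, for a ~/.bashrc (the checkout path and the platform directory name are assumptions):

```shell
# Make "bjam" resolve to the in-tree build output, so a rebuilt bjam
# is picked up immediately with no copy/install step.
# The path (and bin.linuxx86_64) are assumed; the directory name varies by platform.
alias bjam='$HOME/src/boost-trunk/tools/build/v2/engine/bin.linuxx86_64/bjam'
```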

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
I would like to be able to have the possibility of "Not applicable" for configurations on which the test makes no sense.
The trunk version of Boost.Test has this ability (using the disable_if decorator)
I started using Boost.Test and I abandoned it because Boost.Test has been broken on cygwin since I don't remember which version
Boost.Test works fine with cygwin as far as I know. Boost.Build is broken (in Boost.Test unit tests at least) - I do not know the status of this.
and the report of individual tests doesn't appear in the regression tests. Of course I would use it if Boost.Test is supported on this platform and the regression test report takes care of individual tests.
I believe the report shows all the failures in all test cases. Gennadiy

Le 20/12/11 22:29, Gennadiy Rozental wrote:
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
I would like to be able to have the possibility of "Not applicable" for configurations on which the test makes no sense. The trunk version of Boost.Test has this ability (using the disable_if decorator). I was thinking of the report, not the execution, here.
I started using Boost.Test and I abandoned it because Boost.Test has been broken on cygwin since I don't remember which version. Boost.Test works fine with cygwin as far as I know. Boost.Build is broken (in Boost.Test unit tests at least) - I do not know the status of this. Sorry, but I cannot develop my own boost libraries with Boost.Test if the Jamfiles used by it don't work on cygwin, as I could not test them.
and the report of individual tests doesn't appear in the regression tests. Of course I would use it if Boost.Test is supported on this platform and the regression test report takes care of individual tests. I believe the report shows all the failures in all test cases.
Oh, I was not aware of this. Could you point me to a library that is using test suites and shows the results of each individual test on the regression test page? Best, Vicente

On 20 December 2011 17:12, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
On 20/12/11 14:39, Mateusz Łoskot wrote:
I'm trying to figure out if there is anything I could do to improve compile/link-time for tests of Boost.Geometry library.
In total, there are nearly 170 test programs in Boost.Geometry. Obviously, running the whole set of tests (b2 command issued in libs\geometry\test) is a time-consuming process.
Could you give some figures?
The whole session takes ~40 minutes to build and run ~110 tests on my laptop (Intel CPU P8600 + 8 GB RAM + 7200 RPM 750GB HDD) using Visual Studio 11 DP. I simply step into libs/geometry/test, type b2, hit enter and wait.
I'm wondering if there is any way to reorganise the tests to cut down the build (linking) time. My first idea is to decrease number of run entries (e.g. [ run test_feature_y.cpp ] ) in Jamfiles, by building related tests as single test suite program. Conceptually:
[ run test_feature_x.cpp test_feature_y.cpp ]
I guess you will need to reorganize your tests so this combination works, but it should surely reduce the time to run all the tests. One possible problem is that incremental build times could increase. The other is reporting: the combined run passes or fails globally (unless you use a transformation that gives results for specific tests).
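Concretely, the Jamfile entry from the original post might collapse into something like this (a sketch only: the target name is invented, and the duplicated test_main() definitions in the two sources would first have to be merged into a single driver for the combined program to link):

```
# features/Jamfile.v2 -- hypothetical: build several sources into one test program
test-suite test-features
    : [ run test_feature_x.cpp test_feature_y.cpp : : : : test_features ]
    ;
```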
Yes, Robert also pointed out this issue, which I agree with; it may be a problem.
Currently, there is no use of Boost.Test features for tests organisation like BOOST_AUTO_TEST_CASE BOOST_TEST_SUITE etc.
I wonder if use of any of the above would help in restructuring tests and decreasing number of programs to build.
Yes, BOOST_AUTO_TEST_CASE should help to combine several .cpp files into one executable.
Sounds good.
I presume that using Boost.Test to group tests in suites with test cases would improve the test report, providing details about the location of failure (which suite, which case, or even better).
I have never used it, but there is a possibility to get XML output with Boost.Test. I don't know if this output could be adapted to the input the regression tests are expecting.
It is something I'd like to learn about. I am not really happy with the current output. Currently, I only get whether a test program returned 1 or 0, but no information about which test case exactly failed. I understand it is because of how the tests are implemented now, but I'd like to improve it. So, /me asking for suggestions.
I'm looking for any piece of advice on the issues discussed above: - How to cut down build time of tests? - How to improve test output report (and keep it suitable for Boost regression testing framework)? - Is it advised to switch to use Boost.Test features to manage suites and test cases?
I started using Boost.Test and I abandoned it because Boost.Test has been broken on cygwin since I don't remember which version, and the report of individual tests doesn't appear in the regression tests. Of course I would use it if Boost.Test is supported on this platform and the regression test report takes care of individual tests.
I will consider it as another +1 vote for Boost.Test :) I don't use Cygwin myself, but I know it is important for many users/developers. Thank you! Best regards, -- Mateusz Loskot, http://mateusz.loskot.net
participants (10)
- Dave Abrahams
- Frank Birbacher
- Gennadiy Rozental
- John Maddock
- Mateusz Loskot
- Mateusz Łoskot
- Rene Rivera
- Robert Ramey
- Steven Watanabe
- Vicente J. Botet Escriba