
Misha Bergal <mbergal@meta-comm.com> writes:
David Abrahams <dave@boost-consulting.com> writes:
Misha Bergal <mbergal@meta-comm.com> writes:
Beman Dawes <bdawes@acm.org> writes:
I think we need a major upgrade to our testing infrastructure. I'd like to see a machine (perhaps running both Win XP and Linux under a virtual machine manager) constantly running Boost regression tests. The tests should be segmented into sets, including an "everything we've got" set, with some sets running more often than others. As previously discussed, one set should be a "quicky test" that runs very often, and to which developers can temporarily add a test they are concerned about.
It seems to me that a lot of time is taken by Boost.Build unnecessarily trying to execute tests which have been failing before, even though the files they depend on haven't changed.
It used to work the other way, but it caused confusion.
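For illustration, here is a minimal sketch of the check being discussed. This is not Boost.Build's actual logic; the helper name and the timestamp-based comparison are assumptions:

#include <filesystem>
#include <vector>

namespace fs = std::filesystem;

// A test needs to run again only if it has no recorded result yet, or if
// some file it depends on changed after the last recorded (failing) run.
bool needs_rerun(const fs::path& result_file,
                 const std::vector<fs::path>& dependencies)
{
    if (!fs::exists(result_file))
        return true;                           // never run before
    const auto last_run = fs::last_write_time(result_file);
    for (const auto& dep : dependencies)
        if (fs::last_write_time(dep) > last_run)
            return true;                       // a dependency changed
    return false;                              // unchanged: keep the old result
}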
If this is fixed, it would make sense to set up continuously running regression tests: a clean build once a day, and incremental updates for the rest of the day.
We could make it optional and use it only for the Bots.
Agreed. Do you have a rough estimate of what needs to be done to implement/restore it?
I think it would take a day or two of work on testing.jam.
There is also the problem that the type traits tests obfuscate their include files using macros, so some changes won't cause rebuilds.
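For example (a hypothetical reconstruction of the pattern, not the actual type traits source), a computed include hides the dependency from a scanner that only looks for literal #include lines; the macro name here is made up:

// A dependency scanner that greps for literal #include "..." or
// #include <...> lines will not see that this file depends on
// boost/type_traits/is_pod.hpp.
#define BOOST_TT_HEADER <boost/type_traits/is_pod.hpp>
#include BOOST_TT_HEADER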
There is also a similar issue with libraries that use the PP library. We can customize Boost.Build to be aware of the special inclusion macros if necessary.
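The PP library's file iteration has the same shape; this follows the documented Boost.Preprocessor usage pattern, though the iterated file name is made up:

// The iterated header is named only through macros, so a literal-#include
// scanner never sees "detail/my_template.hpp" as a dependency.
#include <boost/preprocessor/iteration/iterate.hpp>

#define BOOST_PP_ITERATION_LIMITS (1, 3)
#define BOOST_PP_FILENAME_1 "detail/my_template.hpp"
#include BOOST_PP_ITERATE()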
The dependency problems seem to be resolvable. So really what is needed is to:
1. Implement BuildBot.
2. Change Boost.Build to have an option of not rebuilding the failed tests.
3. Implement regression test requests for branch/lib/toolset.
Yep.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com