
"Reece Dunn" <msclrhd@hotmail.com> writes:
Jeff Garland wrote:
And that's today. Consider that over the next couple of months 3-4 new libraries are due to be added. The serialization tests alone dramatically increase the time needed to run the regression if we always run the full suite. What will happen in a year when we have, say, 10 new libraries?
Robert and I believe something will need to be done. We've tried to start a discussion, but no one responded:
http://lists.boost.org/MailArchives/boost/msg64471.php
http://lists.boost.org/MailArchives/boost/msg64491.php
I don't know how feasible this would be, but you might also want to track library dependencies, so that if you modify the iterator adaptors library you can pull a list of the tests that need to be re-run because their libraries use iterator adaptors. This would reduce the number of tests performed on each cycle, but you'd need some way of archiving the results and updating only the ones that are re-run.
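To make that concrete, here is a minimal Python sketch (the library names and the dependency map are invented for illustration): invert the dependency map once, then walk the reverse-dependency closure outward from the changed library.

    # Hypothetical sketch: given a map of library -> libraries it
    # depends on, find every library whose tests must be re-run
    # after a change to one library (the reverse-dependency closure).
    DEPENDS_ON = {
        "date_time": ["config", "smart_ptr"],
        "multi_array": ["config", "iterator"],
        "graph": ["config", "iterator", "smart_ptr"],
        "iterator": ["config"],
    }

    def tests_to_rerun(changed):
        # Invert the dependency map once.
        used_by = {}
        for lib, deps in DEPENDS_ON.items():
            for d in deps:
                used_by.setdefault(d, set()).add(lib)
        # Walk outward from the changed library.
        pending, affected = [changed], {changed}
        while pending:
            lib = pending.pop()
            for user in used_by.get(lib, ()):
                if user not in affected:
                    affected.add(user)
                    pending.append(user)
        return affected

    # A change to the iterator library forces the multi_array and
    # graph tests to re-run, but not date_time.
    print(tests_to_rerun("iterator"))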
It's much, *much* simpler to just change the test system back so that it doesn't re-run failing tests whose dependencies haven't changed. Once we've made that change, most test cycles will have nothing to do, and the rest will only test the stuff that might've changed.
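A rough sketch of that behaviour, assuming each test's cached result is keyed on a hash of its input files (the names and cache format here are hypothetical, not the actual test system's interface):

    # Hypothetical sketch: cache each test's result keyed by a hash
    # of its input files; re-run only when that hash changes, so an
    # unchanged failing test is not re-run every cycle.
    import hashlib, json, os

    CACHE = "test_results.json"

    def inputs_digest(paths):
        h = hashlib.sha1()
        for p in sorted(paths):
            with open(p, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def run_if_changed(name, input_paths, run):
        cache = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
        digest = inputs_digest(input_paths)
        entry = cache.get(name)
        if entry and entry["digest"] == digest:
            return entry["passed"]   # nothing to do this cycle
        passed = run()               # dependencies changed: re-test
        cache[name] = {"digest": digest, "passed": passed}
        with open(CACHE, "w") as f:
            json.dump(cache, f)
        return passed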
When testing a new platform/compiler/configuration, you can tell the test suite to run all the tests for that setup; for the others (where it has already been run) it will operate as above, unless an explicit request for the full set of tests is made. That would allow, say, the full set to be run weekly while the dependency-driven tests run daily or every 12 hours.
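Something like this hypothetical policy function captures that scheduling (the platform names and the choice of Sunday for the full sweep are made-up placeholders):

    # Hypothetical sketch of the scheduling policy described above:
    # new setups get a full run, known setups run incrementally,
    # and everything gets a full sweep on the weekly cycle.
    import datetime

    KNOWN_PLATFORMS = {"gcc-3.3-linux", "msvc-7.1-win32"}

    def run_mode(platform, force_full=False):
        if force_full:
            return "full"         # explicit request for all tests
        if platform not in KNOWN_PLATFORMS:
            return "full"         # no prior results for this setup
        if datetime.date.today().weekday() == 6:
            return "full"         # weekly full sweep (Sunday)
        return "incremental"      # dependency-driven tests only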
I like the idea of specifying the test level (basic, torture, concept).
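For illustration only, and assuming the levels are cumulative (a torture run includes the basic tests, and so on, which is my reading, not a settled design), selecting by level might look like:

    # Hypothetical sketch: tag each test with a level and filter the
    # run accordingly; "basic" for every cycle, "torture" and
    # "concept" reserved for heavier runs.
    TESTS = [
        ("shared_ptr_basic", "basic"),
        ("shared_ptr_torture", "torture"),
        ("iterator_concepts", "concept"),
    ]

    def select(level):
        order = ["basic", "torture", "concept"]
        allowed = order[: order.index(level) + 1]
        return [name for name, lvl in TESTS if lvl in allowed]

    print(select("basic"))    # just the quick checks
    print(select("concept"))  # the full set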
I am opposed to the idea of requiring humans to initiate the right tests, at least without proof that mechanically-initiated tests are unworkable. I don't think we've proven that yet.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com