
David Abrahams wrote:
on Tue Aug 14 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
David Abrahams wrote:
Can you give a brief summary of what QMTest actually does and how Boost might use it?
QMTest is a testing harness.
Meaning, a system for running tests and collecting their results?
Yes.
Its concepts are captured in Python base classes ('Test', 'Suite', 'Resource', 'Target', etc.), which are then subclassed to capture domain-specific details. (It is straightforward to customize QMTest by adding new test classes, for example.)
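For illustration, a domain-specific test class might look roughly like the following; this is a sketch from memory, so treat the module paths, field types and method names as approximations to be checked against the QMTest documentation:

    # Rough sketch of a custom QMTest test class (module paths and the
    # Run()/arguments conventions are from memory; verify against the docs).
    import subprocess

    from qm.fields import TextField
    from qm.test.test import Test

    class RunProgramTest(Test):
        """Run a program and fail if it exits with a non-zero status."""

        # Declared arguments become editable attributes of each test
        # instance in the test database.
        arguments = [
            TextField(name="program", title="Program to run"),
        ]

        def Run(self, context, result):
            status = subprocess.call([self.program])
            if status != 0:
                result.Fail("program exited with status %d" % status)

Individual tests are then just instances of such a class, stored in the test database with their argument values filled in.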
What are Resource and Target?
A resource is a prerequisite for a test: anything that has to be done in preparation, but that may also be shared by multiple tests (so you don't have to run the same setup procedure for each test).

A target is an execution context for a test. Besides the default serial target there are various target classes for parallel execution: multi-process, multi-thread, rsh/ssh-based, etc. Parallel execution aside, targets can also be used for multi-platform testing, i.e. where different target instances represent the different platforms on which the tests are to be performed.

QMTest guarantees that all resources bound to a test are set up prior to the test's execution, in the execution context in which that test is going to run. In the case of parallel targets a resource may thus be set up multiple times, as needed.
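To make the resource idea concrete, a shared-setup resource would look roughly like this (again a sketch; SetUp/CleanUp are the method names as I remember them, so double-check against the docs):

    # Rough sketch of a QMTest resource that performs a setup step once
    # per execution context and is shared by all dependent tests.
    import subprocess

    from qm.fields import TextField
    from qm.test.resource import Resource

    class BuildFixtureResource(Resource):
        """Build a helper library once, before any dependent test runs."""

        arguments = [
            TextField(name="build_dir", title="Directory containing the Makefile"),
        ]

        def SetUp(self, context, result):
            # Runs in the same execution context (target) as the tests
            # that depend on this resource, before any of them start.
            if subprocess.call(["make", "-C", self.build_dir]) != 0:
                result.Fail("fixture failed to build")

        def CleanUp(self, result):
            # Runs once the last dependent test in this context is done.
            subprocess.call(["make", "-C", self.build_dir, "clean"])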
QMTest's central concept is that of a 'test database'. A test database organizes tests. It lets users introspect tests (test types, test arguments, prerequisite resources, previous test results, expectations, etc.), as well as run them (everything or only specific sub-suites, by means of different 'target' implementations
I don't understand what you mean by "run them *by means of* 'target' implementations."
Sorry for expressing myself poorly. (And in fact I'm not sure why I mentioned targets at all in that sentence.) Since target classes provide the execution context, they are the ones that iterate over the queues of tests assigned to them. But that's getting quite a bit into implementation details...
either serially, or in parallel using multi-threading, multiple processes, or even multiple hosts).
Would QMTest be used to drive multi-host testing across the internet (i.e. at different testers' sites), or more likely just within local networks? If the former, how do its facilities for that compare with BuildBot?
QMTest would typically be used to drive individual 'test runs', presumably only over local networks, and can then be used to aggregate the results of such test runs into test reports. As such, it is complementary to the facilities offered by BuildBot.
Another important point is scalability: while some test suites are simple and small, we also deal with test suites that hold many thousands of tests (QMTest is used for some of the GCC test suites, for example). A test can mean running a single (local) executable, or it can require a compilation, an upload of the resulting executable to a target board
Target board?
Yes (though note that 'target' here is not the same term as used above). In this context it refers to cross-compilation and cross-testing.
with subsequent remote execution, or other, even fancier things.
Test results are written to 'result streams' (which, like most of QMTest, can be customized). There is a 'report' command that merges the results from multiple test runs into a single test report (XML), which can then be translated into whatever output medium is desired.
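As an example of that customization, a result stream is just another extension class; something along these lines (base class and method names as I recall them, so treat them as assumptions) could emit results in whatever form Boost's report generation wants to consume:

    # Rough sketch of a custom result stream that writes one line per
    # test outcome as results arrive.
    import sys

    from qm.test.result_stream import ResultStream

    class OneLinePerTestStream(ResultStream):
        """Emit 'outcome test-id' lines as results arrive."""

        def WriteResult(self, result):
            sys.stdout.write("%s %s\n" % (result.GetOutcome(), result.GetId()))

        def Summarize(self):
            # Nothing to aggregate here; the 'report' command can still
            # merge the regular result files from several runs.
            pass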
How could this be useful for Boost?
A good question, but I'm more interested in "how Boost might use it." That is, something like, "We'd set up a server with a test database. QMTest would run on the server and drive testing on each tester's machine, ..." etc.
I found that Boost's testing harness lacks robustness.
Our testing system itself seems to be pretty reliable. I think it's the reporting system that lacks robustness.
I agree.
There is no way to ask seemingly simple questions such as "what tests constitute this test suite?" or "what revision / date / runtime environment etc. does this result correspond to?", making it hard to assess the overall performance / quality of the software.
I believe the hardest part is the connection between QMTest and Boost.Build. Since Boost.Build doesn't provide the level of introspection QMTest promises, a custom 'Boost.Build test database' implementation needs some special hooks from the build system. I discussed that quite a bit with Vladimir.
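To give a feel for the kind of introspection such a database needs, here is a standalone sketch that only scans the directory layout; it is not wired into either QMTest or Boost.Build, and the Jamfile parsing is a naive placeholder, which is exactly why proper hooks from the build system would be preferable:

    # Standalone sketch: enumerate candidate tests from Boost's directory
    # layout, roughly what a 'Boost.Build test database' for QMTest would
    # have to do.  The regular expression is a naive placeholder; real
    # hooks from Boost.Build would be needed to see the actual test rules.
    import os
    import re

    TEST_RULE = re.compile(r"^\s*(run|run-fail|compile|compile-fail|link)\s+(\S+)",
                           re.MULTILINE)

    def find_tests(root):
        """Yield (directory, rule, source) for each test rule in a Jamfile."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in ("Jamfile.v2", "Jamfile"):
                if name in filenames:
                    text = open(os.path.join(dirpath, name)).read()
                    for rule, source in TEST_RULE.findall(text):
                        yield dirpath, rule, source

    if __name__ == "__main__":
        for dirpath, rule, source in find_tests("libs"):
            print("%s: %s %s" % (dirpath, rule, source))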
And what came of it?
I'm not sure. Boost.Build would need to be extended to allow QMTest to gain access to the database structure (the database already exists, conceptually, in terms of the directory layout...). Volodya?

Regards,
Stefan