
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4601CC89.2030506@sympatico.ca...
Younes M wrote:
First, thank you all for the comments.
On 3/21/07, Stefan Seefeld <seefeld@sympatico.ca> wrote:
There are a number of things I can think of that would make running boost tests more useful and convenient. Among them:
* An easy way to introspect the test suite, i.e. display all tests, with metadata.
I agree. I think a big benefit of a GUI frontend will be to display results in a more digestible manner than the current text output allows. For example:
Actually, I don't think the issue here is GUI vs. CLI. Instead, it's about how robust and scalable the testing harness is. Think of it as a multi-tier design, where the UI is just a simple 'frontend' layer. Some layer underneath provides an API that lets you query what tests exist (matching some suitable criteria, such as name pattern matching, or filtering by annotations), together with metadata.
Boost.Test UTF itself already provides this information (via the test tree traversal interfaces). We may consider adding some simpler, straight-to-the-point interfaces, but even now you can get everything you need.
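For illustration, a minimal sketch of such a query on top of the traversal interfaces might look like the following (treat it as a sketch: the exact header layout and the visitor/property signatures differ between Boost versions):

    // List every test case known to the framework by walking the test
    // tree with a custom visitor. <boost/test/unit_test.hpp> pulls in
    // most of the needed declarations, though the traverse/visitor
    // headers may need to be included separately on some versions.
    #include <boost/test/unit_test.hpp>
    #include <iostream>

    namespace ut = boost::unit_test;

    struct test_lister : ut::test_tree_visitor {
        // Called for every leaf test case encountered during traversal.
        void visit( ut::test_case const& tc )
        {
            std::cout << tc.p_name.get() << '\n';
        }
    };

    // Somewhere in the harness/frontend layer:
    void list_all_tests()
    {
        test_lister lister;
        ut::traverse_test_tree( ut::framework::master_test_suite(), lister );
    }

The same visitor approach can report any metadata the test units carry, not just their names.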
That, together with other queries such as 'give me all platforms this test is expected to fail on', would be very valuable for the release process.
Umm. I am not sure how you plan to maintain this information.
(All this querying doesn't involve actually running any tests.)
* An easy way to run subsets of tests.
The current Open Issues page mentions selectively running test cases by name, which I think fits into this. I've included it in my list of running ideas above.
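As a side note: if I read the UTF documentation correctly, the framework already accepts a name filter on its command line, so the frontend could simply forward the selected names to the test executable. The exact flag syntax may vary between Boost versions, and the names below are only placeholders:

    ./my_test --run_test=my_suite/my_case
    ./my_test --run_test="my_suite/*"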
Right, but it may also include sub-suites. To push a little further (and deviate from the topic only a little bit), it would be good to parametrize sub-testsuites differently. For example, the boost.python tests may be run against different python versions, while that particular parameter is entirely meaningless for, say, boost.filesystem.
What do you mean by "parametrize"?
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
One issue I foresee is synchronizing the unit test with the GUI. As it stands, I had only considered using the output of Boost.Test to generate reports, but this means the reports can go stale until you re-run. It also complicates the statistics-across-reports idea I had above, since comparing reports makes little sense once the unit test itself has changed considerably.
I'm still not thinking of the GUI as something with an internal state. It's just a frontend to the rest of the harness, and thus, there isn't anything to synchronize. The only thing that has a timestamp (or revision) is the code, as well as a particular test run.
Yes, that's the correct way to look at this. The test runner should employ existing Boost.Test interfaces to implement its tasks.
I think your idea, if I've understood it correctly, of taking revisions of the test suite into account would work towards solving such issues. I'm not sure how to go about detecting/delineating revisions, though; I'll give it some thought.
Once boost is hosted on subversion, each checked-out source tree has a single revision. Thus, you can label a test run with such a revision and can immediately see whether it corresponds to a particular working copy or not.
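As a rough illustration (assuming the stock Subversion client tools; the path and revision numbers below are made up), the harness could capture that revision at test-run time with something like

    svnversion /path/to/boost-working-copy   # prints e.g. "37623", or "37623M" for a locally modified tree

and store the label next to the run's results, so a report can later be matched against an exact source state.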
As I argued in another post, IMO this is not part of this project, at least not in a first draft. It may be added later on as an add-on.

Gennadiy