Google SOC 2007, Boost.Test Frontend

Hi,

I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others. I've emailed Thorsten Ottosen and he suggested I try here, so here I am.

Along with the idea of having the frontend running in the systray and automatically detecting, capturing, and reporting on any unit tests that are run (as suggested on the Ideas page), I think being shown a list of failures and being able to jump to one such failure would be very handy. By "jump to" I mean having the source file open up in an editor and possibly even having the editor scroll to/highlight the line in question, or at the very least using a source-view widget in the frontend itself that could perform that function.

If anyone has any other suggestions on functionality, information to include in generated reports, user interface, aesthetics, coupling between unit tests and the frontend, coupling between the frontend and editors, or on any other topic, I would love to hear it.

Thank you for your time, Younes

This could be handy. Boost.Test does integrate rather nicely with both Visual Studio and Xcode (it lets you jump to the source code where the error occurred), so I'd expect such a front end to do more than just let me jump to a spot in source code. Also, if you build a Windows application, keep in mind that most developers who write C++ on Windows use Visual Studio; a Visual Studio addon might be more appropriate for them.

If I were doing this, I might try something like the following. Build a command line application that can:

1 - Run the unit tests that result from compiling a set of source files (equivalent to a [ run ../src/$(CPP).cpp ] in a Jamfile). I may not want to run all of my unit tests, and just want an interface to run those from one file.

2 - Automatically add test cases.

3 - Run in a loop where it repeatedly compiles and runs any unit tests whose source files have changed (I could have this open in a command window as I code, and see failing unit tests soon after I save my faulty code). A rough sketch of such a loop is appended after the quoted message below.

Then build a Visual Studio addon that makes it possible to use this command line application from inside VS. It would parse the information, display it to the user in useful ways, and allow the user to go straight to where tests failed, or run just a single file's unit tests.

Honestly though, this would probably not be enough functionality for me to feel like it was ready to release to the Boost community. But if you can think of some compelling things to add, it could become more useful.

On 3/21/07, Younes M <younes.m@gmail.com> wrote:
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
I've emailed Thorsten Ottosen and he suggested I try here, so here I am.
Along with the idea of having the frontend running in the systray and automatically detecting, capturing, and reporting on any unit tests that are run (as suggested on the Ideas page), I think being shown a list of failures and being able to jump to one such failure would be very handy. By "jump to" I mean having the source file open up in an editor and possibly even having the editor scroll to/highlight the line in question, or at the very least using a source-view widget in the frontend itself that could perform that function.
If anyone has any other suggestions on functionality, information to include in generated reports, user interface, aesthetics, coupling between unit tests and the frontend, coupling between the frontend and editors, or on any other topic, I would love to hear it.
Thank you for your time, Younes
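As promised above, a rough, untested sketch of such a watch-and-rerun loop. The file name, build command, and test binary are placeholders, and std::filesystem is used purely for brevity; Boost.Filesystem would do the same job:

// Poll one source file's timestamp; when it changes, rebuild and rerun its tests.
#include <chrono>
#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <string>
#include <thread>

namespace fs = std::filesystem;

int main()
{
    // Placeholders: one source file, its build command, and its test binary.
    const fs::path    source  = "../src/my_tests.cpp";
    const std::string rebuild = "bjam my_tests";
    const std::string run     = "./my_tests";

    fs::file_time_type last = fs::last_write_time(source);
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));

        const fs::file_time_type now = fs::last_write_time(source);
        if (now == last)
            continue;                              // nothing changed, keep polling
        last = now;

        std::cout << "change detected, rebuilding...\n";
        if (std::system(rebuild.c_str()) == 0)     // recompile the unit tests
            std::system(run.c_str());              // rerun them; failures show up here
    }
}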

On 3/21/07, Younes M <younes.m@gmail.com> wrote:
Hi,
If anyone has any other suggestions on functionality, information to include in generated reports, user interface, aesthetics, coupling between unit tests and the frontend, coupling between the frontend and editors, or on any other topic, I would love to hear it.
A front end for running specific tests would be the bee's knees. Even if it's CLI driven, I don't care so long as it's in Boost. Take a look at http://www.nunit.org/docs/2.4/img/gui-screenshot.jpg for an example of what one might look like. There are a few posts (I have one printed out somewhere!) that talk about how you can run specific tests. Sorry for responding to the wrong post, couldn't find yours.

"Jeremy Pack" <rostovpack@gmail.com> wrote in message news:860337cf0703211034u79293ee6v7b39e4254cfc4136@mail.gmail.com...
This could be handy. Boost.Test does integrate rather nicely with both Visual Studio and Xcode though (it lets you jump to the source code where the error occurred) - so I'd expect such a front end to do more than just let me jump to a spot in source code.
We may want to add different test log/output formats to meet different IDE expectations.
Also, if you built a Windows application, I believe most developers who write C++ in Windows use Visual Studio. A Visual Studio addon might be more appropriate for them.
Yes. That's what I thought it should look like.
If I were doing this, I might try something like the following:
Build a command line application that can:
1 - Run the unit tests that result from compiling a set of source files (equivalent to a [ run ../src/$(CPP).cpp ] in a Jamfile). I may not want to run all of my unit tests, and just want an interface to run those from one file.
1.34.0 includes the boost_test_runner tool. It can be used to run test modules built as DLLs. Test case selection support is planned for the next release.
2 - Automatically add test cases.
Test suites, fixtures, etc. as well.
3 - Run in a loop where it repeatedly compiles and runs any unit tests whose source files have changed (I could have this open in a command window as I code, and see failing unit tests soon after I save my faulty code).
And then build a Visual Studio addon that makes it possible to use this command line application from inside VS. It would parse the information, and display it to the user in useful ways, and allow the user to go straight to where tests failed, or just run a single file's unit tests.
I am not sure that a command-line-based addon is the best approach. I would rather do a separate test runner as an addon. Gennadiy

"Younes M" <younes.m@gmail.com> wrote in message news:586c2acd0703210939t47c729b1oea1e8aa350b03780@mail.gmail.com...
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
I've emailed Thorsten Ottosen and he suggested I try here, so here I am.
Along with the idea of having the frontend running in the systray and automatically detecting, capturing, and reporting on any unit tests that are run (as suggested on the Ideas page), I think being shown a list of failures and being able to jump to one such failure would be very handy. By "jump to" I mean having the source file open up in an editor and possibly even having the editor scroll to/highlight the line in question, or at the very least using a source-view widget in the frontend itself that could perform that function.
If anyone has any other suggestions on functionality, information to include in generated reports, user interface, aesthetics, coupling between unit tests and the frontend, coupling between the frontend and editors, or on any other topic, I would love to hear it.
Thank you for your time, Younes
It would be really great if someone could pursue this. In 1.34.0 I've made several changes that are intended to allow smooth integration with GUI-based test runners. I've got my share of ideas about what features it should implement. In any case you will have my full support and/or I could serve as a Boost mentor. Regards, Gennadiy

Younes M wrote:
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
To me functionality similar to http://cruisecontrol.sourceforge.net/ comes to mind. I don't know if this counts as a *front-end* to Boost.Test in this context, but I would give it a look.

Basically the CruiseControl software detects events such as commits in the revision control system, upon which build and test runs are launched. After completion, reports are generated and notifications are given to subscribing developers. The connection to Boost.Test is in the reporting and test launch part I guess. I guess signaling to multiple build/test hosts could be part of it or a future extension. Various SCMs such as cvs, subversion, perforce, clear-case, sccs, rcs, etc. should be supported by future plug-ins; for now a single plug-in for cvs, subversion, or whatever will do :-)

It may be smart to consider splitting this into more than one project to keep the scope creep out.

-- Bjørn

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Bjørn Roald
Sent: Wednesday, March 21, 2007 1:03 PM
To: boost@lists.boost.org
Subject: Re: [boost] Google SOC 2007, Boost.Test Frontend
Younes M wrote:
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
To me functionality similar to http://cruisecontrol.sourceforge.net/ comes to mind. I don't know if this counts as *front-end* to Boost.Test in this context, but I would give it a look.
You don't want to use cruisecontrol. You want buildbot. That is all. :-)

Bjørn Roald wrote:
Younes M wrote:
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
To me functionality similar to http://cruisecontrol.sourceforge.net/ comes to mind. I don't know if this counts as *front-end* to Boost.Test in this context, but I would give it a look.
Basically the CruiseControl software detects events such as commits in the revision control system, upon which build and test runs are launched. After completion, reports are generated and notifications are given to subscribing developers. The connection to Boost.Test is in the reporting and test launch part I guess. I guess signaling to multiple build/test hosts could be part of it or a future extension. Various SCMs such as cvs, subversion, perforce, clear-case, sccs, rcs, etc. should be supported by future plug-ins; for now a single plug-in for cvs, subversion, or whatever will do :-)
Isn't that exactly what buildbot does (http://buildbot.sf.net)? Rene Rivera already did some work a long time ago to provide a buildbot harness for boost.org, which, I agree, would be very useful to lift into an official part of the boost infrastructure, especially in support of the regression testing harness.
It may be smart to consider splitting this into more than one project to keep the scope creep out.
Heh, you are the one deviating from the original topic, and now you warn about feature creep. :-) Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Bjørn Roald wrote:
Younes M wrote:
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
To me functionality similar to http://cruisecontrol.sourceforge.net/ comes to mind. I don't know if this counts as *front-end* to Boost.Test in this context, but I would give it a look.
Basically the CruiseControl software detects events such as commits in the revision control system, upon which build and test runs are launched. After completion, reports are generated and notifications are given to subscribing developers. The connection to Boost.Test is in the reporting and test launch part I guess. I guess signaling to multiple build/test hosts could be part of it or a future extension. Various SCMs such as cvs, subversion, perforce, clear-case, sccs, rcs, etc. should be supported by future plug-ins; for now a single plug-in for cvs, subversion, or whatever will do :-)
Isn't that exactly what buildbot does (http://buildbot.sf.net)? Rene Rivera already did some work a long time ago to provide a buildbot harness for boost.org, which, I agree, would be very useful to lift into an official part of the boost infrastructure, especially in support of the regression testing harness.
I did not know of buildbot. It looks like the same basic idea. The problem with cruisecontrol is that it likes the Ant config.xml files. This is only Ok if you use Ant :-( I will have a look at buildbot when time permits, thanks :-)
It may be smart to consider splitting this into more than one project to keep the scope creep out.
Heh, you are the one deviating from the original topic, and now you warn about feature creep. :-)
Yeh, that is me! -- Bjørn

Bjørn Roald wrote: [snip]
I did not know of buildbot. It looks like the same basic idea. The problem with cruisecontrol is that it likes the Ant config.xml files. This is only Ok if you use Ant :-(
No, there are exec tasks available both within CC and Ant. I'm using CC together with BBv2 and Boost.Test. / Johan

"Bjørn Roald" <bjorn@4roald.org> wrote in message news:46018F7F.7030109@4roald.org...
Younes M wrote:
Hi,
I recently came across a suggestion for a Boost.Test frontend on the Google Summer of Code 2007 front page. As someone who uses Boost.Test a fair bit I think I would be very interested in pursuing this. I have some early ideas about what I would like to see in a frontend, but would also love to get some ideas from others.
To me functionality similar to http://cruisecontrol.sourceforge.net/ comes to mind. I don't know if this counts as *front-end* to Boost.Test in this context, but I would give it a look.
Basically the CruiseControl software detects events such as commits in the revision control system, upon which build and test runs are launched. After completion, reports are generated and notifications are given to subscribing developers. The connection to Boost.Test is in the reporting and test launch part I guess. I guess signaling to multiple build/test hosts could be part of it or a future extension. Various SCMs such as cvs, subversion, perforce, clear-case, sccs, rcs, etc. should be supported by future plug-ins; for now a single plug-in for cvs, subversion, or whatever will do :-)
It may be smart to consider splitting this into more than one project to keep the scope creep out.
While it may be a very worthwhile idea, it has very little, if anything, to do with Boost.Test. As part of this project you may be required to implement some external test runner "script" that executes Boost.Test based unit tests and collects and interprets the results. But this is also true for unit tests based on any other UTF or none at all. IMO a Boost.Test front-end is a GUI based test runner, either standalone or implemented as an add-on (or both). What features this GUI needs to present is still open for discussion. But IMO we should strive for some reasonable, modest and most importantly achievable scope for this project. Regards, Gennadiy

I'm not sure if this has been mentioned before, but here it is. I build the serialization library with the Windows VC 7.1 IDE. I have a large "solution" which contains one project for each of the fifty tests I run. Also there are two projects to build the libraries themselves and 5 projects to run the demos included in the package. I've set up the "project configurations" to select the combination of tests I want to build/run. These combinations are things like release/text archive, etc. So whenever I make a change, I just rebuild the solution; all dependents are rebuilt and their tests are run. I get a long list of results in the output window. I implemented this in conformance with a suggestion from the Boost.Test documentation. It was a huge pain in the a** to set up, but it seems worth it now.

I'm not sure what the motivation for the GUI front end is and what form it will take, but from my standpoint it would be nice to automatically set up the project hierarchy and variations. This could be done by automatically creating project files, since they are just XML. But it could also be done in a more "Microsoft compatible" GUI way (which would also be quite slick looking and feeling) by creating a Boost add-in to the VC 7/8? system which uses the DevStudio object model. So it would be very cool to have a Boost add-in to DevStudio which would have a Boost application wizard, etc. I realize that it's not cross-platform, but I'm sort of doubtful that any GUI is going to be really cross-platform.

Just my 2 cents. Robert Ramey

Younes M wrote:
If anyone has any other suggestions on functionality, information to include in generated reports, user interface, aesthetics, coupling between unit tests and the frontend, coupling between the frontend and editors, or on any other topic, I would love to hear it.
There are a number of things I can think of that would make running boost tests more useful and convenient. Among them:

* An easy way to introspect the test suite, i.e. display all tests, with metadata.
* An easy way to run subsets of tests.
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
* Support for running tests in parallel, or even on distributed hardware, if available.
* Cross-testing support (i.e. compiling with a cross-compiler toolchain on host A for target B, then uploading the test executables to B and running them there).

As it happens, I'm the maintainer of QMTest (http://www.codesourcery.com/qmtest), which is a (Free) tool that addresses all of the above. I would be thrilled to help look into ways to use QMTest to drive boost test runs (and in fact, I have been talking with Vladimir for quite a while to discuss ways to achieve this). In any case, I'm very much interested in enhancing boost's testing harness, and would be more than happy to share my own experience with anybody interested.

Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

First, thank you all for the comments.

On 3/21/07, Stefan Seefeld <seefeld@sympatico.ca> wrote:
There are a number of things I can think of that would make running boost tests more useful and convenient. Among them:
* An easy way to introspect the test suite, i.e. display all tests, with metadata.
I agree. I think a big benefit in a GUI frontend will be to display results in a more digestible manner than the current text output allows. For example:

* We can display results chronologically along with any checkpoint and general purpose messages the developer has used, or we can group results by source file, test case, or type of failure.
* We can re-run individual test cases or groups of tests, as opposed to the entire unit.
* We can display each group/test case/individual test in a widget that can expand or contract to display more or less information as required. The widget would allow the developer to click on it to facilitate jumping directly to the source file and line the failure occurred at (if applicable).
* We can provide statistics on the number of passes and failures, and the number of failures per source file, test case, or type of failure.
* We can keep a history of reports per unit and provide statistics across reports, to allow us to better gauge progress.
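As a rough illustration of the kind of data the frontend could keep internally to back the per-file statistics idea (this is purely hypothetical glue code, not a Boost.Test interface):

// Hypothetical front-end data model: one record per failed assertion,
// plus a helper that groups failures by source file for per-file statistics.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct failure_record          // would be filled by parsing a test run's output
{
    std::string test_case;
    std::string file;
    int         line;
    std::string message;
};

std::map<std::string, int> failures_per_file(const std::vector<failure_record>& failures)
{
    std::map<std::string, int> counts;
    for (const failure_record& f : failures)   // C++11 range-for, for brevity
        ++counts[f.file];
    return counts;
}

int main()
{
    const std::vector<failure_record> failures = {
        { "vector_test", "vector_test.cpp", 42, "check a == b failed" },
        { "string_test", "string_test.cpp", 17, "unexpected exception" },
        { "vector_test", "vector_test.cpp", 58, "check v.size() == 3 failed" },
    };
    for (const auto& entry : failures_per_file(failures))
        std::cout << entry.first << ": " << entry.second << " failure(s)\n";
}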
* An easy way to run subsets of tests.
On the current Open Issues page selectively running test cases by name is mentioned, which I think fits into this. I've included it in my list of running ideas above.
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
One issue I foresee is in synchronizing the unit test with the GUI. As it stands, I had only considered using the output of Boost.Test to generate reports, but this means that the reports can get stale until you re-run. This also complicates the statistics across reports idea I had above, since it makes little sense to consider such a thing when the unit test changes considerably. I think your idea, if I've understood it correctly, of taking into account revisions of the test suite would work towards solving such issues. I'm not sure how to go about detecting/delineating revisions however, but I'll give it some thought.
* Support for running tests in parallel, or even on distributed hardware, if available.
* Cross-testing support (i.e. compiling with a cross-compiler toolchain on host A for target B, then uploading the test executables to B and run it there)
I must admit that I don't usually find myself cross-compiling or running on distributed hardware, so I might not appreciate some of the issues and requirements involved.

Anyhow, with respect to some of the other replies, Gennadiy mentioned that he's included much more support for external test runners, so I'll have a look at 1.34.0. Up until now I was considering a standalone tool that was cross-platform, but I have no issues with looking into MS Visual Studio integration. If I recall correctly MSVS addins can be built with C++ using ATL or with C++/C# using the .NET Framework. If that's the case then I think I would prefer to do this in C#, given that Mono is a viable option, even when using WinForms, while ATL is Windows only. This would probably make it easier to produce both a standalone application and an MSVS addin that share the bulk of the implementation. Given the schedule constraints of the GSOC program, it might be the case that I would only begin on one of these avenues, however.

Younes M wrote:
First, thank you all for the comments.
On 3/21/07, Stefan Seefeld <seefeld@sympatico.ca> wrote:
There are a number of things I can think of that would make running boost tests more useful and convenient. Among them:
* An easy way to introspect the test suite, i.e. display all tests, with metadata.
I agree. I think a big benefit in a GUI frontend will be to display results in a more digestible manner than the current text output allows. For example:
Actually, I don't think the issue here is GUI vs. CLI. Instead, it's about how robust and scalable the testing harness is. Think of it as a multi-tier design, where the UI is just a simple 'frontend' layer. Some layer underneath provides an API that lets you query what tests exist (matching some suitable criteria, such as name pattern matching, or filtering per annotations), together with metadata. That, together with other queries such as 'give me all platforms this test is expected to fail on', would be very valuable for the release process. (All this querying doesn't involve actually running any tests.)
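To sketch what I mean by that middle layer (all names below are made up for illustration; none of this is an existing Boost.Test interface):

// Hypothetical query layer between the harness and any UI: a GUI, a CLI,
// or a release script would all talk to this, and only its implementation
// would need to know about Boost.Test itself.
#include <string>
#include <vector>

struct test_descriptor
{
    std::string full_name;                 // e.g. "suite/subsuite/case"
    std::string source_file;
    std::vector<std::string> annotations;  // e.g. "expected-fail:win32"
};

class test_catalog
{
public:
    virtual ~test_catalog() {}

    // Queries only -- nothing here runs a test.
    virtual std::vector<test_descriptor>
    find(const std::string& name_pattern) const = 0;

    virtual std::vector<test_descriptor>
    with_annotation(const std::string& annotation) const = 0;
};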
* An easy way to run subsets of tests.
On the current Open Issues page selectively running test cases by name is mentioned, which I think fits into this. I've included it in my list of running ideas above.
Right, but it may also include sub-suites. To push a little further (and deviate from the topic only a little bit), it would be good to parametrize sub-testsuites differently. For example, the boost.python tests may be run against different python versions, while that particular parameter is entirely meaningless for, say, boost.filesystem.
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
One issue I foresee is in synchronizing the unit test with the GUI. As it stands, I had only considered using the output of Boost.Test to generate reports, but this means that the reports can get stale until you re-run. This also complicates the statistics across reports idea I had above, since it makes little sense to consider such a thing when the unit test changes considerably.
I'm still not thinking of the GUI as something with an internal state. It's just a frontend to the rest of the harness, and thus, there isn't anything to synchronize. The only thing that has a timestamp (or revision) is the code, as well as a particular test run.
I think your idea, if I've understood it correctly, of taking into account revisions of the test suite would work towards solving such issues. I'm not sure how to go about detecting/delineating revisions however, but I'll give it some thought.
Once boost is hosted on subversion, each checked-out source tree has a single revision. Thus, you can label a test run with such a revision and can immediately see whether it corresponds to a particular working copy or not.
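For example, a test runner could record that revision roughly like this (a sketch that assumes a subversion working copy and shells out to svn info; popen/pclose are POSIX, _popen/_pclose on Windows):

// Label a test run with the working copy revision reported by `svn info`.
#include <cstdio>
#include <iostream>

// Returns the revision number, or -1 if it could not be determined.
int working_copy_revision()
{
    FILE* pipe = popen("svn info", "r");
    if (!pipe)
        return -1;

    char line[256];
    int revision = -1;
    while (std::fgets(line, sizeof line, pipe))
    {
        if (std::sscanf(line, "Revision: %d", &revision) == 1)
            break;
    }
    pclose(pipe);
    return revision;
}

int main()
{
    // A front end could store this alongside the report it generates.
    std::cout << "test run against revision " << working_copy_revision() << '\n';
}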
* Support for running tests in parallel, or even on distributed hardware, if available.
* Cross-testing support (i.e. compiling with a cross-compiler toolchain on host A for target B, then uploading the test executables to B and run it there)
I must admit that I don't usually find myself cross-compiling or running on distributed hardware, so I might not appreciate some of the issues and requirements involved.
Distributed hardware here simply means that you may have got a compile farm or other means to parallelize a test run, yielding significant speedups (some boost testers report that a non-incremental run of the boost tests takes *days* to complete !) Cross-testing is useful for example when developing for embedded platforms, where the machine hosting the development environment isn't the one running the compiled applications (and thus, tests). We are using QMTest in-house to cross-test GCC and various libs, such as libstdc++, for numerous platforms. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4601CC89.2030506@sympatico.ca...
Younes M wrote:
First, thank you all for the comments.
On 3/21/07, Stefan Seefeld <seefeld@sympatico.ca> wrote:
There are a number of things I can think of that would make running boost tests more useful and convenient. Among them:
* An easy way to introspect the test suite, i.e. display all tests, with metadata.
I agree. I think a big benefit in a GUI frontend will be to display results in a more digestible manner than the current text output allows. For example:
Actually, I don't think the issue here is GUI vs. CLI. Instead, it's about how robust and scalable the testing harness is. Think of it as a multi-tier design, where the UI is just a simple 'frontend' layer. Some layer underneath provides an API that lets you query about what tests exist (matching some suitable criteria, such as name pattern matching, or filtering per annotations), together with metadata.
Boost.Test UTF itself provides this information already (using the test tree traversal interfaces). We may consider adding some simpler, straight to the point interfaces, but even now you could get all you need.
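For illustration, enumerating the registered test units with those interfaces looks roughly like this (a sketch from memory; the exact names should be checked against the 1.34 headers, and the module must be linked against the UTF library):

// Sketch: list every registered suite and test case via the test tree
// traversal interfaces, wedged into a test case so the module is self-contained.
#define BOOST_TEST_MODULE introspection_sketch
#include <boost/test/unit_test.hpp>
#include <iostream>

namespace ut = boost::unit_test;

// Visitor that prints each suite and the test cases nested inside it.
struct name_collector : ut::test_tree_visitor
{
    void visit( ut::test_case const& tc )
    {
        std::cout << "  test case: " << tc.p_name.get() << '\n';
    }
    bool test_suite_start( ut::test_suite const& ts )
    {
        std::cout << "suite: " << ts.p_name.get() << '\n';
        return true;                 // descend into this suite
    }
};

BOOST_AUTO_TEST_CASE( some_real_test ) { BOOST_CHECK( 1 + 1 == 2 ); }

// A front end would do this traversal before running anything; here it
// simply runs as the second registered test case.
BOOST_AUTO_TEST_CASE( dump_test_tree )
{
    name_collector v;
    ut::traverse_test_tree( ut::framework::master_test_suite(), v );
}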
That, together with other queries such as 'give me all platforms this test is expected to fail on' would be very valuable for the release process.
Umm. I am not sure how you plan to maintain this information.
(All this querying doesn't involve actually running any tests.)
* An easy way to run subsets of tests.
On the current Open Issues page selectively running test cases by name is mentioned, which I think fits into this. I've included it in my list of running ideas above.
Right, but it may also include sub-suites. To push a little further (and deviate from the topic only a little bit), it would be good to parametrize sub-testsuites differently. For example, the boost.python tests may be run against different python versions, while that particular parameter is entirely meaningless for, say, boost.filesystem.
What do you mean by "parametrize"?
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
One issue I foresee is in synchronizing the unit test with the GUI. As it stands, I had only considered using the output of Boost.Test to generate reports, but this means that the reports can get stale until you re-run. This also complicates the statistics across reports idea I had above, since it makes little sense to consider such a thing when the unit test changes considerably.
I'm still not thinking of the GUI as something with an internal state. It's just a frontend to the rest of the harness, and thus, there isn't anything to synchronize. The only thing that has a timestamp (or revision) is the code, as well as a particular test run.
Yes. That's the correct way to look at this. The test runner should employ existing Boost.Test interfaces to implement its tasks.
I think your idea, if I've understood it correctly, of taking into account revisions of the test suite would work towards solving such issues. I'm not sure how to go about detecting/delineating revisions however, but I'll give it some thought.
Once boost is hosted on subversion, each checked-out source tree has a single revision. Thus, you can label a test run with such a revision and can immediately see whether it corresponds to a particular working copy or not.
As I argued in another post, IMO this is not part of this project. At least not in a first draft. It may be added later on as an addon. Gennadiy

Gennadiy Rozental wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message
Actually, I don't think the issue here is GUI vs. CLI. Instead, it's about how robust and scalable the testing harness is. Think of it as a multi-tier design, where the UI is just a simple 'frontend' layer. Some layer underneath provides an API that lets you query about what tests exist (matching some suitable criteria, such as name pattern matching, or filtering per annotations), together with metadata.
Boost.Test UTF itself provides this information already (using the test tree traversal interfaces). We may consider adding some simpler, straight to the point interfaces, but even now you could get all you need.
That, together with other queries such as 'give me all platforms this test is expected to fail on' would be very valuable for the release process.
Umm. I am not sure how do you plan to maintain this information.
I'm not sure what you are referring to as 'this information' here. The test cases are clearly already encoded in the file system, i.e. they can be found by traversing the code (and possibly scanning for certain tokens). Expectations are already encoded in some xml file. I'm not sure whether and how 'platforms' are described, but that could be some simple lookup table, too. These three together form a database that can be queried easily and should provide all the relevant information.
(All this querying doesn't involve actually running any tests.)
* An easy way to run subsets of tests.
On the current Open Issues page selectively running test cases by name is mentioned, which I think fits into this. I've included it in my list of running ideas above.
Right, but it may also include sub-suites. To push a little further (and deviate from the topic only a little bit), it would be good to parametrize sub-testsuites differently. For example, the boost.python tests may be run against different python versions, while that particular parameter is entirely meaningless for, say, boost.filesystem.
What do you mean by "parametrize"?
I realize that this, too, concerns more the boost.build system than the testing harness. However, the effect I describe is seen in the test report: each report lists test runs in columns. Each has a set of parameters, such as toolchain and platform, as well as some other environment parameters unaccounted for in the description. Some of these parameters, however, are only meaningful for a subset of the test run. For example, the python version is clearly only meaningful for boost.python, but not boost.regex. Thus, instead of only running full test suites with all parameter combinations (toolchain, platform, etc.), it seems more meaningful to modularize the whole and then parametrize the parts individually. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On 3/21/07, Stefan Seefeld <seefeld@sympatico.ca> wrote:
Actually, I don't think the issue here is GUI vs. CLI. Instead, it's about how robust and scalable the testing harness is. Think of it as a multi-tier design, where the UI is just a simple 'frontend' layer. Some layer underneath provides an API that lets you query about what tests exist (matching some suitable criteria, such as name pattern matching, or filtering per annotations), together with metadata.
That, together with other queries such as 'give me all platforms this test is expected to fail on' would be very valuable for the release process. (All this querying doesn't involve actually running any tests.)
Ah, I think I understand what you mean by introspection. What you're suggesting is for the frontend to parse the test suite source files and build a picture of the test suite that way, such that the developer can see the tests and associated info (e.g. type of test, which case it's located in, source file, etc.), view/sort/group them by criteria, operate on them (e.g. enable/disable/etc.), and then be able to run the test suite and have a separate view for the resulting report(s).

Younes M wrote:
On 3/21/07, Stefan Seefeld <seefeld@sympatico.ca> wrote:
Actually, I don't think the issue here is GUI vs. CLI. Instead, it's about how robust and scalable the testing harness is. Think of it as a multi-tier design, where the UI is just a simple 'frontend' layer. Some layer underneath provides an API that lets you query about what tests exist (matching some suitable criteria, such as name pattern matching, or filtering per annotations), together with metadata.
That, together with other queries such as 'give me all platforms this test is expected to fail on' would be very valuable for the release process. (All this querying doesn't involve actually running any tests.)
Ah, I think I understand what you mean by introspection. What you're suggesting is for the frontend to parse the test suite source files and build a picture of the test suite that way, such that the developer can see the tests and associated info (e.g. type of test, which case it's located in, source file, etc.), view/sort/group them by criteria, operate on them (e.g. enable/disable/etc.), and then be able to run the test suite and have a separate view for the resulting report(s).
Exactly. Only, I don't think this functionality belongs into the frontend, but the main testing harness. Its usefulness is most apparent through the frontend, as it allows users to search for any kind of metadata associated with the tests (or test suites), and triage on that (for example). Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:460194C2.2030000@sympatico.ca...
Younes M wrote:
If anyone has any other suggestions on functionality, information to include in generated reports, user interface, aesthetics, coupling between unit tests and the frontend, coupling between the frontend and editors, or on any other topic, I would love to hear it.
There are a number of things I can think of that would make running boost tests more useful and convenient. Among them:
* An easy way to introspect the test suite, i.e. display all tests, with metadata.
This will require changes in the library.
* An easy way to run subsets of tests.
For proper implementation this will require changes in the library. It's on my todo list anyway.
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
Umm, I am not sure I understood these.
* Support for running tests in parallel, or even on distributed hardware, if available.
I don't see this as a part of the Boost.Test GUI runner. It may be a different project. But this is more in the Boost.Build domain.
* Cross-testing support (i.e. compiling with a cross-compiler toolchain on host A for target B, then uploading the test executables to B and run it there)
The same as above. We need to strive for a clear-cut problem domain. And everything that feels like part of Boost.Build should not be considered here IMO (we could, but this will be a different project). Gennadiy

Gennadiy Rozental wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message
* Enhanced test run annotations, to allow report generation to build a more meaningful test report (e.g. fold multiple equivalent test runs into one, only consider test runs from a given time window, or associated with a given revision, etc.)
Umm, I am not sure I understood these.
Right now test reports have multiple runs for the same toolchain. Thus the absolute count of failures doesn't have a well established meaning. What I propose is a way to either a) eliminate all but one result for 'equivalent' test runs, or b) enhance the test run annotation to make clear that these test runs in fact differ (say, because some currently undocumented environment parameter differs).
* Support for running tests in parallel, or even on distributed hardware, if available.
I don't see this as a part of the Boost.Test GUI runner. It may be a different project. But this is more in the Boost.Build domain.
Possibly, yes.
* Cross-testing support (i.e. compiling with a cross-compiler toolchain on host A for target B, then uploading the test executables to B and run it there)
The same as above.
We need to strive for a clear-cut problem domain. And everything that feels like part of Boost.Build should not be considered here IMO (we could, but this will be a different project).
Agreed. The reasons I bring these points up are:

* to point out more things that should be enhanced in the testing harness
* to point out that a GUI frontend really is a frontend, and shouldn't include any other logic (i.e. the whole becomes a multi-tier system)

Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld said: (by the date of Thu, 22 Mar 2007 08:22:43 -0400)
* to point out that a GUI frontend really is a frontend, and shouldn't include any other logic (i.e. the whole becomes a multi-tier system)
Will the GUI used work at least on Linux and Windows? I'm not sure about Mac, but supporting Linux is usually enough to get it working on Mac. There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else? -- Janek Kozicki

Janek Kozicki said: (by the date of Thu, 22 Mar 2007 18:52:46 +0100)
There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else?
small update: the wxwidgets license is more free than the LGPL (it does not restrict commercial use in any way): http://wxwidgets.org/about/newlicen.htm

I didn't even bother to check what the license for WinAPI is, but perhaps an API from ReactOS would do? (in case you wanted to stick to a WinAPI compatible GUI toolkit).

If you asked me, I would vote for QT4, but that's only my personal opinion. -- Janek Kozicki

There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else?
Without paying for QT, it is definitely not compatible with the Boost license (the free version is GPL), but I don't believe there would be problems, since this is just going to be an add-on program for use with the Boost libraries, and not a library itself. The source code would just need to be included with it. There shouldn't be any issues with people using this front end for their proprietary, closed-source software. Correct me if I'm wrong.
small update: wxwidgets license is more free than LGPL (does not restrict commercial use anyhow)
http://wxwidgets.org/about/newlicen.htm
I don't even bother to check what is the license for WinAPI, but perhaps an API from ReactOS would do? (in case if you wanted to stick to WinAPI compatible GUI toolkit).
if you asked me, I would vote for QT4, but that's only my personal opinion.
I also vote for QT. I like the API more. Of course, you could always build a minimal web server (or write some CGI scripts and run from Apache) and do the interface in HTML and Javascript. Jeremy Pack

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeremy Pack
Sent: Thursday, March 22, 2007 1:35 PM
To: boost@lists.boost.org
Subject: Re: [boost] Boost.Test GUI: QT4, wxwidgets or what?
There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else?
Actually gtkmm is pretty darn good. And it doesn't require a preprocessor.

On 3/22/07, Janek Kozicki <janek_listy@wp.pl> wrote:
Stefan Seefeld said: (by the date of Thu, 22 Mar 2007 08:22:43 -0400)
* to point out that a GUI frontend really is a frontend, and shouldn't include any other logic (i.e. the whole becomes a multi-tier system)
Will the GUI used work at least on linux and windows?
I'm not sure about mac, but supporting linux usually is enough to get it working on mac.
There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else?
There was mention of having an MS Visual Studio addin. I've looked into addins a bit and it seems to me that there are two approaches (at least with MSVS2005): C++ with ATL, or a .NET language with WinForms. Maybe the most reasonable solution would be to use C# for the bulk of the frontend and simply have two targets, a standalone application and an MSVS addin, each having a relatively small bit of glue code. With Mono supporting WinForms very decently this could easily allow the standalone application to run on other platforms. I personally have a lot of experience with wxWidgets and GTK, and originally thought of a cross-platform standalone application, but I think the C#/WinForms combination might open up the most doors.

----- Original Message -----
From: "Younes M" <younes.m@gmail.com>
To: <boost@lists.boost.org>
Sent: Friday, March 23, 2007 10:39 AM
Subject: Re: [boost] Boost.Test GUI: QT4, wxwidgets or what?
On 3/22/07, Janek Kozicki <janek_listy@wp.pl> wrote:
Stefan Seefeld said: (by the date of Thu, 22 Mar 2007 08:22:43 -0400)
* to point out that a GUI frontend really is a frontend, and shouldn't include any other logic (i.e. the whole becomes a multi-tier system)
Will the GUI used work at least on linux and windows?
I'm not sure about mac, but supporting linux usually is enough to get it working on mac.
There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else?
There was mention of having a MS Visual Studio addin. I've looked into addins a bit and it seems to me that there are two approaches (at least with MSVS2005), C++ with ATL, or a .Net language with WinForms. Maybe the most reasonable solution would be to use C# for the bulk of the frontend and simply have two targets, a standalone application and an MSVS addin, each having a relatively small bit of glue code. With Mono supporting WinForms very decently this could easily allow the standalone application to run on other platforms. I personally have a lot of experience with wxWidgets and GTK, and originally thought of a cross-platform standalone application, but I think the C#/WinForms combination might open up the most doors.
I think all the options you have listed are quite high level interfaces. Would it make more sense to use a lower level API like Win32? Even though this may mean more work, I think it would provide a good foundation for putting together something simple to start with. Just my two centavos worth.

There is a sample GUI library, the Windows Vision Library (WVL), by Jinhao. He said that: "WVL is a C++ framework providing the GUI, it gives you a simple way for writing GUI applications. WVL is written in Standard C++, and it is designed to be portable over multiple platforms, so you can easily compile and distribute your applications on different Compiler/OS platforms." Unfortunately, he has no time to maintain WVL. Well, it's just a sample. More information at http://www.jinhao.org/

2007/3/23, Minh Phanivong <m_phanivong@yahoo.com.au>:
----- Original Message -----
From: "Younes M" <younes.m@gmail.com>
To: <boost@lists.boost.org>
Sent: Friday, March 23, 2007 10:39 AM
Subject: Re: [boost] Boost.Test GUI: QT4, wxwidgets or what?
On 3/22/07, Janek Kozicki <janek_listy@wp.pl> wrote:
Stefan Seefeld said: (by the date of Thu, 22 Mar 2007 08:22:43 -0400)
* to point out that a GUI frontend really is a frontend, and shouldn't include any other logic (i.e. the whole becomes a multi-tier system)
Will the GUI used work at least on linux and windows?
I'm not sure about mac, but supporting linux usually is enough to get it working on mac.
There are a few alternatives out there, like QT4 (GPL license) and wxwidgets (Free also), but maybe you want something else?
There was mention of having a MS Visual Studio addin. I've looked into addins a bit and it seems to me that there are two approaches (at least with MSVS2005), C++ with ATL, or a .Net language with WinForms. Maybe the most reasonable solution would be to use C# for the bulk of the frontend and simply have two targets, a standalone application and an MSVS addin, each having a relatively small bit of glue code. With Mono supporting WinForms very decently this could easily allow the standalone application to run on other platforms. I personally have a lot of experience with wxWidgets and GTK, and originally thought of a cross-platform standalone application, but I think the C#/WinForms combination might open up the most doors.
I think all the options you have listed are quite high level interfaces. Would it make more sense to use a lower level API like Win32? Even though this may mean more work, I think it would provide a good foundation for putting together something simple to start with.
Just my two centavos worth..

Hi,

Maybe something very simple but powerful and portable will do the job: http://cpptk.sourceforge.net/

Cheers -- Mateusz Loskot http://mateusz.loskot.net

Janek Kozicki wrote:
Stefan Seefeld said: (by the date of Thu, 22 Mar 2007 08:22:43 -0400)
* to point out that a GUI frontend really is a frontend, and shouldn't include any other logic (i.e. the whole becomes a multi-tier system)
Will the GUI used work at least on linux and windows?
Whatever will get chosen, I expect to be able to run tests without a GUI. Regards, m
participants (13)
- Ben Bear
- Bjørn Roald
- Gennadiy Rozental
- Janek Kozicki
- Jeremy Pack
- Johan Nilsson
- Martin Wille
- Mateusz Loskot
- Minh Phanivong
- Robert Ramey
- Sohail Somani
- Stefan Seefeld
- Younes M