Enhancing regression tests summary
Hi,

Some time ago I wrote about enhancements to the regression summaries, saying it would be nice to see a more meaningful list of errors, etc. I asked questions and even prepared a PR for regression, to no avail.

So I developed a simple program that downloads summaries and test logs and saves the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer

Regards,
Adam
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Adam Wulkiewicz
Sent: 19 October 2014 16:40
To: boost@lists.boost.org
Subject: [boost] Enhancing regression tests summary
Some time ago I wrote about enhancements to the regression summaries, saying it would be nice to see a more meaningful list of errors, etc. I asked questions and even prepared a PR for regression, to no avail.
So I developed a simple program downloading summaries, test logs and saving the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer
Neat :-) I'd certainly find this very helpful when checking the Boost.Math test monster matrix.

Paul

---
Paul A. Bristow
Prizet Farmhouse, Kendal, UK LA8 8AB
+44 (0) 1539 561830
Paul A. Bristow wrote:
So I developed a simple program that downloads summaries and test logs and saves the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer

Neat :-)

I'd certainly find this very helpful when checking the Boost.Math test monster matrix.
Yes, it's big, and various kinds of errors can be found there.

FYI, it has some new features. It can keep a log of the tests previously detected as failing and generate a report about new failures.

It's also possible to send an email containing this report (to many recipients). It uses SMTP without authentication, and e.g. Google throws such messages into spam, but a filter can be set up for them. Alternatively, an SPF record could be configured for the machine's domain, but I haven't played with that. A local SMTP server would probably also work. Currently the mailing configuration must be placed in the file mail.cfg, which contains per-line settings (see the example in the repo).

So if run periodically, the program should notify about new regressions.

Regards,
Adam
On Thu, Oct 30, 2014 at 10:23 AM, Adam Wulkiewicz wrote:

Paul A. Bristow wrote:
So I developed a simple program that downloads summaries and test logs and saves the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer

Neat :-)

+1

I'd certainly find this very helpful when checking the Boost.Math test monster matrix.

Yes, it's big, and various kinds of errors can be found there.

FYI, it has some new features. It can keep a log of the tests previously detected as failing and generate a report about new failures.

It's also possible to send an email containing this report (to many recipients). It uses SMTP without authentication, and e.g. Google throws such messages into spam, but a filter can be set up for them. Alternatively, an SPF record could be configured for the machine's domain, but I haven't played with that. A local SMTP server would probably also work. Currently the mailing configuration must be placed in the file mail.cfg, which contains per-line settings (see the example in the repo).

So if run periodically, the program should notify about new regressions.

That's also very helpful!
The big question in my mind is whether we should enhance the current
regression testing system or start fresh using one of the continuous
integration frameworks?
--Beman
Beman Dawes wrote
The big question in my mind is whether we should enhance the current regression testing system or start fresh using one of the continuous integration frameworks?
I think we should think bigger. I'd like to see us encourage each user to run tests on the libraries he actually uses and post the results to a common area. This would mean that:

a) We would have test results for (all?) the actual configurations that users are using.
b) The testing load would automatically be distributed to the users who actually use a library.
c) Libraries actually being used would be tested.
d) Resources wouldn't be expended on libraries not being used.

In short, distributing testing this way would scale to any number of libraries.

I implemented a first cut of such a system as part of the Boost Incubator. This system uses CMake/CDash as described here: http://rrsd.com/blincubator.com/tools_cmak/

It permits users of libraries in the incubator to check out the test results of other users. You can see how this works by going to http://rrsd.com/blincubator.com/bi_library/safe-numerics/?gform_post_id=426 and clicking the field in "Test Results Dashboard".

In my view:

a) Boost has to continue to grow or die.
b) To continue to grow, it has to be scalable.
c) The best way to do that is to distribute testing to the users.

I see this as one of the ultimate consequences of the Boost modularization effort.

Robert Ramey

--
View this message in context: http://boost.2283326.n4.nabble.com/Enhancing-regression-tests-summary-tp4668...
Sent from the Boost - Dev mailing list archive at Nabble.com.
Adam Wulkiewicz wrote
Hi,
Some time ago I wrote about enhancements to the regression summaries, saying it would be nice to see a more meaningful list of errors, etc. I asked questions and even prepared a PR for regression, to no avail.
So I developed a simple program downloading summaries, test logs and saving the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer
Very nice - but rather than downloading and enhancing the results, wouldn't it be simpler to just enhance the current program? That is, clone the current results display tool, incorporate the enhancements, and then merge the changes into the current tool?

Robert Ramey
On Fri, Oct 31, 2014 at 10:18 AM, Robert Ramey
Adam Wulkiewicz wrote
Hi,
Some time ago I wrote about enhancements to the regression summaries, saying it would be nice to see a more meaningful list of errors, etc. I asked questions and even prepared a PR for regression, to no avail.
So I developed a simple program downloading summaries, test logs and saving the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer
Very nice - but rather than downloading and enhancing the results, wouldn't it be simpler to just enhance the current program? That is, clone the current results display tool, incorporate the enhancements, and then merge the changes into the current tool?
+1

--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
On Fri, Oct 31, 2014 at 11:31 AM, Rene Rivera
On Fri, Oct 31, 2014 at 10:18 AM, Robert Ramey
wrote:

Adam Wulkiewicz wrote
Hi,
Some time ago I wrote about enhancements to the regression summaries, saying it would be nice to see a more meaningful list of errors, etc. I asked questions and even prepared a PR for regression, to no avail.
So I developed a simple program downloading summaries, test logs and saving the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer
Very nice - but rather than downloading and enhancing the results, wouldn't it be simpler to just enhance the current program? That is, clone the current results display tool, incorporate the enhancements, and then merge the changes into the current tool?
+1
+1

--Beman
Beman Dawes wrote:
On Fri, Oct 31, 2014 at 11:31 AM, Rene Rivera
wrote:

On Fri, Oct 31, 2014 at 10:18 AM, Robert Ramey wrote:

Adam Wulkiewicz wrote
Hi,
Some time ago I wrote about enhancements to the regression summaries, saying it would be nice to see a more meaningful list of errors, etc. I asked questions and even prepared a PR for regression, to no avail.
So I developed a simple program that downloads summaries and test logs and saves the enhanced result. You may check it out here: https://github.com/awulkiew/summary-enhancer

Very nice - but rather than downloading and enhancing the results, wouldn't it be simpler to just enhance the current program? That is, clone the current results display tool, incorporate the enhancements, and then merge the changes into the current tool?
+1
+1
Of course it would be simpler. In fact I proposed this some time ago (http://boost.2283326.n4.nabble.com/testing-Proposal-regression-tests-results...), but in practice there wasn't enough feedback/help. At that point I had some questions about the way the summary pages are generated: which tools are used exactly, what parameters are passed, etc.

I found the C++ code of a tool generating summary pages, modified it, and tested it on locally generated XMLs (produced by run.py). I prepared a PR (https://github.com/boostorg/boost/pull/25) adding simple enhancements, to check whether my locally tested program would work with the official setup. I have no access to the machine on which the pages are generated, or even to the actual files sent by the runners, so I can't do any real testing.

Even though I wasn't sure whether this C++ generator is the one that had been used to generate the official summary pages, I wanted to have something that actually works. In fact I suspect that they're generated by the XSLT, because there are some attributes on the summary pages that couldn't be generated by the C++ tool; on the other hand, they can be found in the XSLT files. Furthermore, I suspect that the summary pages for the develop and master branches may be generated somewhat differently, because for some libraries the pages for master are missing (e.g. Geometry, Spirit) - but this is a wild guess.

So finally I decided to implement something myself - something that would be useful for me and anyone interested, but didn't require anyone's involvement. At the same time I wanted to develop a proof of concept showing what I have in mind. So if you have any ideas how to move this forward and add this functionality to the officially used tool, I'm open to suggestions.

Regards,
Adam
participants (5)
- Adam Wulkiewicz
- Beman Dawes
- Paul A. Bristow
- Rene Rivera
- Robert Ramey