What's up with regression tests?

I only see linux tests at http://boost.sourceforge.net/regression-logs/ What about Win32? Weren't there other platforms, also? -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams writes:
I only see linux tests at http://boost.sourceforge.net/regression-logs/
What about Win32? Weren't there other platforms, also?
Of course. Boost-wide reports have them all (http://www.meta-comm.com/engineering/boost-regression/developer, as per the links at the top of the page you've cited), but you surely know that (?). -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
David Abrahams writes:
I only see linux tests at http://boost.sourceforge.net/regression-logs/
What about Win32? Weren't there other platforms, also?
Of course. Boost-wide reports have them all (http://www.meta-comm.com/engineering/boost-regression/developer, as per the links at the top of the page you've cited), but you surely know that (?).
No, I really didn't. What does it mean that I only see those linux tests on the front page? How do those tests relate to the others? -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
David Abrahams writes:
I only see linux tests at http://boost.sourceforge.net/regression-logs/
What about Win32? Weren't there other platforms, also?
Of course. Boost-wide reports have them all (http://www.meta-comm.com/engineering/boost-regression/developer, as per the links at the top of the page you've cited), but you surely know that (?).
No, I really didn't.
What does it mean that I only see those linux tests on the front page?
It means that Martin is the only tester who is also posting results to the boost.org/regression-results location. Everyone else is posting only to the metacomm aggregator. Although when I'm actively running tests, near release time, I also post them to both places :-)
How do those tests relate to the others?
At this point those are the same as the Martin/Linux results that show up on the metacomm site. There's usually a lag between the two, since the regression-logs results show up as soon as they are posted, but the metacomm ones are delayed by the extra XSLT processing. I don't know what the delay and cycle are nowadays... Aleksey? --grafik

Rene Rivera wrote:
How do those tests relate to the others?
At this point those are the same as the Martin/Linux results that show up on the metacomm site. There's usually a lag between the two, since the regression-logs results show up as soon as they are posted, but the metacomm ones are delayed by the extra XSLT processing. I don't know what the delay and cycle are nowadays... Aleksey?
Martin's results at metacomm still show some stale failures from nearly a month ago, despite the fact that he is running tests every day and posting them to the boost.org/regression-results location. For example: http://tinyurl.com/6nllc. Any idea why this is? Jonathan

Jonathan Turkanis wrote:
Rene Rivera wrote:
How do those tests relate to the others?
At this point those are the same as the Martin/Linux results that show up on the metacomm site. There's usually a lag between the two, since the regression-logs results show up as soon as they are posted, but the metacomm ones are delayed by the extra XSLT processing. I don't know what the delay and cycle are nowadays... Aleksey?
Martin's results at metacomm still show some stale failures from nearly a month ago, despite the fact that he is running tests every day and posting them to the boost.org/regression-results location. For example: http://tinyurl.com/6nllc.
Any idea why this is?
The only thing I can think of, without really looking :-), is that Martin is running incremental tests. In that case removed tests need to be cleared out manually. --grafik

Rene Rivera wrote:
Jonathan Turkanis wrote:
Martin's results at metacomm still show some stale failures from nearly a month ago, despite the fact that he is running tests every day and posting them to the boost.org/regression-results location. For example: http://tinyurl.com/6nllc.
Any idea why this is?
The only thing I can think of, without really looking :-), is that Martin is running incremental tests. In that case removed tests need to be cleared out manually.
Yes, I am running the tests incrementally, because a clean run takes a full CPU day. I have requested several times that testers be notified of deleted tests and of other changes that might require purging old results. Are the stale results for tests that no longer exist? Regards, m
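The manual clean-out Rene describes could be scripted. A minimal sketch, assuming results are stored as one XML file per test under a results directory and that a plain-text list of the currently existing tests is available -- both are assumptions for illustration, not the layout the actual Boost regression tools use:

```shell
# Hypothetical sketch of clearing stale results after an incremental run.
purge_stale() {
    results_dir=$1   # directory holding <test-name>.xml result files
    test_list=$2     # file with one current test name per line

    for f in "$results_dir"/*.xml; do
        [ -e "$f" ] || continue                  # directory may be empty
        name=$(basename "$f" .xml)
        # Delete any result whose test no longer appears in the list.
        grep -qx "$name" "$test_list" || rm -- "$f"
    done
}
```

With a layout like that, running `purge_stale results current_tests.txt` after each incremental cycle would keep removed tests from lingering in the reports.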

Rene Rivera <grafik.list@redshift-software.com> writes:
David Abrahams wrote:
Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
David Abrahams writes:
I only see linux tests at http://boost.sourceforge.net/regression-logs/
What about Win32? Weren't there other platforms, also?
Of course. Boost-wide reports have them all (http://www.meta-comm.com/engineering/boost-regression/developer, as per the links at the top of the page you've cited), but you surely know that (?).
No, I really didn't.
What does it mean that I only see those linux tests on the front page?
It means that Martin is the only tester who is also posting results to the boost.org/regression-results location. Everyone else is posting only to the metacomm aggregator. Although when I'm actively running tests, near release time, I also post them to both places :-)
Okay, we have to fix that. This is just confusing. I really thought we had already decided that there was no reason to post two different result formats. Is there any reason to keep the non-metacomm format up there? -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Okay, we have to fix that. This is just confusing.
I really thought we had already decided that there was no reason to post two different result formats. Is there any reason to keep the non-metacomm format up there?
Right now it's the only way for me to get current Linux results, since the meta-comm Linux results for iostreams are a month old. I've posted a number of messages to the testing list, starting at least three weeks ago, but the problem has not been fixed, and I haven't even discovered whether the problem is on Martin's end, meta-comm's end, or both. Jonathan

David Abrahams wrote:
Rene Rivera <grafik.list@redshift-software.com> writes:
It means that Martin is the only tester who is also posting results to the boost.org/regression-results location. Everyone else is posting only to the metacomm aggregator. Although when I'm actively running tests, near release time, I also post them to both places :-)
Okay, we have to fix that. This is just confusing.
I really thought we had already decided that there was no reason to post two different result formats. Is there any reason to keep the non-metacomm format up there?
We did agree to only have the metacomm results. But I'm not sure all of us feel that the metacomm results are giving accurate and responsive results. For one, the processing delays seem rather long. I'm planning to work on the Boost buildbot next week, after a bit more work on getting my server ready. Hopefully the direct feedback that buildbot gives can help us streamline testing. One thing we could do immediately is to stop using the regression-logs location as the testing link and instead point directly to the metacomm location. Those who still care about the old-style results can remember to go to the regression-logs location. (Or anything else equivalent to deprecating the regression-logs location.) --grafik

Rene Rivera wrote:
David Abrahams wrote:
Rene Rivera <grafik.list@redshift-software.com> writes:
It means that Martin is the only tester who is also posting results to the boost.org/regression-results location. Everyone else is posting only to the metacomm aggregator. Although when I'm actively running tests, near release time, I also post them to both places :-)
Okay, we have to fix that. This is just confusing.
I really thought we had already decided that there was no reason to post two different result formats. Is there any reason to keep the non-metacomm format up there?
We did agree to only have the metacomm results.
If we did, then why do we have 1 manual redirection to apply and 2 automatic redirections to see the results in the format we agreed on? (Plus another link to click to see the summary, instead of a page showing basically only a menu.)
But I'm not sure all of us feel that the metacomm results are giving accurate and responsive results. For one, the processing delays seem rather long.
Yes, that's the main reason why I still provide the old format, too. Additionally, the metacomm postprocessing has failed several times, and the developers didn't have any up-to-date test results. I'm still under the impression that the XML-based postprocessing procedure is a bit fragile.
I'm planning on working on the Boost buildbot next week, after a bit more work on getting my server ready. Hopefully the direct feedback that buildbot gives can help us streamline testing.
One thing we could do immediately is to not use the regression-logs location as the testing link. And instead point directly to the metacomm location. So for those who still care about the old style results they can remember to go to the regression-logs location. (Or anything else equivalent to deprecating the regression-logs location)
This is a good idea. Regards, m

Rene Rivera writes:
How do those tests relate to the others?
At this point those are the same as the Martin/Linux results that show up on the metacomm site. There's usually a lag between the two, since the regression-logs results show up as soon as they are posted, but the metacomm ones are delayed by the extra XSLT processing. I don't know what the delay and cycle are nowadays... Aleksey?
I believe the current delay is ~3 hours, and the cycle is, well, 24/7. We are about to revise the way the results are processed to reduce the integration delay to a minimum, say, 30 minutes or less. -- Aleksey Gurtovoy MetaCommunications Engineering

David Abrahams wrote:
I only see linux tests at http://boost.sourceforge.net/regression-logs/
What about Win32? Weren't there other platforms, also?
It has been like that for months. I recall a couple of posts indicating that I hadn't clicked in the right place, which didn't make much sense to me. -t
participants (6)
- Aleksey Gurtovoy
- David Abrahams
- Jonathan Turkanis
- Martin Wille
- Rene Rivera
- troy d. straszheim