[regression] Discrepancy with display and results...

I just noticed a discrepancy between the regression tables, the MetaComm ones, and the output reported for the failure. If one looks at the Spirit summary:

http://www.boost.org/regression-logs/cs-win32_metacomm/developer/spirit.html (Boost regression: spirit/CVS main trunk)

There are two failures, action_tests and action_tests_debug. But if one looks at either for the failure, you see that there is no failure:

Boost regression - test run output: spirit - action_tests / cwpro8

Compiler output:
call "C:\Program Files\Metrowerks\CodeWarrior\Other Metrowerks Tools\Command Line Tools\cwenv.bat" -quiet
mwcc -maxerrors 5 -maxwarnings 20 -c -warn on,nounusedexpr,nounused -cwd include -DNOMINMAX -nowraplines -lang c++ -g -O0 -inline off -prefix UseDLLPrefix.h -runtime dmd -iso_templates on -I"C:\Users\Administrator\boost\main\results\bin\boost\libs\spirit\test" -I"..\libs\spirit" -I".." -I- -I"C:\Users\Administrator\boost\main\boost" -o "C:\Users\Administrator\boost\main\results\bin\boost\libs\spirit\test\action_tests.test\cwpro8\debug\unit_test.obj" "..\libs\spirit\test\./actor/unit_test.cpp"

Linker output:
call "C:\Program Files\Metrowerks\CodeWarrior\Other Metrowerks Tools\Command Line Tools\cwenv.bat" -quiet
mwld -search -maxerrors 5 -maxwarnings 20 -export dllexport -nowraplines -g -subsystem console -o "C:\Users\Administrator\boost\main\results\bin\boost\libs\spirit\test\action_tests.test\cwpro8\debug\action_tests.exe" @"C:\Users\Administrator\boost\main\results\bin\boost\libs\spirit\test\action_tests.test\cwpro8\debug\action_tests.CMD"

Run output:
Running 14 test cases...
*** No errors detected
EXIT STATUS: 0

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

Rene Rivera writes:
I just noticed a discrepancy between the regression tables, the MetaComm ones, and the output reported for the failure. If one looks at the Spirit summary:
http://www.boost.org/regression-logs/cs-win32_metacomm/developer/spirit.html (Boost regression: spirit/CVS main trunk)
There are two failures, action_tests and action_tests_debug. But if one looks at either for the failure, you see that there is no failure:
[snip the output]

Rene,

I think you happened to look at the reports while they were in the process of updating -- the new output pages simply weren't copied to the site yet, while everything else was already updated. The failures are there now -- http://tinyurl.com/4ktrx.

Another possibility is that your browser missed the fact that the page was updated and was showing you a cached version. This is a known issue, but since it bit us, I guess it's time to fix it. We'll look into putting both the report time and the compile/link/run timestamps on each of these pages.

--
Aleksey Gurtovoy
MetaCommunications Engineering
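A rough sketch of the kind of stamping mentioned above, assuming a simple shell post-processing pass over the generated HTML; the script, paths, and sed approach are illustrative assumptions, not the actual MetaComm report tooling:

    # Hypothetical post-processing step, not the real MetaComm scripts.
    # Stamping each generated page with the report time makes a stale
    # (cached or half-uploaded) page easy to recognize.
    STAMP=$(date -u '+%Y-%m-%d %H:%M UTC')
    for page in results/output/*.html; do
        # append a visible footer just before the closing body tag
        sed -i "s|</body>|<p>Report generated: ${STAMP}</p></body>|" "$page"
    done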

Aleksey Gurtovoy wrote:
I think you happened to look at the reports while they were in the process of updating -- the new output pages simply weren't copied to the site yet, while everything else was already updated. The failures are there now -- http://tinyurl.com/4ktrx. Another possibility is that your browser missed the fact that the page was updated and was showing you a cached version.
I thought about that and waited a while and tried again. I also just tried again and the discrepancy is still there. I'm not talking about the results on your website, but the results on the Boost website (i.e. the SF web servers). I also quit and restarted my browser just to make sure.
This is a known issue, but since it bit us, I guess it's time to fix it. We'll look into putting both the report time and the compile/link/run timestamps on each of these pages.
That would be nice also :-)

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

Rene Rivera writes:
Aleksey Gurtovoy wrote:
I think you happened to look at the reports while they were in the process of updating -- the new output pages simply weren't copied to the site yet, while everything else was already updated. The failures are there now -- http://tinyurl.com/4ktrx. Another possibility is that your browser missed the fact that the page was updated and was showing you a cached version.
I thought about that and waited a while and tried again. I also just tried again and the discrepancy is still there. I'm not talking about the results on your website, but the results on the Boost website (i.e. the SF web servers).
Oh, OK. Those are uploaded as a gzipped tar and then unpacked on SF. Maybe the job got terminated. In general, due to issues like this, the Boost-wide results are way more trustworthy.

--
Aleksey Gurtovoy
MetaCommunications Engineering
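The upload step described above might look roughly like the following; the host, paths, and archive name are illustrative guesses rather than the actual MetaComm job. If the remote unpack is killed part-way through, the summary tables and the per-test output pages end up out of sync, which is exactly the kind of mismatch reported at the start of the thread.

    # Hypothetical upload job -- names and paths are illustrative only.
    tar czf results.tar.gz -C results/output .     # pack the freshly generated report pages
    scp results.tar.gz shell.example.net:~/upload/
    ssh shell.example.net \
        "cd /path/to/htdocs/regression-logs/cs-win32_metacomm && tar xzf ~/upload/results.tar.gz"
    # If the remote tar is terminated mid-run, some pages are new and some are stale.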

Aleksey Gurtovoy wrote:
Oh, OK. Those are uploaded as a gzipped tar and then unpacked on SF. Maybe the job got terminated. In general, due to issues like this, the Boost-wide results are way more trustworthy.
In that case I would suggest dispensing with the XSLT reports on SF entirely. Instead, just upload the generic report to SF, and upload the XSLT reports to your server for the Boost-wide reports. The current situation is just confusing. And since the reports are not reliable, it defeats the purpose of looking at the latest results. You are basically wasting time on the second set of tests you are doing if people can't use them. And not uploading the large set has the benefit of saving disk space on SF ;-)

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

Rene Rivera writes:
Aleksey Gurtovoy wrote:
Oh, OK. Those are uploaded as a gzipped tar and then unpacked on SF. Maybe the job got terminated. In general, due to issues like this, the Boost-wide results are way more trustworthy.
In that case I would suggest dispensing with the XSLT reports on SF entirely. Instead, just upload the generic report to SF, and upload the XSLT reports to your server for the Boost-wide reports.
That was the plan. Just didn't get there yet.
The current situation is just confusing. And since the reports are not reliable, it defeats the purpose of looking at the latest results.
I was hoping that by now everybody would go straight to the Boost-wide reports. Any reason not to (besides habit :)?
You are basically wasting time on the second set of tests you are doing if people can't use them.
It's *the same* set of tests, but you are right, there is no reason to generate these reports now that the Boost-wide ones have stabilized and are actually more reliable.
And not uploading the large set has the benefit of saving disk space on SF ;-)
Sure.

--
Aleksey Gurtovoy
MetaCommunications Engineering

Aleksey Gurtovoy wrote:
Rene Rivera writes:
You are basically wasting time on the second set of tests you are doing if people can't use them.
It's *the same* set of tests
OK, you've managed to confuse me. If that's the case, why do they have different timestamps? And why do they sometimes show different results?

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

"Aleksey Gurtovoy" <agurtovoy@meta-comm.com> writes:
[snip the output]
Rene,
I think you happened to look at the reports while they were in the process of updating -- the new output pages simply weren't copied to the site yet, while everything else was already updated. The failures are there now -- http://tinyurl.com/4ktrx. Another possibility is that your browser missed the
I usually take care of updates like that one by building the new contents into a fresh directory and then overwriting a link that points to it, so things are as atomic as possible.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com
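A minimal sketch of that technique, assuming a Unix host where the web server reaches the report through a symlink; all names below are illustrative, not the actual setup. Build the complete new tree first, then swap the link in a single rename so readers never see a half-copied report.

    # Hypothetical atomic publish -- illustrative names only.
    NEW=report-$(date +%Y%m%d%H%M%S)
    mkdir "$NEW"
    cp -r results/output/. "$NEW"/     # build or copy the complete new report first
    ln -s "$NEW" current.tmp           # link to the finished tree
    mv -T current.tmp current          # GNU mv: rename over the old link in one atomic step
    # "current" is what the web server serves, so it always points at a complete report.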