On 2/1/17 10:35 AM, Peter Dimov wrote:
Rene Rivera wrote:
And then the fun part of finding out what programs get run and what is slow :-)
From the look of it, most of the time is spent in "Generating links files".
The architecture looks a bit odd. If I understand it correctly, the test runners each generate a big .xml file, which is zipped and uploaded; the report script then downloads all the zips and generates the whole report.
It would be more scalable for the test runners to do most of the work. For instance, what immediately comes to mind is that they could generate the so-called links files directly, instead of combining everything into one .xml that is then decomposed back into individual pages.
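(As a rough illustration of that idea - not the actual Boost report tooling - here is a minimal Python sketch of a runner-side step that writes each links page directly and zips the pages for upload. The TestResult fields and function names are assumptions made up for the example.)

import html
import os
import zipfile
from dataclasses import dataclass

@dataclass
class TestResult:
    library: str        # e.g. "serialization"
    test_name: str
    status: str         # e.g. "pass" / "fail"
    output: str         # captured compile/link/run output

def write_links_pages(results, out_dir):
    # One small page per test result, written on the runner itself
    # instead of being reconstructed later by the report script.
    os.makedirs(out_dir, exist_ok=True)
    pages = []
    for r in results:
        path = os.path.join(out_dir, f"{r.library}-{r.test_name}.html")
        with open(path, "w", encoding="utf-8") as f:
            f.write(f"<html><body><h1>{r.library} / {r.test_name}: {r.status}</h1>"
                    f"<pre>{html.escape(r.output)}</pre></body></html>")
        pages.append(path)
    return pages

def zip_pages(pages, zip_path):
    # The runner uploads a single archive of ready-made pages.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        for p in pages:
            z.write(p, arcname=os.path.basename(p))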
Hmmm - making just one page of errors for each library and tester, rather than a page for each test, would help me out. This would eliminate the truncation of the error messages and reduce the number of potential linked pages (in the serialization library) by a factor of about 1000. That might help performance.
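(Again just a sketch of the suggestion, not existing code: group failures by (library, tester) and emit one error page per pair, with the full untruncated output of every failing test on that page. The dictionary keys are assumptions for illustration.)

import html
from collections import defaultdict

def write_error_pages(results, out_dir="."):
    # Collect all failures for each (library, tester) pair ...
    groups = defaultdict(list)
    for r in results:
        if r["status"] != "pass":
            groups[(r["library"], r["tester"])].append(r)
    # ... and write a single page per pair instead of one page per test.
    for (library, tester), failures in groups.items():
        path = f"{out_dir}/{library}-{tester}-errors.html"
        with open(path, "w", encoding="utf-8") as f:
            f.write(f"<html><body><h1>{library} on {tester}</h1>")
            for r in failures:
                f.write(f"<h2>{html.escape(r['test_name'])}</h2>"
                        f"<pre>{html.escape(r['output'])}</pre>")
            f.write("</body></html>")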
Longer term we could ...
But once we start doing that, we'll end up re-architecting the whole thing! Which is what I would very much like to see - but I wouldn't want to impose such a task on anyone.

Robert Ramey