
David Abrahams wrote:
> It looks like QCC got added to the regression system and now I am getting huge regression reports with the Python library failing every test.
>
> I'm going to ramble here, because I don't really know what or who to lash out at ;).
Guilty as charged...

> So apologies in advance if it seems like I'm firing indiscriminately. I hope we can ultimately make things better.

The buckshot approach often works :-)
> With all due gratitude to Doug for setting it up, I have a hard time not concluding that there's something wrong with the regression nanny system. The psychological impact seems misdirected. I think the goal is that a sudden crop of failures showing up in my mailbox should be seen as a problem I need to address.
And they also show up in the NG, which I find useful. And anyway, as I am the QNX platform maintainer, an e-mail to me would soon elicit an explanation.
> Too often, though, there's nothing I can do about such failures and they get ignored. In this case, it's just really annoying. These aren't regressions, because Boost.Python never worked with QNX in the past. Why am I getting these reports?
I added QNX6 to the "required" list a few days ago and am now slowly working through the test failures. We have a solution for Boost.Python, but just haven't implemented it yet. I was hoping for time to achieve something optimal, but the rush towards 1.34 means that it will be more of a kludge. I promise it will get done early next week. If you check back, I think you should find a gradual improvement for qcc.
> Shouldn't whoever enabled these tests have done something to ensure that they wouldn't cause this to happen to "innocent" developers?
Blame the new kid on the block then :-)

Jim