[testing] QCC causing huge numbers of failures

It looks like QCC got added to the regression system and now I am getting huge regression reports with the Python library failing every test. I'm going to ramble here, because I don't really know what or who to lash out at ;). So apologies in advance if it seems like I'm firing indiscriminately. I hope we can ultimately make things better.

With all due gratitude to Doug for setting it up, I have a hard time not concluding that there's something wrong with the regression nanny system. The psychological impact seems misdirected. I think the goal is that a sudden crop of failures showing up in my mailbox should be seen as a problem I need to address. Too often, though, there's nothing I can do about such failures and they get ignored. In this case, it's just really annoying. These aren't regressions, because Boost.Python never worked with QNX in the past. Why am I getting these reports? Shouldn't whoever enabled these tests have done something to ensure that they wouldn't cause this to happen to "innocent" developers?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams wrote:
> It looks like QCC got added to the regression system and now I am getting huge regression reports with the Python library failing every test.
> I'm going to ramble here, because I don't really know what or who to lash out at ;).

Guilty as charged...

> So apologies in advance if it seems like I'm firing indiscriminately. I hope we can ultimately make things better.

The buckshot approach often works :-)

> With all due gratitude to Doug for setting it up, I have a hard time not concluding that there's something wrong with the regression nanny system. The psychological impact seems misdirected. I think the goal is that a sudden crop of failures showing up in my mailbox should be seen as a problem I need to address.

And they also show up in the NG, which I find useful. And anyway, as I am the QNX platform maintainer, an e-mail to me would soon elicit an explanation.

> Too often, though, there's nothing I can do about such failures and they get ignored. In this case, it's just really annoying. These aren't regressions because Boost.Python never worked with QNX in the past. Why am I getting these reports?

I added QNX6 to the "required" list a few days ago and I am now slowly working through the test failures. We have a solution for Boost.Python, but just haven't implemented it as yet. I was hoping for time to achieve something optimal, but the rush towards 1.34 means that it will be more of a kludge. I promise it will get done early next week. If you check back I think you should find a gradual improvement for qcc.

> Shouldn't whoever enabled these tests have done something to ensure that they wouldn't cause this to happen to "innocent" developers?

Blame the new kid on the block then :-)

Jim

Jim Douglas wrote:
> ...We have a solution for Boost.Python, but just haven't implemented it as yet. I was hoping for time to achieve something optimal, but the rush towards 1.34 means that it will be more of a kludge. I promise it will get done early next week.
David,

Here are the fixes that will ensure that the python library builds and passes the regression tests under QNX6 (both libraries). Two files need to be modified: type_id.cpp & python.jam. The diff files are attached.

In type_id.cpp I have left the original conditional code intact and wrapped the additional QNX conditionals around it. The end result favours clarity over optimisation. Please remove the additional comments I added if you wish. I sincerely hope these changes will not cause other platforms to fail.

Regards
Jim

Index: type_id.cpp
===================================================================
RCS file: /cvsroot/boost/boost/libs/python/src/converter/type_id.cpp,v
retrieving revision 1.15
diff -r1.15 type_id.cpp
14a15,18
> #if defined(__QNXNTO__)
> # include <ostream>
> #else /* defined(__QNXNTO__) */
>
33a38
> #endif /* defined(__QNXNTO__) */
38,39c43,51
< # ifdef __GNUC__
< #  if __GNUC__ < 3
---
> # if defined(__QNXNTO__)
>   namespace cxxabi
>   {
>       extern "C" char* __cxa_demangle(char const*, char*,
>                                       std::size_t*, int*);
>   }
> # else /* defined(__QNXNTO__) */
> # ifdef __GNUC__
> #  if __GNUC__ < 3
43c55
< #  else
---
> #  else
47c59
< #   if __GNUC__ == 3 && __GNUC_MINOR__ == 0
---
> #   if __GNUC__ == 3 && __GNUC_MINOR__ == 0
52,54c64,67
< #   endif
< #  endif
< # endif
---
> #   endif /* __GNUC__ == 3 && __GNUC_MINOR__ == 0 */
> #  endif /* __GNUC__ < 3 */
> # endif /* __GNUC__ */
> # endif /* defined(__QNXNTO__) */
Index: python.jam
===================================================================
RCS file: /cvsroot/boost/boost/tools/build/v1/python.jam,v
retrieving revision 1.94
diff -r1.94 python.jam
88a89,92
> else if $(OS) = QNXNTO
> {
>     PYTHON_EMBEDDED_LIBRARY = python$(PYTHON_VERSION) ;
> }

On Feb 6, 2006, at 1:11 PM, David Abrahams wrote:
> It looks like QCC got added to the regression system and now I am getting huge regression reports with the Python library failing every test.
It sounds like the right course of action here is for this compiler to be marked as unsupported in the explicit-markups file. I suppose there's some question as to who should be responsible for adding that markup.

To provide a different perspective, I've recently had the opportunity to fix some bugs that only became apparent when a new version of an existing compiler was added to the testing regime and I received one of these pesky emails.

ron
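For readers unfamiliar with the file ron mentions: Boost's expected-failure markup lives in an XML file (status/explicit-failures-markup.xml in the tree). A rough sketch of what marking a toolset unusable for a library might look like, based on that file's conventions; the exact toolset pattern and note text here are assumptions, not a committed markup:

```xml
<library name="python">
    <mark-unusable>
        <toolset name="qcc*"/>
        <note author="(maintainer)">
            Boost.Python has not yet been ported to QNX; failures on this
            toolset are expected until the pending patches land.
        </note>
    </mark-unusable>
</library>
```

With such an entry in place, the reporting tools show the toolset as unusable for the library rather than flooding maintainers with per-test regressions.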

On Feb 6, 2006, at 1:11 PM, David Abrahams wrote:
> With all due gratitude to Doug for setting it up, I have a hard time not concluding that there's something wrong with the regression nanny system. The psychological impact seems misdirected. I think the goal is that a sudden crop of failures showing up in my mailbox should be seen as a problem I need to address. Too often, though, there's nothing I can do about such failures and they get ignored. In this case, it's just really annoying. These aren't regressions because Boost.Python never worked with QNX in the past. Why am I getting these reports?
Because QNX is now a release platform, we need to deal with failures somehow, either by marking them up as known failures or fixing them.

I can turn off the part of the regression nanny that e-mails maintainers. That's probably a good idea when we're not near the end of the release cycle. Making the nanny smarter takes more time than I currently have available :(

Doug
participants (5)
- David Abrahams
- Douglas Gregor
- Jim Douglas
- Peter Dimov
- Ronald Garcia