White cells in regression tables?

I posted this yesterday, but got no reply. Can anyone explain the vast number of white cells in the Boost.Python regression log? In particular, the tests noted below don't look like they're reported correctly in the log.

------
Due to what appears to be an MPL problem, neither libs/python/test/polymorphism.cpp nor libs/python/test/polymorphism2.cpp is compiling on vc6. This fact is reflected only strangely by the posted regression log, as white cells.
------

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

David Abrahams wrote:
I posted this yesterday, but got no reply. Can anyone explain the vast number of white cells in the Boost.Python regression log? In particular, the tests noted below don't look like they're reported correctly in the log.
------
Due to what appears to be an MPL problem, neither libs/python/test/polymorphism.cpp nor libs/python/test/polymorphism2.cpp is compiling on vc6. This fact is reflected only strangely by the posted regression log, as white cells.
------
I didn't see your message yesterday. I don't know why there are white cells. I think all RUN_PYD tests are affected. COMPILE_FAIL and RUN tests seem to get displayed correctly.

There are error reports of this kind for intel-8.0 in the log:

intel-linux-C++-action ../bin/boost/libs/python/test/bienstman3_ext.so/intel-8.0-linux/debug/shared-linkable-true/stdlib-gcc/bienstman3.o
icc: error: could not find directory in which the set of libstdc++ include files resides
. /opt/intel_cc_80/bin/iccvars.sh
icc -c -w1 -DBOOST_PYTHON_DYNAMIC_LIB -g -O0 -cxxlib-gcc -KPIC -I"../bin/boost/libs/python/test" -I"/usr/include" -I"/usr/include/python2.2" -I"/boost/head-regression/boost" -I"../libs/python" -o "../bin/boost/libs/python/test/bienstman3_ext.so/intel-8.0-linux/debug/shared-linkable-true/stdlib-gcc/bienstman3.o" "../libs/python/test/bienstman3.cpp"
...failed intel-linux-C++-action ../bin/boost/libs/python/test/bienstman3_ext.so/intel-8.0-linux/debug/shared-linkable-true/stdlib-gcc/bienstman3.o...

My guess is that this kind of error doesn't get parsed correctly by the test post-processing tools.

Regards,
m
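
For illustration only, here is a minimal Python scan for the "...failed <action> <target>..." markers that a log post-processor has to recognize. This is not process_jam_log itself; the function and variable names are invented, and only the marker format and paths are taken from the log excerpt above.

# Hypothetical sketch: extract (action, target) pairs from bjam failure markers.
def failed_actions(log_lines):
    failures = []
    for raw in log_lines:
        line = raw.strip()
        if line.startswith("...failed ") and line.endswith("..."):
            # e.g. "...failed intel-linux-C++-action <target path>..."
            body = line[len("...failed "):-len("...")]
            action, _, target = body.partition(" ")
            failures.append((action, target))
    return failures

log_excerpt = [
    "intel-linux-C++-action ../bin/boost/libs/python/test/bienstman3_ext.so/intel-8.0-linux/debug/shared-linkable-true/stdlib-gcc/bienstman3.o",
    "icc: error: could not find directory in which the set of libstdc++ include files resides",
    "...failed intel-linux-C++-action ../bin/boost/libs/python/test/bienstman3_ext.so/intel-8.0-linux/debug/shared-linkable-true/stdlib-gcc/bienstman3.o...",
]
print(failed_actions(log_excerpt))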

David Abrahams <dave@boost-consulting.com> writes:
I posted this yesterday, but got no reply. Can anyone explain the vast number of white cells in the Boost.Python regression log? In particular, the tests noted below don't look like they're reported correctly in the log.
White cells (missing test results) are caused by inability of process_jam_log (utility which produces xml result files from bjam log) to handle Boost.Python tests compile/link/execute failures.

Having done some research I believe that the main problem is that the following bjam.log snippet:

...skipped <@boost!libs!python!test\args.test\msvc-stlport\debug\threading-multi>args.run for lack of <@boost!libs!python!test\args.test\msvc-stlport\debug\threading-multi>args.pyd...

is treated by process_jam_log as:

1. Failed to build "args.run" in directory "args.test/..." because file args.pyd in the _same directory_ is missing.
2. If it is in the same directory, then it is something internal to args.test; its failure to build has already been processed and all relevant info has been dumped into the xml results file in the "args.test/..." directory. So process_jam_log just skips this message.

The real situation is different (I am speculating here - our boost regression machine is in the middle of clean run, so I don't have all logs etc.) For one, args_ext.pyd was built in args_ext.pyd directory and xml result file with failure was written in args_ext.pyf/... directory.

It is not clear to me what happens in Boost.Python build next. At this point I would appreciate if somebody clarified to me how Boost.Python build works, so I can figure out how to handle it in process_jam_log.

--
Misha Bergal
MetaCommunications Engineering
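
A minimal Python sketch of the decision described above, assuming each target reference in the skipped-message can be split into its <grist> (location) part and its target name. The regex, function name, and return values are hypothetical and are not taken from the actual process_jam_log source (which is C++).

import re

# Hypothetical: recognize "...skipped <grist>name for lack of <grist>name..."
SKIPPED_RE = re.compile(
    r"\.\.\.skipped\s+<(?P<dir1>[^>]+)>(?P<name1>\S+)"
    r"\s+for lack of\s+<(?P<dir2>[^>]+)>(?P<name2>\S+)\.\.\."
)

def handle_skipped(line):
    m = SKIPPED_RE.search(line)
    if m is None:
        return None
    if m.group("dir1") == m.group("dir2"):
        # Treated as internal to the .test target: the dependency's failure
        # is assumed to have been written to the xml results already, so the
        # message is dropped.
        return "ignore"
    # The dependency was built elsewhere (e.g. an _ext.pyd directory), so
    # dropping the message would lose the test result entirely -- a white cell.
    return "report missing dependency"

line = (r"...skipped <@boost!libs!python!test\args.test\msvc-stlport\debug"
        r"\threading-multi>args.run for lack of "
        r"<@boost!libs!python!test\args.test\msvc-stlport\debug"
        r"\threading-multi>args.pyd...")
print(handle_skipped(line))  # -> ignore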

Misha Bergal <mbergal@meta-comm.com> writes:
David Abrahams <dave@boost-consulting.com> writes:
I posted this yesterday, but got no reply. Can anyone explain the vast number of white cells in the Boost.Python regression log? In particular, the tests noted below don't look like they're reported correctly in the log.
White cells (missing test results) are caused by inability of process_jam_log (utility which produces xml result files from bjam log) to handle Boost.Python tests compile/link/execute failures.
Having done some research I believe that the main problem is that the following bjam.log snippet:
...skipped <@boost!libs!python!test\args.test\msvc-stlport\debug\threading-multi>args.run for lack of <@boost!libs!python!test\args.test\msvc-stlport\debug\threading-multi>args.pyd...
is treated by process_jam_log as:
1. Failed to build "args.run" in directory "args.test/..." because file args.pyd in the _same directory_ is missing.
2. If it is in the same directory, then it is something internal to args.test; its failure to build has already been processed and all relevant info has been dumped into the xml results file in the "args.test/..." directory. So process_jam_log just skips this message.
The real situation is different (I am speculating here - our boost regression machine is in the middle of clean run, so I don't have all logs etc.)
For one, args_ext.pyd was built in args_ext.pyd directory and xml result file with failure was written in args_ext.pyf/... directory.
It is not clear to me what happens in Boost.Python build next.
At this point I would appreciate if somebody clarified to me how Boost.Python build works, so I can figure out how to handle it in process_jam_log.
Sorry, I don't remember :(

I'll try to do an analysis later today. Thanks for looking at this.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

David Abrahams writes:
Misha Bergal <mbergal@meta-comm.com> writes:
David Abrahams <dave@boost-consulting.com> writes:
I posted this yesterday, but got no reply. Can anyone explain the vast number of white cells in the Boost.Python regression log? In particular, the tests noted below don't look like they're reported correctly in the log.
White cells (missing test results) are caused by inability of process_jam_log (utility which produces xml result files from bjam log) to handle Boost.Python tests compile/link/execute failures.
Having done some research I believe that the main problem is that the following bjam.log snippet:
...skipped <@boost!libs!python!test\args.test\msvc-stlport\debug\threading-multi>args.run for lack of <@boost!libs!python!test\args.test\msvc-stlport\debug\threading-multi>args.pyd...
is treated by process_jam_log as:
1. Failed to build "args.run" in directory "args.test/..." because file args.pyd in the _same directory_ is missing.
2. If it is in the same directory, then it is something internal to args.test; its failure to build has already been processed and all relevant info has been dumped into the xml results file in the "args.test/..." directory. So process_jam_log just skips this message.
The real situation is different (I am speculating here - our boost regression machine is in the middle of clean run, so I don't have all logs etc.)
For one, args_ext.pyd was built in args_ext.pyd directory and xml result file with failure was written in args_ext.pyf/... directory.
It is not clear to me what happens in Boost.Python build next.
At this point I would appreciate if somebody clarified to me how Boost.Python build works, so I can figure out how to handle it in process_jam_log.
Sorry, I don't remember :(
I'll try to do an analysis later today. Thanks for looking at this.
Dave, did you have a chance to look at it?

--
Aleksey Gurtovoy
MetaCommunications Engineering

Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
For one, args_ext.pyd was built in args_ext.pyd directory and xml result file with failure was written in args_ext.pyf/... directory.
I don't think there's anything in the build process that should make an extension of .pyf
It is not clear to me what happens in Boost.Python build next.
At this point I would appreciate if somebody clarified to me how Boost.Python build works, so I can figure out how to handle it in process_jam_log.
Sorry, I don't remember :(
I'll try to do an analysis later today. Thanks for looking at this.
Dave, did you have a chance to look at it?
I'm not sure if this is the info Misha needs, but:

The embedding test is essentially just a normal test that builds and runs an application. All of the other tests have two parts: building one or more shared libraries (the pyd files are just dlls with a different extension), and running the Python application to test them. The .run file contains the output of invoking Python, and if the result code is zero, the .test file gets created to mark the test as having succeeded.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com
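
A rough Python sketch of that two-stage flow, assuming only what is described above. This is not the real Boost.Build/bjam rule; the helper names, parameters, and file handling are invented for illustration.

import subprocess
import sys

def build_extension(sources):
    # Placeholder for the toolset compile/link actions that produce the
    # .pyd (a DLL with a different extension); here it always "succeeds".
    return True

def run_pyd_test(ext_sources, test_script, run_file, test_file):
    # Stage 1: build the extension module(s).
    if not build_extension(ext_sources):
        return False  # compile/link failure: no .run, no .test

    # Stage 2: run the Python driver and capture its output in the .run file.
    with open(run_file, "w") as out:
        rc = subprocess.call([sys.executable, test_script],
                             stdout=out, stderr=subprocess.STDOUT)

    # Only a zero result code gets the .test marker created, i.e. the test
    # is recorded as having succeeded.
    if rc == 0:
        open(test_file, "w").close()
        return True
    return False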

David Abrahams <dave@boost-consulting.com> writes:
Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
For one, args_ext.pyd was built in args_ext.pyd directory and xml result file with failure was written in args_ext.pyf/... directory.
I don't think there's anything in the build process that should make an extension of .pyf
It is not clear to me what happens in Boost.Python build next.
At this point I would appreciate if somebody clarified to me how Boost.Python build works, so I can figure out how to handle it in process_jam_log.
Sorry, I don't remember :(
I'll try to do an analysis later today. Thanks for looking at this.
Dave, did you have a chance to look at it?
I'm not sure if this is the info Misha needs, but: The embedding test is essentially just a normal test that builds and runs an application. All of the other tests have two parts: building one or more shared libraries (the pyd files are just dlls with a different extension), and running the Python application to test them. The .run file contains the output of invoking Python, and if the result code is zero, the .test file gets created to mark the test as having succeeded.
Thanks for the info.

I've made a patch to bjam (make1.c) which seems to help (see the results for const_argument at http://tinyurl.com/6uw2m). I am not an expert on Boost.Build, so I am not sure that what I did was the best way to deal with the problem. I will post the patch to the boost.build mailing list to get some expert feedback on it.

--
Misha Bergal
MetaCommunications Engineering
participants (4)
- Aleksey Gurtovoy
- David Abrahams
- Martin Wille
- Misha Bergal