[1.34.0] Code Freeze / Marking Remaining Failures

Hi, The RC_1_34_0 branch is now in effect; please don't submit changes unless authorized by me. While I am on the subject: Dave is currently working on some Python build fixes that will go into 1.34.0. Once that is done, we can start marking the remaining failures as expected. Please use the following note when doing so: "[1.34.0] This failure was introduced in 1.34.0. There is no known fix." When using more specific wording, please add the [1.34.0] tag to the note so that we can grep for it. Also, please wait with the marking until I have gotten around to setting a tag in CVS. Thanks, Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Hi,
The RC_1_34_0 branch is now in effect; please don't submit changes unless authorized by me. While I am on the subject: Dave is currently working on some Python build fixes that will go into 1.34.0. Once that is done, we can start marking the remaining failures as expected. Please use the following note when doing so: "[1.34.0] This failure was introduced in 1.34.0. There is no known fix." When using more specific wording, please add the [1.34.0] tag to the note so that we can grep for it.
Also, please wait with the marking until I have gotten around to setting a tag in CVS.
There's apparently a very recent regression in Boost.Optional. I hope that can be fixed before we mark it as expected. Regards, m

Martin Wille wrote:
There's apparently a very recent regression in Boost.Optional. I hope that can be fixed before we mark it as expected.
I believe you are referring to the 'import_' test in boost.python. That's a new test I added (after review) to demonstrate an existing failure, together with a fix. It works fine for me. The fact that it fails everywhere makes me wonder whether this is build-system related, i.e. whether I missed something in the Jamfile (v2). Unfortunately, the exact (bjam) command to run the test isn't printed in the report, so I have a hard time trying to reproduce the failure. Running 'bjam' from inside the tests directory works fine. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Martin Wille wrote:
There's apparently a very recent regression in Boost.Optional. I hope that can be fixed before we mark it as expected.
I believe you are referring to the 'import_' test in boost.python.
No, I'm not. I'm referring to the failures of optional_test. Regards, m

Stefan Seefeld wrote:
Martin Wille wrote:
There's apparently a very recent regression in Boost.Optional. I hope that can be fixed before we mark it as expected.
I believe you are referring to the 'import_' test in boost.python. That's a new test I added (after review) to demonstrate an existing failure, together with a fix. It works fine for me. The fact that it fails everywhere makes me wonder whether this is build-system related, i.e. whether I missed something in the Jamfile (v2).
In case you didn't see my message on IRC: I ran your test manually and, according to strace, it tried to look for modules in the wrong directory. I had the same problem two days ago and I solved it by calling Py_SetProgramName() and passing the path to the python binary to it. HTH, m
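For reference, a minimal sketch of that workaround, assuming the Python 2.x C API used at the time (the interpreter path below is a placeholder, not taken from the thread):

    #include <Python.h>

    int main()
    {
        // Tell the interpreter which binary it should pretend to be, so
        // that it derives its module search paths from that location.
        // This must be called before Py_Initialize(). The path is only
        // a placeholder for the real interpreter location.
        Py_SetProgramName(const_cast<char*>("/usr/local/bin/python"));
        Py_Initialize();

        // ... embedded-Python work goes here ...

        Py_Finalize();
        return 0;
    }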

Martin Wille wrote:
Stefan Seefeld wrote:
Martin Wille wrote:
There's apparently a very recent regression in Boost.Optional. I hope that can be fixed before we mark it as expected.
I believe you are referring to the 'import_' test in boost.python. That's a new test I added (after review) to demonstrate an existing failure, together with a fix. It works fine for me. The fact that it fails everywhere makes me wonder whether this is build-system related, i.e. whether I missed something in the Jamfile (v2).
In case you didn't see my message on IRC:
I ran your test manually and, according to strace, it tried to look for modules in the wrong directory. I had the same problem two days ago and I solved it by calling Py_SetProgramName() and passing the path to the python binary to it.
Thanks! While I can certainly call Py_SetProgramName(), I'm quite reluctant to do that just yet. I'd rather reproduce the error first (without falling back to strace). So I ask again: how are the tests run during regression testing? Once I know, I'll apply your suggested change, and if that indeed solves the problem, commit it. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Martin Wille wrote:
I ran your test manually and, according to strace, it tried to look for modules in the wrong directory. I had the same problem two days ago and I solved it by calling Py_SetProgramName() and passing the path to the python binary to it.
Looking into the Py_SetProgramName() docs indicated that the name would be used to construct search paths. Given that this implies mapping things like <prefix>/bin/python to <prefix>/lib/python/site-packages, I don't think this does what I want. However, you appear to be right in that I have to prepend a search path to sys.path. Now I only need to figure out how to tell the application what that is. I spent quite some time digging through the build system, but couldn't figure out how I would do this. Any ideas? Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...
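A rough sketch of what prepending a path to sys.path looks like from the embedding side; the directory literal is a placeholder, since obtaining the real value from the build system is exactly the open question above:

    #include <Python.h>

    int main()
    {
        Py_Initialize();

        // Prepend the directory containing the test module to sys.path.
        // "/path/to/tests" is a placeholder; the real value has to come
        // from outside, e.g. via a command-line argument.
        PyRun_SimpleString(
            "import sys\n"
            "sys.path.insert(0, '/path/to/tests')\n");

        // An 'import import_' would now find the module in that directory.

        Py_Finalize();
        return 0;
    }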

Stefan Seefeld wrote:
Martin Wille wrote:
I ran your test manually and, according to strace, it tried to look for modules in the wrong directory. I had the same problem two days ago and I solved it by calling Py_SetProgramName() and passing the path to the python binary to it.
Looking into the Py_SetProgramName() docs indicated that the name would be used to construct search paths. Given that this implies mapping things like <prefix>/bin/python to <prefix>/lib/python/site-packages, I don't think this does what I want. However, you appear to be right in that I have to prepend a search path to sys.path.
We might have another problem then. You're right: if your module isn't in the standard search path, then Py_SetProgramName can't help. However, while Boost.Python gets built using a Python installation under /usr/local here, the tests search for Python modules in /usr/lib. This indicates that probably all tests search in the wrong place. This is apparently not harmful if the Python versions match, but I wouldn't rely on that to work in the long run. Maybe *all* tests need to call Py_SetProgramName. Maybe a helper for that should be built into Boost.Python. Regards, m
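Such a helper might look roughly like the following sketch; the function name and the BOOST_PYTHON_INTERPRETER define are illustrative assumptions, not existing Boost.Python API:

    #include <Python.h>

    // Hypothetical compile-time define naming the interpreter that
    // Boost.Python was built against; it would have to be supplied by
    // the build system. Not an existing Boost.Python macro.
    #ifndef BOOST_PYTHON_INTERPRETER
    # define BOOST_PYTHON_INTERPRETER "/usr/local/bin/python"
    #endif

    // Hypothetical helper: make the embedded interpreter derive its
    // search paths from the build-time interpreter, then initialize it.
    inline void initialize_interpreter()
    {
        Py_SetProgramName(const_cast<char*>(BOOST_PYTHON_INTERPRETER));
        Py_Initialize();
    }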

The attached patch appears to fix the problem with the import_ test. Here is how it works:

* The py-run rule now allows an optional input-file argument.
* The import_ test invocation takes the import_.py file as an argument, which bjam magically turns into a relative path.
* The import_.cpp code extracts that path and injects it into the module search path before attempting to 'import import_'.

Thanks to Volodya for helping me with the boost.build change.

OK to check in?

Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Index: libs/python/test/Jamfile.v2
===================================================================
RCS file: /cvsroot/boost/boost/libs/python/test/Jamfile.v2,v
retrieving revision 1.13.2.10
diff -u -r1.13.2.10 Jamfile.v2
--- libs/python/test/Jamfile.v2	1 Mar 2007 18:31:10 -0000	1.13.2.10
+++ libs/python/test/Jamfile.v2	4 Mar 2007 19:18:25 -0000
@@ -5,11 +5,11 @@
 use-project /boost/python : ../build ;
 
 project /boost/python/test ;
 
-rule py-run ( sources * )
+rule py-run ( sources * : input-file ? )
 {
     return [ run $(sources) /boost/python//boost_python /python//python
       : # args
-      : # input files
+      : $(input-file)
       : #requirements
         <define>BOOST_PYTHON_SUPPRESS_REGISTRY_INITIALIZATION
@@ -150,7 +150,7 @@
       /boost/python//boost_python ]
 
 [ bpl-test map_indexing_suite : map_indexing_suite.py map_indexing_suite_ext ]
-[ py-run import_.cpp ]
+[ py-run import_.cpp : import_.py ]
 
 # if $(TEST_BIENSTMAN_NON_BUGS)
 # {
Index: libs/python/test/import_.cpp
===================================================================
RCS file: /cvsroot/boost/boost/libs/python/test/import_.cpp,v
retrieving revision 1.1.2.1
diff -u -r1.1.2.1 import_.cpp
--- libs/python/test/import_.cpp	1 Mar 2007 18:31:10 -0000	1.1.2.1
+++ libs/python/test/import_.cpp	4 Mar 2007 19:18:25 -0000
@@ -7,7 +7,7 @@
 #include <boost/detail/lightweight_test.hpp>
 #include <iostream>
 
-
+#include <sstream>
 
 namespace bpl = boost::python;
@@ -22,9 +22,23 @@
 int main(int argc, char **argv)
 {
+  BOOST_TEST(argc == 2);
 
   // Initialize the interpreter
   Py_Initialize();
-
+
+  // Retrieve the main module
+  bpl::object main = bpl::import("__main__");
+
+  // Retrieve the main module's namespace
+  bpl::object global(main.attr("__dict__"));
+
+  // Inject search path for import_ module
+  std::ostringstream script;
+  script << "import sys, os.path\n"
+         << "path = os.path.dirname('" << argv[1] << "')\n"
+         << "sys.path.insert(0, path)\n";
+  bpl::object result = bpl::exec(bpl::str(script.str()), global, global);
+
   if (bpl::handle_exception(import_test))
   {
     if (PyErr_Occurred())

Stefan Seefeld wrote:
The attached patch appears to fix the problem with the import_ test. Here is how it works:
* The py-run rule now allows an optional input-file argument.
* The import_ test invocation takes the import_.py file as an argument, which bjam magically turns into a relative path.
* The import_.cpp code extracts that path and injects it into the module search path before attempting to 'import import_'.
Thanks to Volodya for helping me with the boost.build change.
OK to check in?
The build-system-related changes seem fine to me. I can't comment on the Boost.Python code with confidence, but nothing jumped out at me either. Thomas, can this go in? - Volodya

Vladimir Prus wrote:
Stefan Seefeld wrote:
OK to check in?
The build-system-related changes seem fine to me. I can't comment on the Boost.Python code with confidence, but nothing jumped out at me either.
I can't comment on either with confidence ;-)
Thomas, can this go in?
Please go ahead. Thanks! Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Hi,
The RC_1_34_0 branch is now in effect; please don't submit changes unless authorized by me. While I am on the subject: Dave is currently working on some Python build fixes that will go into 1.34.0. Once that is done, we can start marking the remaining failures as expected. Please use the following note when doing so: "[1.34.0] This failure was introduced in 1.34.0. There is no known fix." When using more specific wording, please add the [1.34.0] tag to the note so that we can grep for it.
Also, please wait with the marking until I have gotten around to setting a tag in CVS.
Any estimate when such a tag will be set? - Volodya

Vladimir Prus wrote:
Thomas Witt wrote:
Hi,
The RC_1_34_0 branch is now in effect; please don't submit changes unless authorized by me. While I am on the subject: Dave is currently working on some Python build fixes that will go into 1.34.0. Once that is done, we can start marking the remaining failures as expected. Please use the following note when doing so: "[1.34.0] This failure was introduced in 1.34.0. There is no known fix." When using more specific wording, please add the [1.34.0] tag to the note so that we can grep for it.
Also, please wait with the marking until I have gotten around to setting a tag in CVS.
Any estimate when such a tag will be set?
PING? - Volodya

Vladimir, On Mar 6, 2007, at 12:11 PM, Vladimir Prus wrote:
Vladimir Prus wrote:
Thomas Witt wrote:
Also, please wait with the marking until I have gotten around to setting a tag in CVS.
Any estimate when such a tag will be set?
PING?
Once we've dealt with the recently introduced regressions in optional. That is unless we break something else in the meantime. Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Vladimir,
On Mar 6, 2007, at 12:11 PM, Vladimir Prus wrote:
Vladimir Prus wrote:
Thomas Witt wrote:
Also, please wait with the marking until I have gotten around to setting a tag in CVS.
Any estimate when such a tag will be set?
PING?
Once we've dealt with the recently introduced regressions in optional.
I've looked through the other reported regressions, and a large fraction of them appear to be genuine failures, with no recent fix in the process of being tested. Is it really necessary to wait with marking them up?
That is unless we break something else in the meantime.
I would hope nobody will check anything in at this point. - Volodya

Vladimir Prus wrote:
Thomas Witt wrote:
Vladimir,
On Mar 6, 2007, at 12:11 PM, Vladimir Prus wrote:
Vladimir Prus wrote:
Thomas Witt wrote:
Also, please wait with the marking until I have gotten around to setting a tag in CVS.
Any estimate when such a tag will be set?
PING?
Once we've dealt with the recently introduced regressions in optional.
I've looked through the other reported regressions, and a large fraction of them appear to be genuine failures, with no recent fix in the process of being tested. Is it really necessary to wait with marking them up?
PING? Also, the optional results have mostly settled down now -- all recent test results show optional as green. - Volodya

Vladimir, On Mar 10, 2007, at 12:11 PM, Vladimir Prus wrote:
Vladimir Prus wrote:
Also, the optional results have mostly settled down now -- all recent test results show optional as green.
Please see my other post regarding this. I see recent failures; what am I missing? Thanks, Thomas -- Thomas Witt witt@acm.org

Thomas Witt wrote:
Hi,
The RC_1_34_0 branch is now in effect; please don't submit changes unless authorized by me. While I am on the subject: Dave is currently working on some Python build fixes that will go into 1.34.0. Once that is done, we can start marking the remaining failures as expected. Please use the following note when doing so: "[1.34.0] This failure was introduced in 1.34.0. There is no known fix." When using more specific wording, please add the [1.34.0] tag to the note so that we can grep for it.
Thomas, Have I got time to get a page of documentation in for QNX? I finally managed to fix the test machine so that it runs OK. It was a combination of bad memory and a faulty HDD. Regards, Jim Douglas

Jim Douglas wrote:
Have I got time to get a page of documentation in for QNX? I finally managed to fix the test machine so that it runs OK. It was a combination of bad memory and a faulty HDD.
QNX was dropped from the list of officially supported platforms for 1.34 a couple of months back. Given that testing has now resumed, and it is currently passing all expected tests (no fail results), would it be possible to restore it to the officially supported list? This does not impact me personally, but I saw a lot of work go into bringing that platform up to a supported standard. It would be most discouraging to see it dropped for 'editorial' reasons at the last gasp. Luckily this is Thomas's call and not mine <g> -- AlisdairM

AlisdairM wrote:
QNX was dropped from the list of officially supported platforms for 1.34 a couple of months back.
Given that testing has now resumed, and it is currently passing all expected tests (no fail results), would it be possible to restore it to the officially supported list?
No, it is not passing. The error is not flagged correctly. There is still a pending bug in the threading lib. Unfortunately, I do not have access to a compiler, so I was not able to solve this problem. I am afraid it is too late in the release cycle for 1.34, but I would be glad if you could help with this issue, so that it would be ready for 1.35 at least. Roland

participants (7)
- AlisdairM
- Jim Douglas
- Martin Wille
- Roland Schwarz
- Stefan Seefeld
- Thomas Witt
- Vladimir Prus