Hello everyone! I'm currently embedding Python and have run into a problem. I have a struct which looks like this:

```cpp
struct TPoint { int x, y; };
```

I want it to be usable from Python, and I wrapped it in two ways: first by writing all the wrapping myself with the Python/C API, and second by wrapping it with Boost.Python:

```cpp
class_<TPoint>("TPoint")
    .def_readwrite("x", &TPoint::x)
    .def_readwrite("y", &TPoint::y);
```

It wraps fine, but here is the problem. I create a huge list (1000000 items) of objects of that type:

```python
def cr_fig(n):
    res = []
    while n > 0:
        res.append(TPoint())
        n -= 1
    return res
```

Then, when I run a simple function that just increments the x member of every object, it takes about 4 seconds to process the whole list (1000000 items) of Boost-wrapped objects, and less than a second to process objects from my own wrapping! How can this be possible? Is Boost.Python really that much slower than plain Python?

Regards, Artyom Chirkov.

P.S. I couldn't find out how to write to the Boost.Python community, so I wrote here. If anyone knows how to reach them, please let me know!
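For reference, here is roughly what the pure-Python side of my comparison looks like. The timing harness below is a sketch, not the exact script I used, but it shows the shape of the measurement:

```python
import time

# Pure-Python stand-in for the wrapped C++ struct.
class TPoint:
    def __init__(self):
        self.x = 0
        self.y = 0

def cr_fig(n):
    # Build a list of n fresh TPoint objects.
    res = []
    while n > 0:
        res.append(TPoint())
        n -= 1
    return res

def inc_x(points):
    # The "simple function" from my test: bump x on every object.
    for p in points:
        p.x += 1

points = cr_fig(1000000)
start = time.perf_counter()
inc_x(points)
print("Incrementing 1000000 points took %.3f s" % (time.perf_counter() - start))
```

Swapping the pure-Python `TPoint` for the extension type gives the Boost.Python numbers.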
On Thu, Jul 09, 2009 at 01:00:47PM +0400, Tyomich on the AIR wrote:
P.S. I couldn't find out how to write to Boost.Python community, so I wrote here. If anyone knows how to do that, please let me know!
http://mail.python.org/mailman/listinfo/cplusplus-sig
They deal in all kinds of Python interop, with a tendency towards BPL (Boost.Python).
-- Lars Viklund | zao@acc.umu.se | 070-310 47 07
On Thursday 09 July 2009 05:00:47 Tyomich on the AIR wrote:
Then, when I run a simple function that just increments the x member of every object, it takes about 4 seconds to process the whole list (1000000 items) of Boost-wrapped objects, and less than a second to process objects from my own wrapping! How can this be possible? Is Boost.Python really that much slower than plain Python?
Boost.Python is a little slower than wrapping directly using Python C-API, especially in very simple cases like yours. (It is a variant of the abstraction penalty.) However, it is hard to believe that it is 4x slower. Would you mind posting complete compilable examples so that we can take a look at it? Regards, Ravi
Boost.Python is a little slower than wrapping directly using Python C-API, especially in very simple cases like yours. (It is a variant of the abstraction penalty.) However, it is hard to believe that it is 4x slower. Would you mind posting complete compilable examples so that we can take a look at it? Regards, Ravi
Sure! Here they are:
http://www.filefactory.com/file/ahda0d8/n/_export_Boost_vs_pure_Python_7z

I tried getting, setting, and increasing the member of the struct. Here are my results (time in seconds to perform 1000000 operations):

Getting boost.python point:    0.30395876051
Getting Python point:          0.096010029312
Setting boost.python point:    0.373980616451
Setting Python point:          0.109051436823
Increasing boost.python point: 0.708493801882
Increasing Python point:       0.20538183019

As you can see, Boost.Python is apparently 3-4 times slower than the equivalent in pure Python.
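Working out the per-operation slowdown from those figures (a quick sketch, using the numbers above):

```python
# Reported timings in seconds for 1000000 operations:
# (Boost.Python time, pure-Python time)
timings = {
    "get":      (0.30395876051,  0.096010029312),
    "set":      (0.373980616451, 0.109051436823),
    "increase": (0.708493801882, 0.20538183019),
}

for op, (boost_t, py_t) in timings.items():
    # Ratio of Boost.Python time to pure-Python time.
    print("%-8s %.2fx slower" % (op, boost_t / py_t))
```

So each individual attribute access is roughly 3.2x to 3.5x slower through the Boost.Python wrapper than on a plain Python object.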
participants (3)
-
Lars Viklund
-
Ravi
-
Tyomich on the AIR