
Quoting Andrey Semashev <andrey.semashev@gmail.com>:
On Wed, Jul 18, 2012 at 7:29 AM, Joel Falcou <joel.falcou@gmail.com> wrote:
As for pooling, there are still some use cases on embedded systems where you are required to pool some MB at the beginning of the application, then you want to go through a normal allocator/container design to access it. GB of RAM on COTS computers are not the only use case around ;)
On Wednesday 18 July 2012 10:30:46 Klaim - Joël Lamotte wrote:
Exactly. For example, for high-performance video games on any console, there is no way to avoid pooling memory in one way or another. It is possible not to do this, but in most action game contexts it is not acceptable to have slowdowns because of allocations (though there are diverse ways to fix this, pooling memory is a general one).
Ok, but that implies that the pool has to be at least as fast as the system allocator, which Boost.Pool isn't. I admit I am no game developer, but wouldn't a fast allocator over a non-swappable memory region be a better solution?
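(One way to obtain such a non-swappable region on POSIX-like systems is mmap() plus mlock(); the thread does not specify a mechanism, so the sketch below only illustrates the idea, with an arbitrary region size, and is not a proposed design.)

    #include <sys/mman.h>
    #include <cstddef>

    int main()
    {
        const std::size_t size = 16 * 1024 * 1024;        // 16 MB region, arbitrary
        void* region = mmap(0, size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANON, -1, 0); // MAP_ANONYMOUS on some systems
        if (region == MAP_FAILED)
            return 1;
        if (mlock(region, size) != 0) {                   // pin the region: no swapping
            munmap(region, size);
            return 1;
        }
        // ... hand the region to a fast custom allocator (bump, pool, ...) ...
        munlock(region, size);
        munmap(region, size);
        return 0;
    }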
I would be surprised if boost.pool is slower than a typical system allocator. I wonder if you're referring to ordered_malloc() and ordered_free() (which clearly are very slow and probably over-used). A very simple test of boost.pool [1] in 1.47 suggests that it is in fact faster than the system allocator (on Mac OS X 10.7.3).

    $ g++ -O2 -DUSE_BOOST_POOL=1 -IDocuments/dev/boost_1_47_0/ test.cpp
    $ time ./a.out
    real    0m0.422s
    user    0m0.406s
    sys     0m0.003s

    $ g++ -O2 test.cpp
    $ time ./a.out
    real    0m7.065s
    user    0m6.505s
    sys     0m0.021s
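The test at [1] is not reproduced here; a rough sketch of this kind of comparison (the block size, counts, and the USE_BOOST_POOL switch are illustrative, not taken from the actual test) might look like:

    #include <cstddef>
    #include <new>
    #include <vector>

    #ifdef USE_BOOST_POOL
    #include <boost/pool/pool.hpp>
    #endif

    int main()
    {
        const std::size_t block_size = 32;   // arbitrary small object size
        const std::size_t count = 100000;    // blocks alive at once
        const int rounds = 100;              // repeat the allocate/free cycle
        std::vector<void*> blocks(count);

    #ifdef USE_BOOST_POOL
        boost::pool<> p(block_size);
    #endif

        for (int r = 0; r < rounds; ++r)
        {
            for (std::size_t i = 0; i < count; ++i)
            {
    #ifdef USE_BOOST_POOL
                blocks[i] = p.malloc();              // fixed-size chunk from the pool
    #else
                blocks[i] = operator new(block_size);
    #endif
            }
            for (std::size_t i = 0; i < count; ++i)
            {
    #ifdef USE_BOOST_POOL
                p.free(blocks[i]);                   // plain free(), not ordered_free()
    #else
                operator delete(blocks[i]);
    #endif
            }
        }
        return 0;
    }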
Also, and this is important: budgeting memory is a standard practice in some very high-performance games.
Good point, that might be a useful feature.
Another use case is when you plan to allocate a very large number of a specific object. Relying on the system allocator, you may end up in a slab whose allocations are some tens of bytes larger than your actual object. If you allocate a few million of them, the extra memory use can become significant, as can the L1 and L2 cache wasted by not having your objects packed back-to-back in RAM.
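To put rough, purely illustrative numbers on that (none of these figures come from the thread): if a 40-byte object is served from a 64-byte size class, each allocation carries 24 bytes of padding, so

    10,000,000 objects × 24 bytes ≈ 229 MB

of pure overhead, before counting the cache lines diluted by the padding. A fixed-size pool whose chunk size is sizeof(object) keeps the objects back-to-back instead.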
Andrey Semashev said:
There is also an option to repurpose the library. Pooling raw memory may not be that needed today, but pooling constructed objects is another story. More than once I have had cases where constructing an object, including collateral resource allocation and initialization, is a costly operation, and it is only natural to arrange a pool of such objects. While in the pool, the objects are not destroyed but merely "cleaned". An object can be retrieved from and returned to the pool multiple times, which saves the costly initialization. I could use a framework for building such pools. I think this is the most productive direction for developing the library.
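A minimal sketch of the kind of framework being described, assuming the pooled type exposes a cheap clean() operation; the object_cache name and interface are purely illustrative and not part of Boost.Pool:

    #include <cstddef>
    #include <vector>

    template <class T>
    class object_cache
    {
    public:
        ~object_cache()
        {
            for (std::size_t i = 0; i < free_.size(); ++i)
                delete free_[i];
        }

        // Hand out a ready-to-use object; costly construction happens
        // only when the cache is empty.
        T* acquire()
        {
            if (free_.empty())
                return new T();
            T* obj = free_.back();
            free_.pop_back();
            return obj;
        }

        // Return an object to the cache: "clean" it instead of destroying it,
        // so the next acquire() skips the expensive initialization.
        void release(T* obj)
        {
            obj->clean();   // assumed to be a cheap state reset
            free_.push_back(obj);
        }

    private:
        std::vector<T*> free_;
    };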
What I was thinking is that the Object Pool of Boost.Pool is basically what you describe? At least in purpose, maybe not very efficient in the implementation.
It creates and destroys objects within the pooled memory. Not much different from a raw memory pool.
My understanding is that the main complaint with the object_pool in boost.pool is that when it constructs a new object, it uses ordered_malloc() instead of just malloc() (not the system function, but the member of boost::pool). Personally, this has forced me to just use the raw boost::pool and manually call constructors and destructors. This seems like a trivial fix to make to the object_pool type, but I'm not entirely sure why this decision was made in the first place; perhaps there is a good reason.

--
Arvid Norberg

[1] http://codepad.org/vGuWLi0i
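The workaround described above, using a raw boost::pool with manual construction and destruction, might look roughly like this; the widget type is hypothetical and only stands in for some real object:

    #include <boost/pool/pool.hpp>
    #include <new>

    struct widget { int x, y; };             // hypothetical pooled object

    int main()
    {
        boost::pool<> p(sizeof(widget));     // chunks sized exactly to the object

        void* raw = p.malloc();              // plain (non-ordered) allocation
        widget* w = new (raw) widget();      // construct in place
        // ... use w ...
        w->~widget();                        // destroy explicitly
        p.free(raw);                         // plain (non-ordered) free
        return 0;
    }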