[pool2] Requests for comment

Greetings, I am currently working on a replacement for boost::pool. I have a few questions about what an ideal implementation would look like; hopefully some members of the Boost mailing list will be able to help me answer them. The first question requires looking at the source code; my other questions are more general. First question: I have started by defining a pool object which manages fixed-size buffers of a given object type. Documented code is found here: http://svn.boost.org/svn/boost/sandbox/pool2/boost/pool/pool.hpp. The code is short and hopefully well documented. Note that the 'ThreadSafe' template parameter is currently ignored; I have not implemented thread safety yet. My question is about the "pool growth policy". This policy defines how the pool grows when it needs to allocate more memory, and is currently passed as a parameter to the pool constructor. Should this policy be a template argument instead? Moreover, if the policy were a template argument, I could add a constant 'MaximumNumberOfTimesThePoolWillBeGrown' to the policy class. This constant would allow me to replace the std::vector which holds allocated pointers with a static array. Is this the right solution? Other questions: - I have defined a pool class which manages fixed-size buffers of some object T. Do we also need a pool class that can manage variable-size buffers of some object T? - From a pool, I can implement a pool_allocator which inherits from std::allocator, exactly as was done in the original boost::pool. Do we need something else? Thank you for your valuable insights, Etienne Dupuis
This message and any attachments are confidential and intended solely for the addressees. Any unauthorized modification, edition, use or dissemination is prohibited. If you have received this message by mistake, please notify us immediately. ATEME decline all responsibility for this message if it has been altered, deformed, falsified or even edited or disseminated without authorization. Note: To protect against computer viruses, e-mail programs may prevent sending or receiving certain types of file attachments.

On Tue, Oct 9, 2012 at 9:32 AM, DUPUIS Etienne <e.dupuis@ateme.com> wrote:
- From a pool, I can implement a pool_allocator which inherits from std::allocator, exactly like it was done in the original boost::pool. Do we need something else ?
Hi Etienne, I'd like to see a linear allocator: an arena/pool from which objects of multiple sizes can be allocated, but where memory isn't reused and is only freed once the pool is destroyed. And a pool that also supports objects of multiple sizes but does support memory reuse (for use with containers). Olaf

Hi Etienne, I'd like to see a big performance (in time) improvement. The last time I tested, boost::pool was slower than new. Changsheng Jiang On Tue, Oct 9, 2012 at 5:18 PM, Olaf van der Spek <ml@vdspek.org> wrote:
On Tue, Oct 9, 2012 at 9:32 AM, DUPUIS Etienne <e.dupuis@ateme.com> wrote:
- From a pool, I can implement a pool_allocator which inherits
from std::allocator, exactly like it was done in the original boost::pool. Do we need something else ?
Hi Etienne,
I'd like to see a linear allocator. An arena/pool from which objects of multiple sizes can be allocated, but memory isn't reused and only freed once the pool is destroyed.
And a pool that also supports objects of multiple sizes which does support memory reuse (for use with containers).
Olaf

I'd like to see a big performance (in time) improvement. The last time I tested, boost::pool was slower than new.
Changsheng Jiang
It is one of the main goals; the current implementation of pool is pretty much useless given its mediocre speed. Regards, Étienne

Hi Etienne, The problem of returning unused memory in the allocation/deallocation of fixed-size elements is well resolved by the suballocators of the countertree library. They permit fast allocation and deallocation with fast detection of unused chunks of memory. You can find it at: http://dl.dropbox.com/u/8437476/works/countertree/doc/index.html An explanation of the algorithm used is in point 5.2 of the documentation. If I can help you with any question, please say so and I will try. Sincerely yours, Francisco Tapia

Hi Francisco,
The problem of returning unused memory in the allocation/deallocation of fixed-size elements is well resolved by the suballocators of the countertree library. They permit fast allocation and deallocation with fast detection of unused chunks of memory.
You can find it in : http://dl.dropbox.com/u/8437476/works/countertree/doc/index.html
An explanation of the algorithm used is in point 5.2 of the documentation. If I can help you with any question, please say so and I will try.
If I understand correctly, the current pool implementation did not fulfill your needs (both in terms of functionality and performance), and thus you implemented suballocators in the countertree library? Regards, Étienne

Hi, The pool and the suballocator have several things in common, but they are different. The next paragraphs are part of a previous message sent under the countertree tag, where I described it:
"In the allocation process we have two parts: first, from where and how the allocator obtains the memory; second, how the memory is dispatched to the data structure. The suballocator is the second part, for data structures with a big number of fixed-size elements (STL list, set, multiset, map and multimap). About the suballocator: it is a simple layer over the allocator, with the same interface. The data structure receives a suballocator as a template parameter, but the suballocator internally uses an allocator. It requests a big chunk of memory from the allocator and manages it in a fast and easy way. When a chunk of memory is unused by the data structure, it is returned to the allocator. Usually those big chunks of memory are returned by the allocator to the operating system, and this reduces the memory consumption of the program. You can see this by running benchmark_allocator.cpp. The suballocator runs over ANY allocator with the STL allocator interface. It doesn't care about the kind of allocator or the origin of the memory. If you want to know more about the algorithm of the suballocator, point 5.2 of the documentation describes the data structure and algorithms."
You have more detailed information in the documentation: http://dl.dropbox.com/u/8437476/works/countertree/doc/index.html I need a source of memory, and the pool library is an excellent option. I need to study it in depth to know whether the pool has all I need for the next extension of the suballocator. If I need something, or have any idea or suggestion, I will tell you. Dispatching a big number of fixed-size elements has different problems than dispatching variable-size elements.
In the Boost library you have excellent allocators such as fast_pool_allocator, but all of them present a problem: when you deallocate all the elements, they don't return the memory used, which remains at its highest level. The suballocator requests chunks of memory from the allocator, and when they are unused, returns them immediately to the allocator, and from there to the operating system, decreasing the memory used by the program, as you can see in the suballocator benchmarks in the documentation. This problem is well described on the suballocator page of the countertree documentation (it's only one page, less than 5 minutes to read). I think the pool library and the suballocator are complementary in many things, and I am sure you can be useful to me and perhaps I can be useful to you. Joining ideas, we can be more useful to the Boost community. Sincerely yours, Francisco Tapia

Greetings Francisco,
From: Francisco José Tapia
The pool and the suballocator have several things in common, but they are different.
...
The suballocator runs over ANY allocator with the STL allocator interface. It doesn't care about the kind of allocator or the origin of the memory.
...
I need a source of memory, and the pool library is an excellent option. I need to study it in depth to know whether the pool has all I need for the next extension of the suballocator. If I need something, or have any idea or suggestion, I will tell you.
Dispatching a big number of fixed-size elements has different problems than dispatching variable-size elements.
...
In fact your suballocator is a particular case of a pool which:
1. Manages only fixed-size elements, and hence is optimized for this usage.
2. Fetches its memory using another STL allocator.
3. Automatically frees no-longer-used chunks of memory.
The current boost::pool does not address your issues because:
1. The 'object_pool' implementation is horrendously slow.
2. The memory is fetched from a static custom allocator which has a different API than std::allocator.
The pool2 I am designing will address all of these issues; I am not finished, as I am currently trying to learn what users need. The code I referred to in my original post currently manages fixed-size pool buffers, using a std::allocator as a base. A user-supplied policy controls how the pool grows, and the pool can be thread safe or not. The policy will also let the user control whether or not the pool releases unused chunks of memory. The pool itself will be usable as a std::allocator. This first pool class will be complemented by a second one, dedicated to variable-size buffers. Hence the new pool2 should fulfill all the requirements of your suballocator.
Regards, Étienne

On Tue, Oct 16, 2012 at 9:24 AM, DUPUIS Etienne <e.dupuis@ateme.com> wrote:
The pool2 I am designing will address all issues; I am not finished as I am currently trying to know what users need.
Hi, I just wanted to comment on this quickly. I make video games (and digital narration stuff too), though I don't always make them the way the traditional industry does, where object pools, small-allocation pools and arenas are all used heavily. In my own project I mostly use object pools, for some game-specific state objects. Last week I finished implementing a template factory class (not really generic, though) with the following features:
1. It has a "pool" of objects, meaning it has to be stable (objects must not move in memory).
2. It provides a way to go through all the elements very fast.
3. It provides an "index" of objects by "id".
Basically this means I wanted a std::vector of elements, which would be reserved and would not grow higher than a specific size, plus a map<id, element*> for the index (which is not interesting here). The problem was that I didn't want to limit the number of elements, to allow the same code to be usable by the in-game editor, meaning I cannot really fix a maximum number of elements before having edited some "map" of the game. (Maybe I'll fix it later, but I don't want to for some reasons.) Having done the same system before with boost::pool (from the 1.44 version, I think) and not having been happy with the performance, I didn't bother trying to use it. So I implemented it first with std::vector and std::map, then decided to try boost::stable_vector instead. It has some speed disadvantages, but I made some tests and it appears that if you first resize (not reserve) the stable_vector, it is very fast to go through the elements, almost as fast as with a std::vector (in my tests at least). Then I thought that, as I want objects to be created and destroyed in an unpredictable order, a solution would be to make them optional. Currently my solution is implemented using a boost::stable_vector<boost::optional<T>>, which seems efficient (to my surprise).
In particular it allows me to go through all elements quickly, unlike the current pool. It also allows me to fully construct and destroy objects as in a pool, instead of "reusing" them as is often done in other game companies, which doesn't involve constructors and destructors and is a source of maintenance bugs. I don't yet know the drawbacks of using this combination compared to a better pool implementation, so I just wanted to give this feedback and see if there are some things you're working on for pool2 that would make things better. Hope it helps. Joel Lamotte

On Tue, Oct 16, 2012 at 5:00 AM, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote: [...]
Currently, my solution is implemented by using a boost::stable_vector< boost::optional<T>> which seems efficient (to my surprise).
[...] Sounds basically equivalent to a std::vector<T*> or (better) std::vector< std::unique_ptr<T> >, no? I would expect these explicit pointer-based containers to have a marginally smaller memory footprint than stable_vector< optional<T> >. - Jeff

On Tue, Oct 16, 2012 at 2:52 PM, Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung@gmail.com> wrote:
On Tue, Oct 16, 2012 at 5:00 AM, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote: [...]
Currently, my solution is implemented by using a boost::stable_vector< boost::optional<T>> which seems efficient (to my surprise).
[...]
Sounds basically equivalent to a std::vector<T*> or (better) std::vector< std::unique_ptr<T> >, no? I would expect these explicit pointer-based containers to have a marginally smaller memory footprint than stable_vector< optional<T> >.
No, you get tons of cache misses that way, because going through each element in order is really faster when the elements are in contiguous memory. It's not a problem with a low count of objects, obviously, or when you don't do it often, but I have to do it around 50 times per second and with a count of elements that can be high. If you compare using pointers to using optional (which is NOT a pointer and keeps the memory of the element "hot") in a vector, you immediately see the benefit. stable_vector doesn't guarantee contiguity, but it apparently keeps blocks of objects as contiguous as it can. If you reserve your optionals at first, a lot of contiguous memory is allocated for your future objects, which is basically a pool. I'd say the main difference with boost::pool, for example, is that you can't have any input on the memory blocks boost::stable_vector will allocate. I might be wrong about the boost::stable_vector behaviour, but my current tests show that it's a win-win scenario in my case. Joel Lamotte

El 16/10/2012 15:06, Klaim - Joël Lamotte escribió:
stable_vector doesn't guarantee contiguity, but it apparently keeps blocks of objects as contiguous as it can. If you reserve your optionals at first, a lot of contiguous memory is allocated for your future objects, which is basically a pool. I'd say the main difference with boost::pool, for example, is that you can't have any input on the memory blocks boost::stable_vector will allocate.
stable_vector allocs nodes even when just reserving (and of course when resizing). It keeps them in an internal pool, as this offers better exception guarantees than vector: it does not throw a memory error exception (as memory was already reserved) and, of course, it does not have potentially throwing move operations when inserting in the middle.
I might be wrong about the boost::stable_vector behaviour, but my current tests show that it's a win-win scenario in my case.
If you reserve, then stable_vector will call the allocator several times to reserve memory and fill the internal pool. That memory is probably contiguous (it depends on the allocator). In the near future I hope to improve the performance of stable_vector and other node containers by adding "burst-allocation" extensions to a general-purpose heap allocator, an improved version of: http://www.drivehq.com/web/igaztanaga/allocplus/#Chapter2 Current experiments are very encouraging. Ion

On Tue, Oct 16, 2012 at 10:02 PM, Ion Gaztañaga <igaztanaga@gmail.com> wrote:
If you reserve, then stable vector will call the allocator several times to reserve memory and fill the internal pool. That memory is probably contiguous (it depends on the allocator).
Actually, for a reason I don't understand, using resize instead of reserve gives even better results, both for progressive (one-by-one) creation of elements and for going through all the elements (using optional as the element type allows me to create the optional objects without creating the objects they wrap), at least in my tests, which I need to check again. Joel Lamotte

On Tue, Oct 16, 2012 at 1:02 PM, Ion Gaztañaga <igaztanaga@gmail.com> wrote:
El 16/10/2012 15:06, Klaim - Joël Lamotte escribió:
stable_vector doesn't guarantee contiguity, but it apparently keeps blocks of objects as contiguous as it can. If you reserve your optionals at first, a lot of contiguous memory is allocated for your future objects, which is basically a pool. I'd say the main difference with boost::pool, for example, is that you can't have any input on the memory blocks boost::stable_vector will allocate.
stable_vector allocs nodes even when just reserving (and of course when resizing). It keeps them in an internal pool, as this offers better exception guarantees than vector: it does not throw a memory error exception (as memory was already reserved) and, of course, it does not have potentially throwing move operations when inserting in the middle.
I might be wrong about the boost::stable_vector behaviour, but my current tests show that it's a win-win scenario in my case.
If you reserve, then stable vector will call the allocator several times to reserve memory and fill the internal pool. That memory is probably contiguous (it depends on the allocator).
In the near future I hope to improve the performance of stable_vector and other node containers adding "burst-allocation" extensions to a general purpose heap allocator, an improved version of:
Current experiments are very encouraging.
Ah, I didn't know there were these additional optimizations behind the scenes :) - Jeff

El 17/10/2012 4:39, Jeffrey Lee Hellrung, Jr. escribió:
Ah, I didn't know there were these additional optimizations behind the scenes :)
dlmalloc has recently improved "bulk" (called "burst" in my paper) operations, and those ideas can be used to improve adaptive pools, especially deallocation times (fast coalescing of adjacent buffers before entering the complex deallocation code). dlmalloc also implements its own spinlocks on some platforms, whereas my code uses critical sections or pthread mutexes. I need to investigate whether dlmalloc's internal spinlocks could be used in my pools, at least to get a fair comparison ;-) Boost.Container has also improved a bit with regard to extended allocators, so the figures need to be updated. When I realized that the new "plain" dlmalloc performed a bit faster than my pools, I didn't want to fall behind ;-) Ion

On Tue, Oct 16, 2012 at 2:52 PM, Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung@gmail.com> wrote:
Sounds basically equivalent to a std::vector<T*> or (better) std::vector< std::unique_ptr<T> >, no? I would expect these explicit pointer-based containers to have a marginally smaller memory footprint than stable_vector< optional<T> >.
Actually you gave me doubts, so I made sure I was comparing std::vector<std::unique_ptr<T>> with the other variants in my tests. Now apparently I get roughly the same numbers as boost::stable_vector<optional<T>> in the performance measurements, which makes me think my tests might be incorrect. :) I'll take a closer look soon. I'm surprised using new directly is as efficient as resetting an optional value. Joel Lamotte

Greetings,
I don't always make games the way the traditional industry does, where both object pools and small-allocation pools are used heavily (and arenas too). I mostly use object pools in my own project, for some game-specific state objects.
Last week I finished implementing a template factory class (not really generic, though) with the following features: 1. It has a "pool" of objects, meaning it has to be stable (objects must not move in memory). 2. It provides a way to go through all the elements very fast. 3. It provides an "index" of objects by "id". Basically this means I wanted a std::vector of elements, which would be reserved and would not grow higher than a specific size, plus a map<id, element*> for the index (which is not interesting here). The problem was that I didn't want to limit the number of elements, to allow the same code to be usable by the in-game editor, meaning I cannot really fix a maximum number of elements before having edited some "map" of the game. (Maybe I'll fix it later, but I don't want to for some reasons.) Having done the same system before with boost::pool (from the 1.44 version, I think) and not having been happy with the performance, I didn't bother trying to use it.
If I understand correctly, your 'pool' manages live objects, i.e. objects that are currently in use by the application. I was rather working on a pool which manages memory for object allocation; i.e. as soon as an object is released back to the pool, its content is destroyed and lost. Hence there is no way to index elements released to the pool, as we can think of them as no longer existing. Regards, Étienne

On Thu, Oct 18, 2012 at 8:15 AM, DUPUIS Etienne <e.dupuis@ateme.com> wrote:
If I understand correctly, your 'pool' manages live objects, i.e. objects that are currently in use by the application. I was rather working on a pool which manages memory for object allocation; i.e. as soon as an object is released back to the pool, its content is destroyed and lost. Hence there is no way to index elements released to the pool, as we can think of them as no longer existing.
Currently it manages live boost::optional objects, which may or may not have constructed the object they wrap. That is the same as saying it manages raw memory which may or may not have an object constructed in it, which is, I believe, exactly what you describe as a pool. So I need objects to have a specific address (for fast access), to not move in memory, and to be destroyed when no longer used, while the memory stays allocated and ready for another object to be created in it. I also need to go through all the live objects for updates. Assuming I'm using a vector or stable_vector, it is faster to just go through all elements from begin to end and check whether each is a live object, and if it is, update it. If I were using a boost::pool, I would be forced to have, say, a std::vector<T*> containing only pointers to live objects, which would have to be updated whenever objects are created or destroyed. As stable_vector provides iterators to go through all the elements, I don't need to do this at the moment. It's OK with me if a pool system doesn't provide a way to go through all the objects, or all the live objects; however, I always have to set up such a system at some point when I need a pool, most of the time by encapsulating the pool in a factory class that does the job. Joel Lamotte

On Tue, Oct 9, 2012 at 2:32 AM, DUPUIS Etienne <e.dupuis@ateme.com> wrote:
Greetings,
I am currently working on a replacement for boost ::pool. I have a few questions regarding what would be an ideal implementation. Hopefully some members of the Boost mailing list will be able to help me answering these questions. The first question requires to look at the source code; my other questions are more general.
Great job! Maybe I can stop implementing my own pools soon. Some feature requests, if I may:
- An arena allocator: to allocate any-sized objects from, but only deallocate all at once.
- Allow an arena/pool allocator to draw its storage from another pool.
- Configurable allocation efficiency. For instance, boost::pool will find the least common multiple of the page size and block size, which can result in huge allocations in the name of not wasting any memory. Sometimes I might want this; other times I'd be OK wasting a handful of bytes out of every page if it means more reasonably sized initial allocations.
- Configurable alignment. Very useful for e.g. AVX instructions, which can require 32-byte data alignment.
- (Maybe) If you've got an object allocator, have a policy setting that disables calling destructors. Not sure if this belongs in the library due to shooting-own-foot-off potential, but there have been instances where I don't care about destructors because I know all the resources are associated with the pool. In this case an object allocator would become a glorified wrapper for placement new.
-- Cory Nelson http://int64.org

Greetings,
From: Cory Nelson Sent: Tuesday, October 16, 2012, 16:28
- An arena allocator: to allocate any-sized objects from, but only deallocate all at once.
I will take care to have this use case optimized, as you are not the first one to request it.
- Allow an arena/pool allocator to draw its storage from another pool.
It will be the case.
- Configurable allocation efficiency. For instance, boost::pool will find the least common multiple of page size and block size, which can result in huge allocations in the name of not wasting any memory. Sometimes I might want this, other times I'd be OK wasting a handful of bytes out of every page if it means more reasonably sized initial allocations.
The 'least common multiple' problem present in boost::pool was fixed in a recent version of Boost. Hopefully I will not repeat the same error!
- Configurable alignment. Very useful for eg. AVX instructions, which can require 32-byte data alignment.
I agree; it is a pain that Microsoft's malloc() function still returns 8-byte aligned buffers.
- (Maybe) If you've got an object allocator, have a policy setting that disables calling destructors. Not sure if this belongs in the library due to shooting-own-foot-off potential, but there have been instances where I don't care about destructors because I know all the resources are associated with the pool. In this case an object allocator would become a glorified wrapper for placement new.
Let me complete the API and the implementation before looking into these more 'exotic' features... Thanks for your comments, Étienne

Hi Etienne,
Other questions : - I have defined a pool class which manage fixed sized buffers of some object T. Do we also need a pool class that can manage variable sized buffers of some object T ? - From a pool, I can implement a pool_allocator which inherits from std::allocator, exactly like it was done in the original boost::pool. Do we need something else ?
We currently use the pool to allocate variable-size buffers of an object T, and find it useful despite the speed issues. We used it to replace the alloc/realloc calls of a third-party library which was causing excessive memory fragmentation. Now all the fragmentation is localized in the pool, and it is cleaned up once the call to the third-party lib ends. Nikolay
participants (9)
- Cory Nelson
- DUPUIS Etienne
- Francisco José Tapia
- Ion Gaztañaga
- Jeffrey Lee Hellrung, Jr.
- jiangzuoyan@gmail.com
- Klaim - Joël Lamotte
- Nikolay Mladenov
- Olaf van der Spek