[serialization] custom allocation/deletion

Hi all -

Is there a way to use custom allocation while loading from an archive? I'm thinking of passing an allocator along with the archive during deserialization. Adding to this, is it possible to attach a custom deleter to a shared_ptr during deserialization? I'm thinking of using this with Boost.MPI: if I send a shared_ptr over the wire with some kind of deleter, the same deleter object can't be used on the other side, since it lives in a different address space.

Thanks,
Brian
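[An editorial sketch, not code from the thread: the reason the deleter can't cross the wire is that a pool deleter necessarily captures a pointer to a process-local pool. The `Foo`/`FooPool` names are made up for illustration; the receiving process would have to build a fresh shared_ptr whose deleter points at its own pool.]

    #include <cassert>
    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Foo { int value = 0; };

    // A process-local pool: the deleter returns objects to the free list
    // instead of freeing them.
    class FooPool {
    public:
        std::shared_ptr<Foo> acquire() {
            Foo *p;
            if (free_.empty()) {
                p = new Foo;
            } else {
                p = free_.back();
                free_.pop_back();
            }
            // The deleter captures `this` -- valid only in this process,
            // which is exactly why it cannot be serialized and reused
            // in another address space.
            return std::shared_ptr<Foo>(p, [this](Foo *q) { free_.push_back(q); });
        }
        std::size_t free_count() const { return free_.size(); }
        ~FooPool() { for (Foo *p : free_) delete p; }
    private:
        std::vector<Foo *> free_;
    };

    int main() {
        FooPool pool;
        {
            std::shared_ptr<Foo> f = pool.acquire();
            f->value = 42;
            assert(pool.free_count() == 0);
        } // last reference dropped: object returns to the pool, not deleted
        assert(pool.free_count() == 1);
        return 0;
    }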

Brian Budge wrote:
Hi all -
Is there a way to use custom allocation while loading from an archive? I'm thinking like passing an allocator along with the archive during deserialization?
I'm guessing that this would not be possible without adding a new customization point to the library. Looks like an oversight to me.
Adding to this, is it possible to add a custom deleter to a shared_ptr during deserialization? I am thinking of using this with boost mpi, and if I send a shared_ptr over the wire with some kind of deleter, the same deleter object can't be used on the other side since it's in a different address space.
I don't think this is true. The new object is created in the new address space. The object on the "new" side is not the same object on the "other" side. Robert Ramey
Thanks, Brian

On Jun 6, 2012 9:43 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
Brian Budge wrote:
Hi all -
Is there a way to use custom allocation while loading from an archive? I'm thinking like passing an allocator along with the archive during deserialization?
I'm guessing that this would not be possible without adding a new customization point to the library. Looks like an oversight to me.
Any idea how much work something like this might be?
Adding to this, is it possible to add a custom deleter to a shared_ptr during deserialization? I am thinking of using this with boost mpi, and if I send a shared_ptr over the wire with some kind of deleter, the same deleter object can't be used on the other side since it's in a different address space.
I don't think this is true. The new object is created in the new address space. The object on the "new" side is not the same object on the "other" side.
Hmmm. Maybe the question should have been about how serialization deals with the shared_ptr deleter. Obviously both the pointer and the deleter will have to exist in the same address space. I suppose (if allocator passing were allowed) that custom shared_ptr deserialization would be required. The case I am specifically thinking about is a pool allocator: when the shared_ptr destructor is called, it invokes the deleter on the pointer, returning the object to the pool. In the MPI case, a unique pool exists on each node. Thanks. Brian

Brian Budge wrote:
On Jun 6, 2012 9:43 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
The case I am specifically thinking about is a pool allocator: when the shared_ptr destructor is called, it invokes the deleter on the pointer, returning the object to the pool. In the MPI case, a unique pool exists on each node.
right. I don't see any problem with the custom deleter. If a custom deleter is specified for a shared pointer, I would expect it to work as it always does. More of a problem is the allocator. Allocation is currently an internal detail of the serialization library, so I'm not sure how easy it would be to override. On the other hand, since the STL collections have an allocator parameter, it may be that this is already handled automatically by the serialization library. This would require more research into how the library allocates new objects. I realize I wrote this code, but it was a number of years ago and I don't remember how it works.
Thanks. Brian
_______________________________________________ Boost-users mailing list Boost-users@lists.boost.org http://lists.boost.org/mailman/listinfo.cgi/boost-users
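[An editorial sketch of Robert's point about STL allocator parameters, not code from the thread: the allocator is part of a container's type, so when a deserializer default-constructs a `std::vector<T, A>` and inserts loaded elements, allocation automatically goes through `A` with no help from the archive. The `CountingAlloc` name is made up; it just records how many elements it handed out.]

    #include <cassert>
    #include <cstddef>
    #include <vector>

    static std::size_t g_allocated = 0;

    // Minimal C++11-style allocator that counts allocations.
    template <class T>
    struct CountingAlloc {
        using value_type = T;
        CountingAlloc() = default;
        template <class U> CountingAlloc(const CountingAlloc<U> &) {}
        T *allocate(std::size_t n) {
            g_allocated += n;
            return static_cast<T *>(::operator new(n * sizeof(T)));
        }
        void deallocate(T *p, std::size_t) { ::operator delete(p); }
    };
    template <class T, class U>
    bool operator==(const CountingAlloc<T> &, const CountingAlloc<U> &) { return true; }
    template <class T, class U>
    bool operator!=(const CountingAlloc<T> &, const CountingAlloc<U> &) { return false; }

    int main() {
        std::vector<int, CountingAlloc<int>> v;   // the "loaded" container
        for (int i = 0; i < 100; ++i) v.push_back(i);  // as a load loop would
        assert(g_allocated >= 100);  // every element came through CountingAlloc
        return 0;
    }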

On Thu, Jun 7, 2012 at 9:45 AM, Robert Ramey <ramey@rrsd.com> wrote:
Brian Budge wrote:
On Jun 6, 2012 9:43 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
The case I am specifically thinking about is a pool allocator: when the shared_ptr destructor is called, it invokes the deleter on the pointer, returning the object to the pool. In the MPI case, a unique pool exists on each node.
right. I don't see any problem with the custom deleter. If a custom deleter is specified for a shared pointer, I would expect it to work as it always does.
Well, shared_ptr is not templatized on the custom deleter; it stores it type-erased under the hood. The deleter is accessed through this standalone function:

    template<class D, class T> D * get_deleter(shared_ptr<T> const & p);

I don't see how serialization of the custom deleter could work. Perhaps it could work if serialization and deserialization were performed in the same process, but that wouldn't be the case between processes (saving to a file for a later time, or with MPI). It seems that in this case, the default deserialization of shared_ptr cannot work with custom deleters.

On the other hand, if "custom" allocation were used, then the default deserialization into a shared_ptr already would not work: the container of the shared_ptr would have to manually serialize and deserialize the contents of the actual pointer, and then build a shared_ptr anyway. So perhaps it's a larger issue than the custom deleter.
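[An editorial illustration of the `get_deleter` point, not code from the thread: because the deleter's type is erased, `std::get_deleter<D>` only returns non-null when `D` matches the exact type the shared_ptr was constructed with, so generic serialization code has no way to discover it. `Widget` and `return_to_pool` are made-up names.]

    #include <cassert>
    #include <memory>

    struct Widget { int n = 0; };

    void return_to_pool(Widget *w) { delete w; }  // stand-in for a pool deleter

    int main() {
        std::shared_ptr<Widget> p(new Widget, &return_to_pool);

        // Exact deleter type: get_deleter finds it.
        void (**d)(Widget *) = std::get_deleter<void (*)(Widget *)>(p);
        assert(d != nullptr && *d == &return_to_pool);

        // Any other type: nullptr -- the deleter is invisible.
        assert(std::get_deleter<int>(p) == nullptr);
        return 0;
    }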
More of a problem is the allocator. Allocation is currently a detail in the serialization library. As such, I'm not sure how easy to override it would be. On the other hand, since the STL collections have an allocator parameter, it might be that this is already addressed automatically by the serialization library. This would require more research into how the library allocates new object. I realize I wrote this code, but it was a number of years ago and I don't remember how it works.
Okay, fair enough :) What I want might actually be even simpler than what I said. I essentially want to be able to pass contextual data along the entire deserialization process. With

    struct Bar {
        Foo *foo;
        int numFoo;
    };

this could enable changing something like

    template <class Archive>
    void load(Archive &ar, Bar &b, const unsigned int version) {
        ar >> b.numFoo;
        b.foo = new Foo[b.numFoo];
        ar.load_binary(b.foo, b.numFoo * sizeof(Foo));
    }

to

    template <class Archive, typename Context>
    void load(Archive &ar, Bar &b, const unsigned int version, Context &ctx) {
        ar >> b.numFoo;
        b.foo = ctx.template get_buffer<Foo>(b.numFoo);
        ar.load_binary(b.foo, b.numFoo * sizeof(Foo));
    }

In this case, the Context might be a buffer object pool, a set of pools for different types, or really anything contextual. Of course you could do this with global objects, but if we want to keep these in logical allocation groups, so that all memory can be flushed once operation on a context is done, that won't work. Thanks again for your comments. Brian

Brian Budge wrote:
On Thu, Jun 7, 2012 at 9:45 AM, Robert Ramey <ramey@rrsd.com> wrote:
I essentially want to be able to pass contextual data along the entire deserialization process. With

    struct Bar {
        Foo *foo;
        int numFoo;
    };

this could enable changing something like

    template <class Archive>
    void load(Archive &ar, Bar &b, const unsigned int version) {
        ar >> b.numFoo;
        b.foo = new Foo[b.numFoo];
        ar.load_binary(b.foo, b.numFoo * sizeof(Foo));
    }

to

    template <class Archive, typename Context>
    void load(Archive &ar, Bar &b, const unsigned int version, Context &ctx) {
        ar >> b.numFoo;
        b.foo = ctx.template get_buffer<Foo>(b.numFoo);
        ar.load_binary(b.foo, b.numFoo * sizeof(Foo));
    }

In this case, the Context might be a buffer object pool, a set of pools for different types, or really anything contextual. Of course you could do this with global objects, but if we want to keep these in logical allocation groups, so that all memory can be flushed once operation on a context is done, that won't work.
Consider something like this:

    class A {
        ...
    };

    class A_PLUS {
        // includes extra data used only for serialization
        const A * m_aptr;
        extradata m_t;
        A_PLUS(const A * aptr, extradata t) :
            m_aptr(aptr), m_t(t)
        {}
    };

    int main() {
        std::stringstream os;
        binary_oarchive oa(os);
        A a;
        extradata t;
        A_PLUS aplus(&a, t);
        oa << aplus;
    }
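[An editorial, Boost-free sketch of the wrapper idea above, not code from the thread: `A_PLUS` pairs the object pointer with context that exists only for the duration of the save/load call. Here the "archive" is just a stringstream; with Boost.Serialization the same shape would get its own save/load overloads. All names are illustrative.]

    #include <cassert>
    #include <sstream>
    #include <string>

    struct A { int x = 0; };

    // Wrapper: pointer to the real object plus per-call context.
    struct A_PLUS {
        A *aptr;
        std::string context;  // extra data used only during (de)serialization
        A_PLUS(A *p, std::string c) : aptr(p), context(std::move(c)) {}
    };

    void save(std::ostream &os, const A_PLUS &w) {
        os << w.context << ' ' << w.aptr->x << '\n';
    }

    void load(std::istream &is, A_PLUS &w) {
        std::string tag;
        is >> tag >> w.aptr->x;
        // The context travels with the wrapper, not with A itself.
        assert(tag == w.context);
    }

    int main() {
        std::stringstream ss;
        A a; a.x = 7;
        save(ss, A_PLUS(&a, "pool0"));

        A b;
        A_PLUS in(&b, "pool0");
        load(ss, in);
        assert(b.x == 7);
        return 0;
    }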
Thanks again for your comments. Brian
participants (2)
- Brian Budge
- Robert Ramey