[serialization] Transactional persistence ?
I'm looking for an STL-like library that would let me do something
like the following:

tp::database data;
tp::map<int, std::string> map1(data);
tp::list<float> list1(data);
...

tp::transaction t;
map1.start_transaction(t);
list1.start_transaction(t);

map1[5] = "hello";      // does NOT commit to container or database
list1.push_back(1.0);   // does NOT commit either
t.commit();             // Finally commits to container and database.
                        // Without this call, t would call rollback
                        // in destructor.

Does anyone know of such a library ? If not, I guess I'll have to do some writing.

If I do write it, it seems that using boost serialization would save a lot of the work. But I've looked at it and it seems to be missing a couple things I need. For one, it seems to require that you serialize the entire container each time you change one element. Perhaps I'm wrong ?

The other problem is that I don't see anything related to transactions within the library. This may even prevent me from using the library to implement the actual serialization of each element, as I wouldn't be able to guarantee that the data was NOT written.

Unless one can hook up transactional archives to boost serialization ?

Any advice or pointers would be greatly appreciated.
Buster wrote:
I'm looking for an STL-like library that would let me do something like the following:
tp::database data;
tp::map<int, std::string> map1(data);
tp::list<float> list1(data);
...

tp::transaction t;
map1.start_transaction(t);
list1.start_transaction(t);

map1[5] = "hello";      // does NOT commit to container or database
list1.push_back(1.0);   // does NOT commit either
t.commit();             // Finally commits to container and database.
                        // Without this call, t would call rollback
                        // in destructor.
Does anyone know of such a library ? If not, I guess I'll have to do some writing.
If I do write it, it seems that using boost serialization would save a lot of the work. But I've looked at it and it seems to be missing a couple things I need. For one, it seems to require that you serialize the entire container each time you change one element. Perhaps I'm wrong ?
The approach I use is to use Berkeley DB for transactional persistence of each element/object in the database, and boost::serialization for converting C++ objects to and from the array of bytes that is required by Berkeley DB. That is, I serialize at the granularity of individual objects in the database, rather than serializing the whole database contents in one go.
The other problem is that I don't see anything related to transactions within the library. This may even prevent me from using the library to implement the actual serialization of each element, as I wouldn't be able to guarantee that the data was NOT written.
I serialize an object to an in-memory byte vector first before saving to the database, so that (along with Berkeley DB) guarantees all-or-nothing (i.e. atomic) changes to the database.
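A minimal sketch of this per-object approach, assuming Berkeley DB's C++ API (db_cxx.h) and a boost::serialization text archive; the helper names to_bytes/save_record and the int key layout are illustrative, not taken from Mick's code, and the Db handle is assumed to have been opened in a transactional environment (DB_INIT_TXN):

#include <db_cxx.h>                          // Berkeley DB C++ API
#include <boost/archive/text_oarchive.hpp>
#include <sstream>
#include <string>

// Serialize one object (assumed Serializable) to an in-memory byte buffer.
// If serialization throws, nothing has touched the database yet.
template <class T>
std::string to_bytes(const T& obj)
{
    std::ostringstream os;
    boost::archive::text_oarchive oa(os);
    oa << obj;                               // explicit, per-object serialization
    return os.str();
}

// Store one serialized object under `key` inside a Berkeley DB transaction.
template <class T>
void save_record(DbEnv& env, Db& db, int key, const T& obj)
{
    std::string bytes = to_bytes(obj);       // serialize before touching the DB

    DbTxn* txn = 0;
    env.txn_begin(0, &txn, 0);
    try {
        Dbt k(&key, sizeof(key));
        Dbt v(const_cast<char*>(bytes.data()),
              static_cast<u_int32_t>(bytes.size()));
        db.put(txn, &k, &v, 0);              // only this element is written
        txn->commit(0);                      // all-or-nothing change
    }
    catch (...) {
        txn->abort();                        // roll back; the database is unchanged
        throw;
    }
}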
Unless one can hook up transactional archives to boost serialization ?
Any advice or pointers would be greatly appreciated.
Mick Hollins wrote:
The approach I use is to use Berkeley DB for transactional persistence of each element/object in the database, and boost::serialization for converting C++ objects to and from the array of bytes that is required by Berkeley DB. That is, I serialize at the granularity of individual objects in the database, rather than serializing the whole database contents in one go.
Thanks, sounds like an excellent approach. I guess the only downside is that boost::serialization can't detect duplicates between elements, but I would imagine most implementations are unlikely to have many of those.
On 15/12/05, Buster wrote:
map1[5] = "hello";      // does NOT commit to container or database
list1.push_back(1.0);   // does NOT commit either
t.commit();             // Finally commits to container and database.
                        // Without this call, t would call rollback
                        // in destructor.
http://www.cuj.com/documents/s=8000/cujcexp1812alexandr/alexandr.htm

The scopeguard trick will work in any case where you have a rollback function that is nofail -- such as erase in a map or pop_back in a list. For more complex tasks you'll need a more complex solution, but ScopeGuard is quite simple and elegant for many.

I wrote an implementation you're free to use or imitate or whatever:
http://www.uploadthis.co.uk/uploads/me22/guard.hpp

- Scott McMurray
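For reference, here is a minimal sketch of the ScopeGuard idea described above. This is not the guard.hpp implementation; the class name, the dismiss() interface, and the persisted_ok flag are all illustrative:

#include <functional>
#include <map>
#include <string>

// Runs a no-fail rollback action on scope exit unless dismiss() is called,
// i.e. unless the "transaction" committed.
class scope_guard {
public:
    explicit scope_guard(std::function<void()> rollback)
        : rollback_(rollback), active_(true) {}
    ~scope_guard() { if (active_) rollback_(); }
    void dismiss() { active_ = false; }           // keep the change
    scope_guard(const scope_guard&) = delete;
    scope_guard& operator=(const scope_guard&) = delete;
private:
    std::function<void()> rollback_;
    bool active_;
};

int main()
{
    std::map<int, std::string> map1;

    map1[5] = "hello";
    scope_guard undo([&map1] { map1.erase(5); }); // map::erase is a nofail rollback

    bool persisted_ok = true;                     // stand-in for the real write
    if (persisted_ok)
        undo.dismiss();                           // commit: skip the rollback
}   // if dismiss() was never called, ~scope_guard erases key 5 again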
Buster wrote:
For one, it seems to require that you serialize the entire container each time you change one element. Perhaps I'm wrong ?
Nope, you're correct. This is a fundamental feature of the library.
The other problem is that I don't see anything related to transactions within the library. This may even prevent me from using the library to implement the actual serialization of each element, as I wouldn't be able to guarantee that the data was NOT written. Unless one can hook up transactional archives to boost serialization ?
Nothing is written to an archive until one explicitly invokes the appropriate operator.

Robert Ramey
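To illustrate the point, a hedged sketch using boost::serialization text archives (the std::map contents and the output file names are just for illustration): nothing reaches the stream until operator<< is invoked, and you decide whether to stream the whole container or an individual element.

#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/map.hpp>       // serialization support for std::map
#include <boost/serialization/string.hpp>    // serialization support for std::string
#include <fstream>
#include <map>
#include <string>

int main()
{
    const std::map<int, std::string> m = { {5, "hello"} };

    // Whole-container archive: the entire map is written in one go.
    {
        std::ofstream f("whole.txt");
        boost::archive::text_oarchive oa(f);
        oa << m;                             // nothing was written until this line
    }

    // Per-element archive: only the element explicitly streamed is written.
    {
        std::ofstream f("element.txt");
        boost::archive::text_oarchive oa(f);
        const std::string& value = m.at(5);
        oa << value;                         // just this value, not the container
    }
}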
participants (4)

- Buster
- me22
- Mick Hollins
- Robert Ramey