Hi,
I'm looking into implementing an object cache shared between multiple
processes, and the boost::interprocess library naturally seems the best fit
for this.
My problem is how to implement a cache eviction mechanism (LRU or otherwise)
once the shared memory segment has filled up.
The problem gets worse when I start using containers of containers, as
described here:
http://www.boost.org/doc/libs/1_46_1/doc/html/interprocess/allocators_contai...
So I have an allocator that I use everywhere, as in the example:
typedef managed_shared_memory::segment_manager segment_manager_t;
typedef allocator<void, segment_manager_t> void_allocator;
I use it to create a boost::interprocess::map of objects, each of which can
in turn contain a boost::interprocess::map of strings, also in shared memory.
So to recap, I have a map of Objects acting as an "object store", and each
Object can contain a "dictionary" (a map of strings) of unknown size, all of
this living in a single shared memory segment.
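To make it concrete, here is roughly what my typedefs look like (just a
sketch; Object, shm_string, shm_dictionary and object_store are my own names,
and I have left out the other members of Object):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <boost/interprocess/containers/string.hpp>
#include <functional>
#include <string>
#include <utility>

using namespace boost::interprocess;

typedef managed_shared_memory::segment_manager segment_manager_t;
typedef allocator<void, segment_manager_t> void_allocator;

// Strings that live entirely inside the shared memory segment
typedef allocator<char, segment_manager_t> char_allocator;
typedef basic_string<char, std::char_traits<char>, char_allocator> shm_string;

// The per-Object "dictionary": a map of strings in shared memory
typedef std::pair<const shm_string, shm_string> dict_value_t;
typedef allocator<dict_value_t, segment_manager_t> dict_value_allocator;
typedef map<shm_string, shm_string, std::less<shm_string>,
            dict_value_allocator> shm_dictionary;

// Each Object carries its own dictionary (other members omitted)
struct Object
{
    Object(const void_allocator &a) : dictionary(std::less<shm_string>(), a) {}
    shm_dictionary dictionary;
};

// The "object store": a map of Objects keyed by a shared-memory string
typedef std::pair<const shm_string, Object> store_value_t;
typedef allocator<store_value_t, segment_manager_t> store_value_allocator;
typedef map<shm_string, Object, std::less<shm_string>,
            store_value_allocator> object_store;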
My problem is that a shared memory segment has a fixed size, and any
allocate() operation the allocator performs (when adding a new Object, or a
new element to a dictionary) can throw a boost::interprocess::bad_alloc
exception.
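For example, even a small test like the following throws as soon as the
segment runs out of space (again a sketch; the "ObjectCache" name, the 64 KB
size and "some-key" are only for illustration):

managed_shared_memory segment(open_or_create, "ObjectCache", 65536);
void_allocator alloc(segment.get_segment_manager());

object_store *store =
    segment.find_or_construct<object_store>("store")(std::less<shm_string>(), alloc);

try {
    shm_string key("some-key", alloc);
    // Inserting allocates map nodes (and dictionary nodes) in the segment
    store->insert(store_value_t(key, Object(alloc)));
} catch (const boost::interprocess::bad_alloc &) {
    // The segment is full: this is where I would like to evict something and retry
}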
So I'm wondering whether there is a better-suited library for persistent
caching of objects shared between processes, with a way to automatically
evict "old" entries from memory.
Or whether there is a way to implement my own allocator that is more aware of
my design, and would deallocate one of the Objects (and its associated
dictionary) when an allocate() fails.
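Something along these lines is what I have in mind, though I don't know if
it's the right approach (sketch only; erase(store.begin()) stands in for
picking a real least-recently-used victim, which would need extra
bookkeeping):

template <class InsertOp>
void insert_with_eviction(object_store &store, InsertOp do_insert)
{
    for (;;) {
        try {
            do_insert();   // any nested allocation may throw bad_alloc
            return;
        } catch (const boost::interprocess::bad_alloc &) {
            if (store.empty())
                throw;     // nothing left to evict, give up
            // Erasing an entry frees the Object and every node of its
            // dictionary back to the segment manager; then retry the insert.
            store.erase(store.begin());  // placeholder for "erase the LRU entry"
        }
    }
}

I could then wrap every insert (into the object store as well as into the
dictionaries) in a helper like this, but it feels clumsy, hence my question.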
I've been scratching my head over this for a little while now, and any
suggestions or code examples would be really appreciated!
Thank you,
Matthieu.