
I have a bunch of data I am trying to store in shared memory: basically a vector of bytes plus a vector of some metadata. Both vectors, in general, grow over time at an unpredictable rate, until someone stops using them. The vector lengths are extremely variable; the total amount of shared memory could be as small as 10K or as large as several hundred megabytes, and I will not know the amount needed beforehand.
From what I understand about Boost shared memory segments, they are not infinitely growable, so I think I have to organize my data into chunks of a more reasonable size and allocate a series of separate shared memory segments with a few chunks in each. (There are other application-specific reasons for me to do this anyway.) So I am trying to figure out a reasonable size of shared memory segment to allocate (probably in the 64K - 1MB range, but I'm not sure).
Does anyone have any information about the overhead of a shared memory segment? Just an order-of-magnitude estimate, e.g. M bytes fixed + N bytes per object allocated + total size of names. Are M and N on the order of 10 bytes, or 10 Kbytes, or what?
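For concreteness, the chunking I have in mind would look roughly like this (the segment names and the 64K size here are placeholders, not final choices):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cstdio>

using namespace boost::interprocess;

int main() {
    // One fixed-size managed segment per group of chunks; more segments
    // would be created as the data grows.
    const std::size_t segment_size = 64 * 1024; // placeholder chunk-segment size
    for (int i = 0; i < 4; ++i) {               // 4 segments just for the demo
        char name[32];
        std::snprintf(name, sizeof(name), "my_data_chunk_%d", i);
        shared_memory_object::remove(name);     // drop any stale segment
        managed_shared_memory seg(create_only, name, segment_size);
        std::printf("%s created, %lu bytes free\n",
                    name, (unsigned long)seg.get_free_memory());
    }
    return 0;
}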

On Thu, May 29, 2008 at 03:42:16PM -0400, Jason Sachs wrote:
> ... a reasonable size of shared memory segment to allocate (probably in the 64K - 1MB range, but I'm not sure).
From the low-level POV:
Modern systems use on-demand allocation, i.e. you can allocate a (for example) 32 MB SHM chunk, but the actual resource usage (RAM) will correspond to what you actually use. For example:

0    1M                  32M
[****|...................]
  |             |
  |             +- unused part
  +- used part of the SHM segment

As long as your program does not touch the unused part (neither reads nor writes it), the actual physical memory usage will be 1M plus a small amount for page tables (worst case: 4 kB of page tables per 4 MB of virtual address space); see the sketch at the end of this message.

This is at least how SYSV SHM works on Solaris 10 (look up DISM, dynamic intimate shared memory); I would expect it to work the same way on recent Linux kernels too. I'm not sufficiently acquainted with the NT kernel to be able to comment on it.

Bottom line: allocate a few large chunks rather than many small ones; modern VM systems should be able to handle it gracefully.

===

If you don't know how much memory you will need in total, how do you handle out-of-memory situations? Alternatively, why not use files instead?
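To illustrate the on-demand behavior, here is a minimal sketch (my own example; the segment name and the sizes are arbitrary) for Linux/Solaris:

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    const size_t total = 32u << 20; // reserve 32 MB of address space
    const size_t used  = 1u << 20;  // but touch only the first 1 MB

    int fd = shm_open("/ondemand_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { std::perror("shm_open"); return 1; }
    if (ftruncate(fd, total) != 0) { std::perror("ftruncate"); return 1; }

    void *p = mmap(0, total, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::memset(p, 0xAB, used); // only these pages become resident

    // Check RSS now (ps/top/pmap): roughly 1 MB resident, not 32 MB.
    sleep(30);

    munmap(p, total);
    close(fd);
    shm_unlink("/ondemand_demo");
    return 0;
}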

Jason Sachs wrote:
> From what I understand about Boost shared memory segments, they are not infinitely growable, so I think I have to organize my data into chunks of a more reasonable size and allocate a series of separate shared memory segments with a few chunks in each. (There are other application-specific reasons for me to do this anyway.) So I am trying to figure out a reasonable size of shared memory segment to allocate (probably in the 64K - 1MB range, but I'm not sure).
> Does anyone have any information about the overhead of a shared memory segment? Just an order-of-magnitude estimate, e.g. M bytes fixed + N bytes per object allocated + total size of names. Are M and N on the order of 10 bytes, or 10 Kbytes, or what?
If you mean the overhead added by named allocations and indexes, use the managed_shared_memory.get_free_memory() function to see how many bytes you have left after creating the managed segment, and again after creating an empty vector.

Take into account that if you fill those vectors alternately, the reallocations needed by the vectors might not be able to take advantage of all the free memory: one vector can end up right in the middle of the segment, and the other vector then cannot use the memory before and after it as a single contiguous block. If you need to minimize the shared memory needed, pre-calculate all the data and then dump it into shared memory in a single pass.

Regards,

Ion
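P.S. A rough sketch of that measurement (the segment name and the 64K size are arbitrary choices for the example):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <iostream>

using namespace boost::interprocess;

typedef allocator<char, managed_shared_memory::segment_manager> CharAlloc;
typedef vector<char, CharAlloc> ShmByteVector;

int main() {
    const std::size_t segment_size = 64 * 1024;
    shared_memory_object::remove("overhead_probe");
    managed_shared_memory seg(create_only, "overhead_probe", segment_size);

    // Fixed overhead: segment bookkeeping consumed before any allocation.
    std::size_t free_after_create = seg.get_free_memory();
    std::cout << "fixed overhead: "
              << segment_size - free_after_create << " bytes\n";

    // Per-object overhead: cost of one named, empty vector.
    ShmByteVector *v = seg.construct<ShmByteVector>("bytes")
                          (CharAlloc(seg.get_segment_manager()));
    std::cout << "empty named vector: "
              << free_after_create - seg.get_free_memory() << " bytes\n";

    seg.destroy_ptr(v);
    shared_memory_object::remove("overhead_probe");
    return 0;
}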
participants (3)
- Ion Gaztañaga
- Jason Sachs
- Zeljko Vrba