shmem - Allocating enough space. open and allocate.
Hello dear boost users,

I am trying to use the shmem library for the first time. The application centers on passing image frames from the process that acquires the images to a client process via a local shared memory segment. In short, nothing really complicated: a synchronized writer/reader pair. I started by looking at the processA/processB example in the example directory. I am using boost 1.33.1 and shmem 0.93 (the latest available from the boost vault).

If I open the segment with the exact number of bytes needed:
const int memsize = 320*240;
//Create shared memory
if(!segment.open_or_create(shMemName, memsize)){
   std::cout << "error creating shared memory\n";
   return -1;
}
Then I cannot use:
unsigned char* uchar_ptr = (unsigned char*)segment.allocate(num_elements);
with num_elements equal to memsize. I see that I need to pass some extra memory as the second argument to open() (for example 320*240 + 1024) ... How can I know in advance how many bytes are necessary to allocate the space actually needed? In other words, what is the difference between the size passed to open(..) and the size passed to allocate(..)? In this case I just need a raw memory segment ... and in more complex cases? Is there some rule of thumb?

Andrea Carbone
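As a rough sketch of what is being described: with the managed segment interface of current Boost.Interprocess (the library recommended later in this thread), the segment has to be created larger than the payload because the allocator's bookkeeping lives inside it. The segment name frame_segment and the +4096 margin below are made up for illustration, not documented values, and the 2006 vault snapshot may differ in details.

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <iostream>
#include <new>

int main()
{
   using namespace boost::interprocess;

   const std::size_t frame_bytes = 320*240;

   //A managed segment keeps its own bookkeeping (allocation headers, the
   //index of named allocations...) inside the segment, so it has to be
   //created larger than the payload. The +4096 margin is only a guess.
   managed_shared_memory segment(open_or_create, "frame_segment",
                                 frame_bytes + 4096);

   std::cout << "segment size: " << segment.get_size()
             << ", free before allocate: " << segment.get_free_memory() << "\n";

   //Request the full frame; this fails if the margin left for
   //bookkeeping was too small.
   unsigned char *frame = static_cast<unsigned char *>(
      segment.allocate(frame_bytes, std::nothrow));

   if(!frame){
      std::cout << "allocate failed: not enough room after bookkeeping\n";
   }
   else{
      std::cout << "free after allocate: " << segment.get_free_memory() << "\n";
      segment.deallocate(frame);
   }

   shared_memory_object::remove("frame_segment");
   return 0;
}

Printing get_free_memory() before and after the allocation shows that each allocate() consumes a little more than the requested size.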
Hi Andrea,
The application centers on passing image frames from the process that acquires the images to a client process via a local shared memory segment.
Ok.
If I open the segment with the exact number of bytes needed:
const int memsize = 320*240;
//Create shared memory
if(!segment.open_or_create(shMemName, memsize)){
   std::cout << "error creating shared memory\n";
   return -1;
}
Then I cannot use:
unsigned char* uchar_ptr = (unsigned char*)segment.allocate(num_elements);
Because the shared memory segment is not a trivial shm_open + mmap-like shared memory. It has dynamic allocation, a named allocation index... so it uses extra memory. The size passed to open is the shared memory size (well, more or less). Each allocate() call needs to keep track of the allocated size, to be able to release it when the user calls deallocate(). In short, there is no way to know beforehand how much space will be wasted on bookkeeping, since it depends on the number of allocations, the alignment, the number of named allocations, the length of their names, the type of index...

If you need a raw, classic shared memory, I would recommend you use Boost.Interprocess:

http://boost-consulting.com/vault/index.php?&direction=0&order=&directory=Concurrent%20Programming

Interprocess is the official Boost name for Shmem. I will surely remove Shmem from the vault soon because I want people to use Interprocess. In interprocess you can create a classic shared memory segment like this:

http://tinyurl.com/z5gyn

Regards,
Ion
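A rough sketch of a classic, raw shared memory segment using the shared_memory_object / mapped_region interface documented for Boost.Interprocess (the 2006 vault snapshot may differ in details); the segment name grabbed_frame is made up for illustration:

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
#include <iostream>

int main()
{
   using namespace boost::interprocess;

   //A raw shared memory object: no allocator, no named-object index,
   //so truncate(320*240) really gives 320*240 usable bytes.
   shared_memory_object shm(open_or_create, "grabbed_frame", read_write);
   shm.truncate(320*240);

   //Map the whole object into this process' address space.
   mapped_region region(shm, read_write);
   unsigned char *frame = static_cast<unsigned char *>(region.get_address());

   //... the grabber writes the frame bytes here ...
   std::memset(frame, 0, region.get_size());

   std::cout << "mapped " << region.get_size() << " bytes\n";
   return 0;
}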
In short, there is no way to know beforehand how much space will be wasted on bookkeeping, since it depends on the number of allocations, the alignment, the number of named allocations, the length of their names, the type of index...
Yes, I see ...
If you need a raw, classic shared memory, I would recommend you use Boost.Interprocess:
http://boost-consulting.com/vault/index.php?&direction=0&order=&directory=Concurrent%20Programming
wow ... I missed this ....
Interprocess is the official Boost name for Shmem. I will surely remove Shmem from the vault soon because I want people to use Interprocess. In interprocess you can create a classic shared memory segment like this:
That's what I needed ... cool ...
Regards,
Ion

Best,
Andrea
# igaztanaga@gmail.com / 2006-10-09 21:55:23 +0200:
In interprocess you can create a classic shared memory segment like this:
Looks like the "only create" / "only open" comments are swapped, arent' they? -- How many Vietnam vets does it take to screw in a light bulb? You don't know, man. You don't KNOW. Cause you weren't THERE. http://bash.org/?255991
Looks like the "only create" / "only open" comments are swapped, arent' they?
Yes. It's a documentation error. Thanks.

Regards,
Ion
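For reference, the distinction the swapped comments should express, sketched with the Boost.Interprocess shared_memory_object constructors; the segment name my_segment is made up for illustration:

#include <boost/interprocess/shared_memory_object.hpp>

int main()
{
   using namespace boost::interprocess;

   //Only create: throws if a segment named "my_segment" already exists.
   shared_memory_object creator(create_only, "my_segment", read_write);

   //Only open: throws if no segment named "my_segment" exists yet.
   shared_memory_object opener(open_only, "my_segment", read_only);

   shared_memory_object::remove("my_segment");
   return 0;
}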
Hello Ion,

I successfully implemented a program that delivers grabbed images to clients for display purposes ... Really fine ...

But it seems to me that the remove method doesn't work. For example, after killing the grabber process and opening a GUI again to display images, no (ipc) exceptions are thrown and the GUI displays the last grabbed image.

The process that grabs, before quitting, calls:

shared_memory_object::remove(mem_obj_tag.c_str());

(the shmem was created with open_or_create) and the process that reads opens the shared memory in read_only mode.

sincerely,
andrea
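A minimal sketch of the reader side as described above, assuming the shared_memory_object / mapped_region interface of Boost.Interprocess; the segment name grabbed_frame stands in for mem_obj_tag, whose actual value is not shown in the message, and the grabber is assumed to have created and sized the segment already:

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/exceptions.hpp>
#include <iostream>
#include <string>

int main()
{
   using namespace boost::interprocess;
   const std::string mem_obj_tag("grabbed_frame");   //name is illustrative

   try {
      //Reader side: open (never create) the existing segment in read_only mode.
      shared_memory_object shm(open_only, mem_obj_tag.c_str(), read_only);
      mapped_region region(shm, read_only);

      //... display the frame found at region.get_address() ...
      std::cout << "mapped " << region.get_size() << " bytes for display\n";
   }
   catch(const interprocess_exception &e) {
      //Thrown only if no segment with this name exists at open time.
      std::cerr << "cannot open shared memory: " << e.what() << "\n";
      return 1;
   }
   return 0;
}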
Hi Andrea,
I successfully implemented a program that delivers grabbed images to clients for display purposes ... Really fine ...
But it seems to me that the remove method doesn't work.
For example, after killing the grabber process and opening a GUI again to display images, no (ipc) exceptions are thrown and the GUI displays the last grabbed image.

The process that grabs, before quitting, calls:
shared_memory_object::remove(mem_obj_tag.c_str()); (the shmem was created with open_or_create)
and the process that reads opens the shared memory in read_only mode.
Remove does not throw any exception, it just returns true or false. Check whether false is being returned. If the shared memory segment is open while you try to delete it, on some platforms (win32) it may not be possible to delete the segment. This is similar to the standard std::remove function: on Windows it usually calls DeleteFile, which fails if another process has the file open. On Unix, unlink() is called and the file is removed from the filesystem, but processes that already have it open keep working with the now-unnamed file. This is the non-portable problem of trying to emulate shared memory on Windows with the same semantics as on Unix.

I think I should see what Boost.Filesystem does with remove(const path & ph) and do the same. There are 3 possible outcomes:

-> The shared memory segment does not exist
-> The shared memory segment exists but it can't be deleted
-> The shared memory segment exists and it's deleted

The problem with returning a different result for each case is that we must atomically check for existence and remove the file, and that's not easy because it must be atomic with respect to other processes. The Shmem big global lock is not a very good idea, so I will try to think of something. I definitely think I should follow the Boost.Filesystem rationale.

Regards,
Ion
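A minimal sketch of the check suggested above: test the boolean returned by shared_memory_object::remove instead of waiting for an exception. The name grabbed_frame stands in for the mem_obj_tag used in the thread:

#include <boost/interprocess/shared_memory_object.hpp>
#include <iostream>
#include <string>

int main()
{
   using namespace boost::interprocess;
   const std::string mem_obj_tag("grabbed_frame");   //name is illustrative

   //remove() reports failure through its return value, not an exception:
   //false can mean the segment never existed or (on win32) that another
   //process still has it open.
   if(!shared_memory_object::remove(mem_obj_tag.c_str())){
      std::cerr << "could not remove \"" << mem_obj_tag
                << "\" (missing or still in use?)\n";
      return 1;
   }
   return 0;
}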
participants (3)
- Andrea Carbone
- Ion Gaztañaga
- Roman Neuhauser