
On Fri, Apr 11, 2008 at 10:10:28PM +0200, Ion Gaztañaga wrote:
> it's possible without the collaboration of processes. It's on the to-do list, but I don't see how I could implement it.
POSIX shared memory segments can be grown: enlarge the underlying file with ftruncate(), then mmap() the new chunk with the MAP_FIXED flag at the end of the existing mapping; this may naturally fail, in which case the application is out of luck. (It may help if the initial mapping's starting address is chosen smartly in a platform-specific way.)

As for growing the segment atomically, two things are important to note:

1) there's always a first process that wants to enlarge it;
2) the other processes first must get some knowledge about the segment growth -- this does not happen magically, but is somehow transmitted by the process that has grown the segment.

Point 2) relies on the assumption that a correct program can't know about address X until it has malloc()'d it. With multiple processes this assumption extends to: a process can't know about address X until _some_ process has malloc()'d it _and_ communicated it to the others. (This communication may be limited to just writing some pointer into new memory in an already mapped SHM area.) OTOH, I can't think of a scenario where this assumption doesn't hold (except for HW programming / embedded systems, where there's often some "higher force" which hands you out absolute addresses).
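The ftruncate()+mmap() step could be sketched as below (a minimal sketch, not anything from an existing library; grow_mapping() is a name I made up, and lengths are assumed page-aligned). One caveat: plain MAP_FIXED silently replaces whatever is already mapped at the target address, so this version passes the desired address only as a hint and checks what the kernel actually returned -- that is the "naturally fail" case:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical helper: extend the mapping of `fd` from old_len to new_len
 * bytes so that the new pages land directly after `base`.  old_len and
 * new_len are assumed page-aligned.  Returns 0 on success, -1 when the
 * file can't be grown or the address range after the existing mapping is
 * already occupied (the "out of luck" case). */
static int grow_mapping(int fd, void *base, size_t old_len, size_t new_len)
{
    if (ftruncate(fd, (off_t)new_len) != 0)
        return -1;                          /* could not enlarge the file */

    void *want = (char *)base + old_len;
    /* No MAP_FIXED: pass `want` as a hint only, so an occupied range is
     * detected instead of being silently overwritten. */
    void *got = mmap(want, new_len - old_len, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, (off_t)old_len);
    if (got == MAP_FAILED)
        return -1;
    if (got != want) {                      /* range was taken; back out */
        munmap(got, new_len - old_len);
        return -1;
    }
    return 0;
}
```

On failure the segment file has still been enlarged, which is harmless: another attempt at a different address, or a full remap, can still use the extra pages.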
So, for N processes, have in the initial mapping:

- a barrier initialized with a count of N
- a process-shared mutex

When a process wants to grow the segment (this is written in the context of a SHM malloc() -- allocating memory and returning chunks is written with this in mind):

- lock the mutex [this prevents other processes from concurrently trying to grow the segment]
- try to malloc() some memory _again_ [this serializes memory allocation in case of shortage -- another process might have already enlarged the segment, so this allocation may succeed; when first thinking about this problem long ago, I wished for a useful return status from pthread_mutex_lock() that would indicate whether the mutex was acquired with or without contention]
- if this (repeated) malloc() succeeded, unlock the mutex and exit
- otherwise: grow the segment and fix the current process's mappings
- malloc() memory and save the pointer to return to the app [this allocation must succeed because the segment has just been grown, and we're executing in a critical section; the allocation algorithm must be smart enough NOT to satisfy concurrent malloc()s, which are not protected by this mutex, from the largest available chunk if there are other free chunks]
- signal the other processes to fix THEIR memory mappings
- unlock the mutex
- wait on the barrier [when this returns, all of the processes will have updated their mappings]
- return the new memory chunk to the user; now it's safe to "leak" out data about the newly mapped memory to everybody else

Signaling the other processes must be done asynchronously, and might be done in at least two ways:

- POSIX message queues with event notification (thread/signal)
- storing the (address, length) of the new mapping in the initial SHM chunk and sending a signal to all of the other processes

Signal processing routine [asynchronously invoked, maybe even while a SHM malloc() was executing]:

- get the (address, length) stored by the "first" process, and fix the process's own mapping
- on failure -> bad luck; design some error handling
- on success -> wait on the barrier
- exit the signal handler

As you already mentioned, this requires significant cooperation, but since it is written from the perspective of a SHM malloc() routine, it can be hidden from the program in a library.