
On 23/06/2010 18:13, Matt Cupp wrote:
Hi,
1. I looked at the "files" at /proc/sys/kernel/shm*:

   * shmall - 4294967296 (4 GB)
   * shmmax - 68719476736 (68 GB)
   * shmmni - 4096

2. I called the ipcs -lm command:

   ------ Shared Memory Limits --------
   max number of segments = 4096
   max seg size (kbytes) = 67108864
   max total shared memory (kbytes) = 17179869184
   min seg size (bytes) = 1
From what I can tell, those settings indicate that I should be able to allocate enough shared memory for my purposes. So I created a stripped-down program that creates large amounts of data in shared memory:
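A minimal sketch of that kind of test, using shared_memory_object and mapped_region, might look like the following; the object name "BigSharedMemory" and the 8 GB request are only illustrative assumptions, not the original program:

// Sketch: create a POSIX shared memory object, grow it to a large size,
// map it and touch it so the pages are actually backed.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
#include <iostream>

int main()
{
   using namespace boost::interprocess;

   // Remove any leftover object from a previous run, then create a fresh one.
   shared_memory_object::remove("BigSharedMemory");
   shared_memory_object shm(create_only, "BigSharedMemory", read_write);

   // Request a large size; a silently failing truncation would bite here.
   const offset_t size = offset_t(8) * 1024 * 1024 * 1024;  // 8 GB (illustrative)
   shm.truncate(size);

   // Map the whole object and write to it.
   mapped_region region(shm, read_write);
   std::cout << "mapped " << region.get_size() << " bytes\n";
   std::memset(region.get_address(), 0, region.get_size());

   shared_memory_object::remove("BigSharedMemory");
   return 0;
}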
Those are limits for System V shared memory. Interprocess uses POSIX shared memory, so those settings don't tell us much about the limits on your system, unless CentOS implements POSIX shared memory on top of System V. I think Linux uses a special filesystem for this, but you'll need to check the CentOS documentation to be sure.

I've checked that Interprocess does catch truncation errors and, in that case, re-truncates the memory to a tiny size (ftruncate *might* report an error if the filesystem does not allow sizes over 2 GB).

Check whether the call to shared_memory_object::truncate() succeeds when the shared memory is created. If it returns an error and that error is not propagated, it's an Interprocess bug. If it does not return any error, then the OS is not correctly reporting that the requested truncation failed, so it's not an Interprocess issue, I guess. Maybe an additional check could be added for OSes that don't correctly notify truncation errors (if the error is in the OS, which is something we don't know yet).

Best,

Ion
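P.S. Something like this would show whether the truncation itself is failing. Just a sketch: the object name and the 8 GB request are made up, and I'm assuming truncate() reports failure by throwing interprocess_exception.

// Sketch: check explicitly whether the POSIX shared memory truncation succeeds.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/exceptions.hpp>
#include <iostream>

int main()
{
   using namespace boost::interprocess;

   shared_memory_object::remove("TruncateCheck");

   try {
      shared_memory_object shm(create_only, "TruncateCheck", read_write);

      // If the OS rejects the size, truncate() should throw interprocess_exception.
      // If it "succeeds" but the size is wrong, the error is below Interprocess.
      shm.truncate(offset_t(8) * 1024 * 1024 * 1024);   // request 8 GB (illustrative)

      offset_t actual = 0;
      if (shm.get_size(actual))
         std::cout << "truncate reported success, size is now " << actual << " bytes\n";
   }
   catch (const interprocess_exception &e) {
      std::cerr << "truncate failed: " << e.what()
                << " (native error " << e.get_native_error() << ")\n";
   }

   shared_memory_object::remove("TruncateCheck");
   return 0;
}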