shmem on multi-user machine
Hi Ion, I've noticed that the "file" /dev/shmem/sem.boost_shmem_shm_global_mutex keeps hanging around after my program finishes. This file has write permission only for the user that created it. When a different user comes along and runs another program that uses shmem, it segfaults on sem_wait(), probably due to the file permissions. Should this global mutex be cleaned up after use? Or could the permissions be changed somehow to let others use it (though that's probably not a good idea if user A and user B run a shmem-based program at the same time)? Thanks for your advice. Cheers, Steve.
Hi Jan,
Sorry, but Ion replied to me privately. To answer your question, it is a
Linux system I'm using. I've fixed the problem by giving each user their own
global mutex to play with.
Regards,
Steve.
On 31/07/06, Jan Stetka wrote:
What platform is that compiled/crashing on?
_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users
Can you publish Ion's answer please?
Thanks
-----Original Message-----
From: boost-users-bounces@lists.boost.org
[mailto:boost-users-bounces@lists.boost.org] On behalf of Steven Wooding
Sent: Tuesday, 01 August 2006 14:34
To: boost-users@lists.boost.org
Subject: Re: [Boost-users] shmem on multi-user machine
Hi Berenguer,
Here are the exchanges Ion and I have been having. Sorry this was not copied to the Boost list.
Steve.
==========================================================
Ion Gaztañaga to me, 26-Jul
I've noticed that the "file" /dev/shmem/sem.boost_shmem_shm_global_mutex keeps hanging around after my program finishes. This file has write permission only for the user that created it. When a different user comes along and runs another program that uses shmem, it segfaults on sem_wait(), probably due to the file permissions.
Should this global mutex be cleaned up after use? Or could the permissions be changed somehow to let others use it (though that's probably not a good idea if user A and user B run a shmem-based program at the same time)?
Umm. I haven't thought about this, but it's certain that using the lock
leads to some problems. I really don't know how to solve this, because
I'm not a UNIX user; do you have a suggestion? The global mutex is used
to implement mutual exclusion when initializing the shared memory from
different processes, so it's created with all permissions (well, I
suppose those are masked by the security mask). However, I see a huge
security problem, so I don't know how I should proceed. Any suggestion
is welcome.
Regards,
Ion
==============================================================
Steven Wooding to Ion, 14:31
Hi Ion,
I've come up with a solution that works OK for us. What I've done is append
the username of the user executing the program that uses shared memory to the
end of the global mutex name (using getenv("LOGNAME")). So each user now has
their own shmem global mutex to play with.
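The workaround described above can be sketched roughly like this (the helper name is illustrative, not part of Shmem's actual code; the USER fallback is an extra precaution, not something Steve mentions):

```cpp
#include <cstdlib>
#include <string>

// Suffix the global mutex name with the login name of the current user,
// so each user gets an independent semaphore file under /dev/shm.
std::string per_user_mutex_name(const std::string &base)
{
   const char *user = std::getenv("LOGNAME");
   if (!user)                      // LOGNAME may be unset (e.g. cron jobs)
      user = std::getenv("USER");  // fall back to USER
   return user ? base + "_" + user : base;
}
```

For a user logged in as "steve", per_user_mutex_name("boost_shmem_shm_global_mutex") would yield "boost_shmem_shm_global_mutex_steve", avoiding the permission clash on the shared semaphore file.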
I've also found that it was too easy to create a shared memory region larger
than the space available on the /dev/shm filesystem. My solution to this is
to add a check of the filesystem that m_shmHnd is on (it should be /dev/shm)
using the fstatfs system call.
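The check described above might look something like this (a minimal sketch, assuming Linux; fits_in_filesystem is an illustrative helper, not the actual patch):

```cpp
#include <sys/vfs.h>   // fstatfs (Linux-specific header)
#include <cstdint>
#include <cstdio>      // tmpfile/fileno, for trying the helper out

// Given the file descriptor of the shared memory file (m_shmHnd in the
// patch), ask the filesystem it lives on (normally /dev/shm) how much
// space is really free, and reject sizes that cannot be backed.
bool fits_in_filesystem(int fd, std::uint64_t requested_bytes)
{
   struct statfs fs;
   if (fstatfs(fd, &fs) != 0)
      return false;  // treat an unqueryable filesystem as "does not fit"
   std::uint64_t avail = std::uint64_t(fs.f_bavail) * std::uint64_t(fs.f_bsize);
   return requested_bytes <= avail;
}
```

Because fstatfs works on the descriptor itself, the check follows whatever filesystem the file actually lives on, rather than hard-coding a mount point.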
Below are my changes against the 0.93 version of shmem. Please feel free to
include, modify or discard these suggestions as you see fit. If they make it
into the official release of shmem, then we won't have to maintain patches
in the future.
I hope I have been of some help.
Cheers,
Steve.
===================================================================
RCS file: boost/shmem/shared_memory.hpp,v
retrieving revision 1.2
retrieving revision 1.4
diff -u -r1.2 -r1.4
--- shared_memory.hpp 2006/07/03 14:14:46 1.2
+++ shared_memory.hpp 2006/08/01 13:06:22 1.4
@@ -31,6 +31,7 @@
# include
==================================================================
Ion Gaztañaga to me
Hi Ion,
I've come up with a solution that works OK for us. What I've done is append the username of the user executing the program that uses shared memory to the end of the global mutex name (using getenv("LOGNAME")). So each user now has their own shmem global mutex to play with.
This does not solve the problem where two different users can share the same segment, but at least it does not block new users.
I've also found that it was too easy to create a shared memory region larger than what was available on the /dev/shm filesystem. My solution to this is to add a check of the filesystem that m_shmHnd is on (should be /dev/shm) using the fstatfs system call.
I didn't know it was possible to create a shared memory region bigger than the available space. Shouldn't ftruncate fail if we try to do this? Thanks for the patch. I will add it ASAP.
Regards,
Ion.
=================================================================
Steven Wooding to Ion, 19:54
Hi Ion,
This does not solve the problem where two different users can share the same segment, but at least it does not block new users.
In my application, the user can adjust the name of the shared memory segment as a run-time variable, so I just told them to add their user name to the end of the name to avoid this.
I didn't know it was possible to create a shared memory region bigger than the available space. Shouldn't ftruncate fail if we try to do this?
No, it doesn't. The only time you know about it is when you actually use the shared memory segment past what's available. Then you get bus errors and the system crashes. Very bad.
Some people on the Boost list seem to want to know the answer to my original question. The discussion between you and me has been private until now. Shall I post our exchange?
Cheers,
Steve.
==================================================================
Ion Gaztañaga to me, 20:10
No, it doesn't. The only time you know about it is when you actually use the shared memory segment past what's available. Then you get bus errors and the system crashes. Very bad.
Umm. That sounds really bad. However, this seems too Linux-specific, since the POSIX shared memory mount point can be anywhere. I would need to investigate this issue further to find out if mmap can be told to reserve all the memory, instead of waiting for bus errors.
Regards,
=====================================================
Steven Wooding to Ion, 21:59
Umm. That sounds really bad. However, this seems too Linux-specific, since the POSIX shared memory mount point can be anywhere. I would need to investigate this issue further to find out if mmap can be told to reserve all the memory, instead of waiting for bus errors.
Doesn't my use of fstatfs, which takes the file descriptor as its input argument, avoid having to know where the mount point is?
Cheers,
Steve.
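One possible answer to Ion's open question (whether mmap can be told to reserve all the memory up front) is to reserve the backing store at creation time. This is a hedged sketch, not anything from the thread: on systems with posix_fallocate(), the filesystem is forced to allocate the blocks immediately, so an over-large segment fails here with an error code instead of crashing later with SIGBUS on first touch.

```cpp
#include <unistd.h>  // ftruncate
#include <fcntl.h>   // posix_fallocate
#include <cstdio>    // tmpfile/fileno, for trying the helper out

// Set the segment's size, then force the blocks to actually be reserved.
// reserve_segment is an illustrative helper, not part of Shmem.
bool reserve_segment(int fd, off_t size_bytes)
{
   if (ftruncate(fd, size_bytes) != 0)
      return false;
   // Returns 0 on success, an errno value (e.g. ENOSPC) on failure.
   return posix_fallocate(fd, 0, size_bytes) == 0;
}
```

Note that glibc emulates posix_fallocate by writing to the file on filesystems without native support, so the reservation is made one way or the other, at the cost of some extra I/O.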
==================================================================
[shmem] Allocation of little objects
Hi,
If you do segment.allocate(3) many times you end up NOT using 100% of the available space but MUCH less. Is that normal and the expected behaviour? Or is this some sort of bug?
Thanks.
Hi Berenguer,
If you do segment.allocate(3) many times you end up NOT using 100% of the available space but MUCH less.
Every dynamic allocation needs to store deallocation information to know the size of the buffer, perform buffer merging, etc. Apart from that, an allocation normally has to fulfill some alignment constraints. In the default Shmem allocation algorithm, 8 bytes are used per allocation to store that information and the alignment is 8 bytes. This means that when you want to allocate 3 bytes, you actually use 16 bytes of memory. That's why you run out of memory.
Remember that the initial size of the segment will never be fully used: it depends on the number of allocations and their sizes. This is exactly the same thing that happens with "operator new": you waste much more memory with many small allocations than with a few big ones.
Regards,
Ion
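The arithmetic Ion describes can be written down as a small helper (the 8-byte header and 8-byte alignment are the defaults he quotes for Shmem's allocator; other algorithms may differ):

```cpp
#include <cstddef>

// Real memory consumed by one allocation: a fixed per-allocation header
// is added, then the total is rounded up to the alignment boundary.
std::size_t real_cost(std::size_t requested,
                      std::size_t header = 8, std::size_t align = 8)
{
   std::size_t raw = requested + header;
   return (raw + align - 1) / align * align;  // round up to a multiple of align
}
```

So real_cost(3) is 16 bytes, as in Ion's example: a 1 MB segment filled with 3-byte allocations holds at most around 65,000 of them, even before any fragmentation.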
Thanks! Great explanation.
-----Original Message-----
From: boost-users-bounces@lists.boost.org
[mailto:boost-users-bounces@lists.boost.org] On behalf of Ion Gaztanaga
Sent: Monday, 07 August 2006 14:37
To: Boost User List
Subject: Re: [Boost-users] [shmem] Allocation of little objects
participants (4)
- Berenguer Blasi
- Ion Gaztañaga
- Jan Stetka
- Steven Wooding