
1. If your glibc does not have semaphore support (in my case it was built without TLS), the "sem_open" function returns (sem_t*)0 instead of (sem_t*)-1, as described in my system header "semaphore.h". Please correct the file "shared_memory.hpp" to use the SEM_FAILED macro. Defines in "semaphore.h":

/* Value returned if `sem_open' failed. */
#define SEM_FAILED ((sem_t *) 0)
/* Maximum value the semaphore can have. */
#define SEM_VALUE_MAX (2147483647)

2. Unclear situation. I wrote a buggy example program that writes some data to a shared_message_queue in an infinite loop without any timeout. If you stop the program with Ctrl+C, "boost_shmem_shm_global_mutex" stays locked, so the program will not work on the second start. If you include some timeout in the loop, then the program works properly.

Thank you for the good job! \o/

Best Regards,
Dmitry Smirnov
Saint-Petersburg, Russia
Slackware 10.2, gcc-3.4.4, kernel 2.6.13, glibc-2.3.6

Hi Dmitry,
1. If your glibc does not have semaphore support (in my case it was built without TLS), the "sem_open" function returns (sem_t*)0 instead of (sem_t*)-1, as described in my system header "semaphore.h". Please correct the file "shared_memory.hpp" to use the SEM_FAILED macro.
Thanks! It seems that ((sem_t *)-1) was explicitly used and documented in some implementations, but you are right: the Open Group specification says the return value on failure is SEM_FAILED. Corrected in the sandbox CVS.
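For reference, a minimal sketch of the portable check (the helper name open_semaphore is only illustrative, not the actual function in "shared_memory.hpp"):

#include <semaphore.h>
#include <fcntl.h>      // O_CREAT
#include <stdexcept>

// Hypothetical helper: open or create a named POSIX semaphore and detect
// failure with SEM_FAILED instead of a hard-coded (sem_t*)-1.
inline sem_t *open_semaphore(const char *name, unsigned int initial_count)
{
   sem_t *sem = ::sem_open(name, O_CREAT, 0666, initial_count);
   if(sem == SEM_FAILED){   // portable: some glibc builds define SEM_FAILED as (sem_t*)0
      throw std::runtime_error("sem_open failed");
   }
   return sem;
}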
2. Unclear situation. I wrote a buggy example program that writes some data to a shared_message_queue in an infinite loop without any timeout. If you stop the program with Ctrl+C, "boost_shmem_shm_global_mutex" stays locked, so the program will not work on the second start. If you include some timeout in the loop, then the program works properly.
This is strange. The infamous "boost_shmem_shm_global_mutex" global mutex is used to implement atomic system-wide initializations, and it can be left locked if a process crashes during that atomic initialization (that's why I plan to use file locks in Boost.Interprocess, which are guaranteed to be unlocked automatically when a process crashes). With a shared_message_queue, however, this can only happen when opening, creating or destroying one, never while sending or receiving messages, since the global mutex is not used in those cases.

The real problem is that the message queue uses a process-shared mutex and a condition variable constructed in shared memory, and if a process is killed while it holds the mutex, the queue is left in a corrupted state. I can't do anything to solve this, since I would need the OS to unlock all the resources when the owner dies. Even that would not guarantee anything, since the message queue state might still be wrong (the process might have crashed while manipulating internal queue pointers, for example).

I'm sorry to tell you that POSIX asynchronous signals are really difficult to combine with a portable interprocess synchronization library. I would need help from UNIX experts to know what I should do to provide signal-safe synchronization objects. Masking the signals just when entering the message queue functions does not seem very nice, since a process could stay blocked until a message arrives. Any ideas?

Regards,
Ion
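Since the report above involves Ctrl+C in an infinite send loop, one workaround on the application side (not something Boost.Shmem provides) is to trap SIGINT, set a flag and leave the loop normally, so destructors run and the process never dies while holding the shared mutex. A minimal sketch, with the actual queue calls left as comments:

#include <csignal>
#include <cstdio>
#include <unistd.h>   // sleep()

// Flag written by the SIGINT handler; sig_atomic_t is safe to set from a handler.
static volatile std::sig_atomic_t g_stop = 0;

static void on_sigint(int)
{
   g_stop = 1;   // only record the request; do no real work in the handler
}

int main()
{
   std::signal(SIGINT, on_sigint);

   // Hypothetical queue; the real Boost.Shmem calls are omitted in this sketch.
   // shared_message_queue mq(...);

   while(!g_stop){
      // mq.send(buffer, size, priority);
      ::sleep(1);   // small pause, echoing the timeout mentioned above
   }

   // Leaving the loop lets destructors run, so the process never exits
   // while holding the queue's process-shared mutex.
   std::printf("clean shutdown\n");
   return 0;
}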

Hey, Ion! I have tried to think about this problem. Anyway, a correctly written application works fine, and on the other hand the "system" solution, like removing the mutex file from the /dev/shm directory, works too :-)

But at the moment I'm very interested in adding more features to the shared_message_queue class. As I understand it, this queue supports only one client-server connection, so one message can be received only once, no matter how many processes want to read it. It seems that having an internal client counter would be a good idea. The second feature I want to ask you for is more service methods, like getting the number of messages in the queue (useful for profiling applications under heavy data traffic).

Best Regards,
Dmitry Smirnov
Saint-Petersburg, Russia

Hi,
But at the moment I'm very interested in adding more features to the shared_message_queue class. As I understand it, this queue supports only one client-server connection, so one message can be received only once, no matter how many processes want to read it. It seems that having an internal client counter would be a good idea.
You want the queue to have an internal client count so that a message must be read N times before it is removed from the queue? I've based my model on the POSIX mq_xxx functions, which don't offer this possibility. I find a reference-counted message quite dangerous, because a process crash would block the queue indefinitely, and I don't see a strong reason to do this. If you want robust server<->multi-client communication, I would create N message queues, just like a web server has a connection with each web client. If a client crashes, the server and the other clients are still fine, and if a timeout expires you could even erase that client's message queue system-wide, removing the shared memory that was used for it. This multiplies the number of sent/received messages and makes the application more complicated, though.
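A rough sketch of that one-queue-per-client pattern, using the POSIX mq_* calls the model is based on (names, sizes and the single-process demo are made up for illustration; link with -lrt):

#include <mqueue.h>
#include <fcntl.h>
#include <cstdio>
#include <cstring>

int main()
{
   const char *queue_name = "/demo_client_1";  // hypothetical per-client queue name

   mq_attr attr;
   std::memset(&attr, 0, sizeof(attr));
   attr.mq_maxmsg  = 10;     // capacity chosen arbitrarily for the sketch
   attr.mq_msgsize = 128;

   // The server would create one such queue per connected client.
   mqd_t mq = ::mq_open(queue_name, O_CREAT | O_RDWR, 0666, &attr);
   if(mq == (mqd_t)-1){ std::perror("mq_open"); return 1; }

   // "Broadcasting" a message means one mq_send per client queue.
   const char msg[] = "hello client 1";
   ::mq_send(mq, msg, sizeof(msg), 0);

   // Each client reads only from its own queue.
   char buf[128];
   unsigned int prio = 0;
   if(::mq_receive(mq, buf, sizeof(buf), &prio) > 0)
      std::printf("got: %s\n", buf);

   ::mq_close(mq);
   ::mq_unlink(queue_name);  // on timeout or crash the server can simply remove the queue
   return 0;
}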
The second feature I want to ask you for is more service methods, like getting the number of messages in the queue (useful for profiling applications under heavy data traffic).
Interesting, I will try to add it.

Ion
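For comparison, the POSIX API the queue is modelled on already exposes this through mq_getattr and its mq_curmsgs field; a minimal sketch (the queue name is just an example and the queue must already exist; link with -lrt):

#include <mqueue.h>
#include <fcntl.h>
#include <cstdio>

int main()
{
   mqd_t mq = ::mq_open("/demo_client_1", O_RDONLY);
   if(mq == (mqd_t)-1){ std::perror("mq_open"); return 1; }

   mq_attr attr;
   if(::mq_getattr(mq, &attr) == 0){
      // mq_curmsgs is the number of messages currently waiting in the queue.
      std::printf("messages in queue: %ld\n", (long)attr.mq_curmsgs);
   }
   ::mq_close(mq);
   return 0;
}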

But at the moment I'm very interested in adding more features to the shared_message_queue class. As I understand it, this queue supports only one client-server connection, so one message can be received only once, no matter how many processes want to read it. It seems that having an internal client counter would be a good idea.
You want the queue to have an internal client count so that a message must be read N times before it is removed from the queue? I've based my model on the POSIX mq_xxx functions, which don't offer this possibility. I find a reference-counted message quite dangerous, because a process crash would block the queue indefinitely, and I don't see a strong reason to do this. If you want robust server<->multi-client communication, I would create N message queues, just like a web server has a connection with each web client. If a client crashes, the server and the other clients are still fine, and if a timeout expires you could even erase that client's message queue system-wide, removing the shared memory that was used for it. This multiplies the number of sent/received messages and makes the application more complicated, though.
You understood exactly what I want, and it does not matter how it is implemented. Interestingly, you suggested practically the same idea I used in my application a short time ago: in every message I put the name of the shared memory object and remove that object after some timeout, whether it was processed or not.

I'm waiting for new versions in anticipation! :-)

Dima
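For completeness, a sketch of the pattern described above: each message only carries the name of a shared memory object, and the sender removes that object after a fixed timeout whether or not it was processed (names, sizes and the timeout are invented for the sketch; link with -lrt):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

// The message sent through the queue only advertises where the data lives.
struct payload_message
{
   char shm_name[64];   // name of the shared memory object carrying the data
   // ... size, sequence number, etc.
};

int main()
{
   // Sender: create a data block whose name would be put into a payload_message.
   int fd = ::shm_open("/demo_block_42", O_CREAT | O_RDWR, 0666);
   if(fd == -1){ std::perror("shm_open"); return 1; }
   ::ftruncate(fd, 4096);          // reserve some space for the payload
   ::close(fd);

   // ... send a payload_message naming "/demo_block_42" through the queue ...

   ::sleep(5);                     // arbitrary timeout for the sketch
   ::shm_unlink("/demo_block_42"); // remove the object even if nobody read it
   return 0;
}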
participants (2)
- Dmitry Smirnov
- Ion Gaztañaga