[Interprocess::Semaphore] Deadlock with multiple producers, one consumer

Hi to all,

I have a buffer with 10 slots; 4 producers put elements into the buffer and a single consumer picks them up. The producers generate elements very quickly, but the consumer picks them up very slowly. I decided to synchronize the producers and the consumer with a boost::interprocess semaphore initialized to 10. Before putting an element into the buffer, a producer executes a wait() on the semaphore; the 4 producers quickly put 10 elements into the buffer, so they all block on the wait(). When the consumer picks an element from the buffer, it performs a post() on the semaphore.

Here is the problem: the post() unlocks only one of the 4 producers. Consider this scenario:

- the producers are all in the wait state; the semaphore counter is 0;
- the consumer picks an element and performs a post(), which unblocks producer_1; the semaphore counter is 1;
- producer_1 is unblocked, but it doesn't produce any element yet; the other producers remain blocked;
- the consumer picks the other 9 elements in the buffer and performs the 9 post() calls on the semaphore, but the semaphore counter becomes 2, 3, 4, ... 10, and so the semaphore never signals the other producers, which remain blocked --> DEADLOCK.

Is there a way to unblock all the producers with the boost::interprocess::semaphore? Or have I taken the wrong approach to the problem?

Thanks to all,
Cosimo Calabrese.

Cosimo Calabrese wrote:
Is there a way to unblock all the producers with the boost::interprocess::semaphore? Or have I taken the wrong approach to the problem?
A semaphore post only wakes a waiting thread if the count was 0. That's the behaviour of a semaphore. If you want to wake up all waiters, use a condition variable and notify_all. Look at any mutex/condition-variable tutorial for an example of multiple producers and consumers.
Ion

Ion Gaztañaga wrote:
A semaphore post only wakes a waiting thread if the count was 0. That's the behaviour of a semaphore. Not of Dijkstra's design it isn't. Conceptually a waiting consumer is immediately given the token and woken up, and further tokens will wake more consumers whether they are already waiting or not.
Looks like a bug to me. James

James Mansion wrote:
Ion Gaztañaga wrote:
A semaphore post only wakes a waiting thread if the count was 0. That's the behaviour of a semaphore. Not of Dijkstra's design it isn't. Conceptually a waiting consumer is immediately given the token and woken up, and further tokens will wake more consumers whether they are already waiting or not.
Looks like a bug to me.
My point is that (http://en.wikipedia.org/wiki/Semaphore_(programming)):

    P(Semaphore s) // Acquire Resource
    {
        wait until s > 0, then s := s-1;
        /* Testing and decrementing s must be atomic to avoid race conditions */
    }

    V(Semaphore s) // Release Resource
    {
        s := s+1; /* must be atomic */
    }

P can only wake a single thread, because the semaphore count represents the free resource count. The original post said:

"the post() unlocks only one of the 4 producers"

and that's logical. If there are 4 waiting threads, you need 4 posts to unlock all of them.

Best,
Ion

igaztanaga@gmail.com wrote:
My point is that (http://en.wikipedia.org/wiki/Semaphore_(programming)):
    P(Semaphore s) // Acquire Resource
    {
        wait until s > 0, then s := s-1;
        /* Testing and decrementing s must be atomic to avoid race conditions */
    }

    V(Semaphore s) // Release Resource
    {
        s := s+1; /* must be atomic */
    }
P can only wake a single thread, because the semaphore count represents the free resource count. The original post said:
"the post() unlocks only one of the 4 producers"
and that's logical. If there are 4 waiting threads, you need 4 posts to unlock all.
The post operation that 'wakes up' is V, not P. If your API allows V(4) to release 4 resources atomically, then 4 consumers in P must be released (if they are waiting, and any left over accrues to allow future consumers not to block). Even if you say V(4) is not supported and it is actually V;V;V;V, with each V atomic individually, then there is still a race between those four posts and any Ps actually getting to run. Either way, you MUST enable 4 Ps to become ready. If the API allows posting multiple resources atomically, then you need to handle this.

James Mansion wrote:
The post operation that 'wakes up' is V, not P.
Ooops, yes, I meant V.
If your API allows V(4) to release 4 resources atomically, then 4 consumers in P must be released (if they are waiting, and any left over accrues to allow future consumers not to block). Even if you say V(4) is not supported and it is actually V;V;V;V, with each V atomic individually, then there is still a race between those four posts and any Ps actually getting to run. Either way, you MUST enable 4 Ps to become ready. If the API allows posting multiple resources atomically, then you need to handle this.
No, the API just allows V(1), and V repeated N times is what I'm suggesting if N resources are freed by the producer. Is it usual to have V(N) in semaphores? I have never seen it. Best, Ion

Forget this response: System V semaphores do have this feature. Interprocess models the POSIX interface, and that's why post is V(1).
Ion, James, thanks to all for the answers. Sorry for the delay in my reply. And sorry for the uppercase in this reply, I've used it only for emphasis. And sorry for the overly long post...

Yes, it's true, I said: "the post() unlocks only one of the 4 producers", but perhaps I explained the problem badly. I don't want to wake up all waiters with a single post(), but only one waiter every time I post() the semaphore, independently of the counter value. The problem is that the interprocess::semaphore doesn't wake up a waiter EVERY TIME I post() the semaphore, but only if the counter is 0. If the buffer has more than one free resource, and I have more than one waiter blocked, EVERY TIME I post() the semaphore it should unblock ONE of the waiters.

I'm working under Windows XP, so I've analyzed the "emulation" implementation of the semaphore interface (the POSIX implementation is a simple wrapper of the POSIX semaphore). The emulation module implements post() like this:

    inline void interprocess_semaphore::post()
    {
        scoped_lock<interprocess_mutex> lock(m_mut);
        if(m_count == 0){
            m_cond.notify_one();
        }
        ++m_count;
    }

- In the cited http://en.wikipedia.org/wiki/Semaphore_(programming), there is no mention of a requirement to check whether the semaphore counter is 0 before releasing a waiter.

- Ion, you've said: "V can only wake a single thread, because the semaphore count represents the free resource count". I perfectly agree with you. But this should be done independently of the counter value, not only when the counter is 0.

- Ion, you've also said: "Interprocess models POSIX interface and that's why post is V(1)". I agree with you if you do V(1), but not if you do:

    if ( semaphore_counter == 0 ) { V(1); }

- I've modeled my consumers-producer problem as described in the cited http://en.wikipedia.org/wiki/Producer-consumer_problem#Using_semaphores:

    procedure consumer() {
        while (true) {
            down(fillCount)
            item = removeItemFromBuffer()
            up(emptyCount)
            consumeItem(item)
        }
    }

But using the interprocess::semaphore, the up() operation correctly unblocks one of the blocked threads, but ONLY if counter == 0.

- From the Boost documentation (http://www.boost.org/doc/libs/1_39_0/doc/html/interprocess/synchronization_m...): "Post: Increments the semaphore count. If any process is blocked, one of those processes is awoken." So, not only when the semaphore counter is 0.

- From the Boost documentation too (http://www.boost.org/doc/libs/1_39_0/doc/html/boost/interprocess/interproces...): "void post(); Increments the interprocess_semaphore count. If there are processes/threads blocked waiting for the interprocess_semaphore, then one of these processes will return successfully from its wait function. [...]" This is exactly the Dijkstra semaphore behaviour; but to correctly describe the interprocess::semaphore, the words "...only if the counter is 0" are missing.

I hope I have explained my point of view.

Best,
Cosimo Calabrese.

Cosimo Calabrese wrote:
    inline void interprocess_semaphore::post()
    {
        scoped_lock<interprocess_mutex> lock(m_mut);
        if(m_count == 0){
            m_cond.notify_one();
        }
        ++m_count;
    }
Yes, I think you are right. post() should unconditionally notify_one(), otherwise, we could post() several times and only wake up one thread. Can you test your code removing the m_count == 0 condition? Thanks, Ion

2009/6/20 Ion Gaztañaga <igaztanaga@gmail.com>:
Cosimo Calabrese wrote:
    inline void interprocess_semaphore::post()
    {
        scoped_lock<interprocess_mutex> lock(m_mut);
        if(m_count == 0){
            m_cond.notify_one();
        }
        ++m_count;
    }
Yes, I think you are right. post() should unconditionally notify_one(), otherwise, we could post() several times and only wake up one thread.
Can you test your code removing the m_count == 0 condition?
Sorry for jumping into the middle of this. But shouldn't it only notify_one() if count is greater than or equal to 0? Not unconditionally. It's possible to initialize the semaphore with a negative count, and in that case a call to wait() should not unblock until the semaphore is 0 or higher.

Zachary Turner wrote:
Sorry for jumping into the middle of this. But shouldn't it only notify_one() if count is greater than or equal to 0? Not unconditionally. It's possible to initialize the semaphore with a negative count, and in that case a call to wait() should not unblock until the semaphore is 0 or higher.
Ummm... Interprocess models POSIX primitives and I see that

    #include <semaphore.h>
    int sem_init(sem_t *sem, int pshared, unsigned int value);

the semaphore value should always be non-negative, something that does not happen in Interprocess:

    interprocess_semaphore(int initialCount);

So I think I should change the Interprocess constructor to take an unsigned int. Best, Ion

2009/6/20 Ion Gaztañaga <igaztanaga@gmail.com>:
Zachary Turner wrote:
Sorry for jumping into the middle of this. But shouldn't it only notify_one() if count is greater than or equal to 0? Not unconditionally. It's possible to initialize the semaphore with a negative count, and in that case a call to wait() should not unblock until the semaphore is 0 or higher.
Ummm... Interprocess models POSIX primitives and I see that
#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
the semaphore value should always be non-negative, something that does not happen in Interprocess:
interprocess_semaphore(int initialCount);
So I think I should change interprocess constructor to an unsigned int.
That works too :) Just out of curiosity why doesn't windows backend just use built-in windows api functions for manipulating semaphores? CreateSemaphore, etc? I haven't done any performance benchmarks myself, but it seems like using native system calls would be faster and more scalable.

Zachary Turner wrote:
2009/6/20 Ion Gaztañaga <igaztanaga@gmail.com>:
Zachary Turner wrote:
Sorry for jumping into the middle of this. But shouldn't it only notify_one() if count is greater than or equal to 0? Not unconditionally. It's possible to initialize the semaphore with a negative count, and in that case a call to wait() should not unblock until the semaphore is 0 or higher. Ummm... Interprocess models POSIX primitives and I see that
#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
semaphore value should be always positive, something that does not happen in Interprocess:
interprocess_semaphore(int initialCount);
So I think I should change interprocess constructor to an unsigned int.
That works too :) Just out of curiosity why doesn't windows backend just use built-in windows api functions for manipulating semaphores? CreateSemaphore, etc? I haven't done any performance benchmarks myself, but it seems like using native system calls would be faster and more scalable.
Because that semaphore couldn't be placed in shared memory and memory-mapped files, like its POSIX equivalent with pshared set to true: int sem_init(sem_t *sem, int pshared, unsigned int value); http://www.opengroup.org/onlinepubs/007908775/xsh/sem_init.html Best, Ion

Ion Gaztañaga wrote:
Because that semaphore couldn't be placed in shared memory and memory-mapped files, like its POSIX equivalent with pshared set to true:
int sem_init(sem_t *sem, int pshared, unsigned int value);
That's not strictly true. Put a unique name in shared memory, and do something to cache the handle in accessing processes or the cost *will* suck. Unfortunately just basing things on POSIX is a pretty bad design decision if you want portability - if you manage it as 'a shared thing' plus handles then it's easier to implement with either, though POSIX is still generally crappy because it's hard to avoid races to initialise the shared thing, and because it's so badly defined what you can put it in. POSIX shared memory? SysV shared memory? A memory-mapped file? Memory-mapped /dev/null? For a memory-mapped file, does the semaphore state persist if the file is closed by all processes? What about closed by all processes, and the system is rebooted? It's a khazi, it really is. James

On Sun, Jun 21, 2009 at 3:15 PM, James Mansion<james@mansionfamily.plus.com> wrote:
Ion Gaztañaga wrote:
Because that semaphore couldn't be placed in shared memory and memory-mapped files, like its POSIX equivalent with pshared set to true:
int sem_init(sem_t *sem, int pshared, unsigned int value);
That's not strictly true. Put a unique name in shared memory, and do something to cache the handle in accessing processes or the cost *will* suck. Unfortunately just basing things on POSIX is a pretty bad design decision if you want portability - if you manage it as 'a shared thing' plus handles then it's easier to implement with either, though POSIX is still generally crappy because it's hard to avoid races to initialise the shared thing, and because it's so badly defined what you can put it in.
POSIX shared memory? SysV shared memory? Memory mapped file? memory mapped /dev/null?
For a memory mapped file, does the semaphore state persist if the file is closed by all processes? What about closed by all processes, and the system is rebooted?
It's a khazi, it really is.
Not sure what a khazi is :) But regardless, I also feel that modelling it after POSIX semaphores is a bit contrary to how other libraries in Boost work, or how Boost libraries are supposed to work in general (correct me if I'm wrong). It seems to me that you should be starting with requirements, and molding the OS primitives to fit the requirements, not molding the requirements to fit specific OS primitives.

For example, does anyone actually care specifically that a semaphore is in shared memory or a memory-mapped file? Or do they just care that it can be accessed from multiple processes? Ok, fine, before you shoot me: I'm sure that some people actually do want it to be in a specific type of memory for whatever reason. But regardless, both OSes provide native support for semaphores that can be used from multiple processes. It seems like the interface should be designed simply to distinguish whether or not it can be used across multiple processes, and make the internals hide the rest.

There may be situations where you really want some OS-specific behavior, but that's what an interprocess::posix namespace could be used for, like in boost::asio where I can use boost::asio::windows::overlapped_ptr.

Zachary Turner wrote:
Not sure what a khazi is :) But regardless, I also feel like modelling it after posix semaphores is a bit contrary to how other libraries in boost work, or how boost libraries are supposed to work in general (correct me if I'm wrong). It seems to me like you should be starting with requirements, and molding the o/s primitives to fit the requirements. Not molding the requirements to fit specific o/s primitives.
How are Boost libraries supposed to work? First of all, the goal of the library was portability; it's easier to model POSIX primitives on Windows than the inverse, without the need for any server daemon. After all, POSIX is supposed to be the standard, isn't it?
For example, does anyone actually care that specifically that a semaphore is in shared memory or a memory mapped file? Or do they just care that it can be accessed from multiple processes? Ok fine, before you shoot me, I'm sure that some people actually do want it to be in a specific type of memory for whatever reason. But regardless, both o/s'es provide native support for semaphores that can be used from multiple processes. It seems like the interface should be designed simply to distinguish whether or not it can be used in multiple processes, and make the internals hide the rest.
There are specific uses for process-shared (in posix sense) primitives. Interprocess also offers named primitives, similar to sem_open functions, but they are also modeled after POSIX primitives. Both have their uses.
There may be situations where you really want some o/s specific behavior, but that's what an interprocess::posix namespace could be used for. Like in boost::asio I can use boost::asio::windows::overlapped_ptr
Ok, Boost libraries are supposed to be the base of standardization, and that requires portability at least between UNIX and Windows. I found POSIX behaviour was more portable. If OS-specific primitives are demanded, I will add this to the to-do list and I'll try to find some time for it. If anyone is an expert on specific OS primitives, contributions are welcome. Best, Ion

Ion Gaztañaga wrote:
How are Boost libraries supposed to work? First of all, the goal of the library was portability; it's easier to model POSIX primitives on Windows than the inverse, without the need for any server daemon. Why do you say that? After all, POSIX is supposed to be the standard, isn't it? It's a standard. It has the supposed advantage of being a de jure standard - but what does that mean in reality? Portability between a lot of bit players? Real-world portability has to mean Windows, MacOS and Linux now, with 'real POSIX' somewhat secondary, which is galling for those of us with sympathy for systems like Solaris, with a history of POSIX compliance foremost and frippery second.
Ok, Boost libraries are supposed to be the base of standardization, and that requires portability at least between UNIX and Windows. I found POSIX behaviour was more portable. 'More portable' between what? Between a bunch of systems with different flavours and a decent-but-not-overwhelming total share of the server market, plus MacOS's bit-part share of the desktop?
Windows and POSIX are sadly rather different even when the APIs are superficially similar. You can fake one on the other, but doing so with the same atomicity is quite hard - even at the level of something as ostensibly simple as pread. There is no substitute for designing for both approaches at once. Still, it's done now, unless there is any real stomach for a second attempt at a shared memory API for Boost. James

James Mansion wrote:
It's a standard. It has the supposed advantage of being a de jure standard - but what does that mean in reality? Portability between a lot of bit players? Real-world portability has to mean Windows, MacOS and Linux now, with 'real POSIX' somewhat secondary, which is galling for those of us with sympathy for systems like Solaris, with a history of POSIX compliance foremost and frippery second.
I think the current approach is portable between Windows, MacOS and Linux. Maybe not as efficient as it should be, but portable anyway. Portable enough to present a shared memory proposal for the standard.
Windows and POSIX are sadly rather different even when the APIs are superficially similar. You can fake one on the other, but doing so with the same atomicity is quite hard - even at the level of something as ostensibly simple as pread. There is no substitute for designing for both approaches at once. Still, it's done now, unless there is any real stomach for a second attempt at a shared memory API for Boost.
I think the current attempt is positive, taking into account all the difficulties you've just mentioned. If that's not enough, I'm open to suggestions, but I definitely need help, because sadly I can only dedicate a small part of my time to Boost. What would you like to see in the library - additional classes for Windows IPC and System V IPC? Since implementing all of those is a heavy task, do you have preferences on what we should implement first? Best, Ion

Ion Gaztañaga wrote:
part of my time to Boost. What would you like to see in the library - additional classes for Windows IPC and System V IPC? Since implementing all of those is a heavy task, do you have preferences on what we should implement first?
To be honest I think the answer is 'neither', and I would like to see any emulation of POSIX APIs removed too. I think it's the wrong approach - it's too low level. You can get apparent flexibility by doing so, but I think it's a mistake.

I think it's better to look at use cases for actual applications and look at what's needed to support application design patterns. The key question really is whether you expect to support sharing between unrelated processes, and why. There seem to me to be two major use cases:

- unrelated processes that open a shared resource - the canonical example being some sort of ISAM database, perhaps;
- related processes sharing a lot of state - say a web server with shared caches and scoreboards, or databases such as Postgres with shared memory for caches, locks and status message queues.

The second of these is A LOT easier to get right, because there is typically a master process that can own the resources, ensure that the workers only attach once the state is defined, and provide a lifetime service. Windows and POSIX are very different in lifetime management, and also in terms of how named shared resources like semaphores and mutexes are initialised - it's hard to avoid a race with initialisation of POSIX system-scope shared resources without using some other sort of master lock.

Personally, I would welcome a framework that helps create multiprocess worker apps (structured a bit like Postgres, I guess) and does it well. That's a much higher-level problem than any attempt to force an alien API on a system, and as such it's easier to get acceptable performance everywhere. Personally I value Windows performance as much as anything else and don't care for low-level emulations. ASIO is an example of something done right, IMO.

I have some code that addresses some of these areas, but it's rather unsuitable for Boost, either through style (I'm much happier with Poco from a style perspective) or overlap.

James

On Tue, Jun 23, 2009 at 3:15 PM, James Mansion<james@mansionfamily.plus.com> wrote:
Ion Gaztañaga wrote:
part of my time to Boost. What would you like to see in the library - additional classes for Windows IPC and System V IPC? Since implementing all of those is a heavy task, do you have preferences on what we should implement first?
To be honest I think the answer is 'neither', and I would like to see any emulation of POSIX APIs removed too. I think it's the wrong approach - it's too low level. You can get apparent flexibility by doing so, but I think it's a mistake.
I think it's better to look at use cases for actual applications and look at what's needed to support application design patterns. The key question really is whether you expect to support sharing between unrelated processes, and why.
There seem to me to be two major use cases: unrelated processes that open a shared resource - the canonical example being some sort of ISAM database, perhaps; and related processes sharing a lot of state - say a web server with shared caches and scoreboards, or databases such as Postgres with shared memory for caches, locks and status message queues.
The second of these is A LOT easier to get right, because there is typically a master process that can own the resources, ensure that the workers only attach once the state is defined, and provide a lifetime service. Windows and POSIX are very different in lifetime management, and also in terms of how named shared resources like semaphores and mutexes are initialised - it's hard to avoid a race with initialisation of POSIX system-scope shared resources without using some other sort of master lock.
Personally, I would welcome a framework that helps create multiprocess worker apps (structured a bit like Postgres, I guess) and does it well. That's a much higher-level problem than any attempt to force an alien API on a system, and as such it's easier to get acceptable performance everywhere. Personally I value Windows performance as much as anything else and don't care for low-level emulations. ASIO is an example of something done right, IMO.
I have some code that addresses some of these areas, but it's rather unsuitable for Boost, either through style (I'm much happier with Poco from a style perspective) or overlap.
You said pretty much what I was trying to say, but couldn't find the right words. :) I also agree that ASIO is really nothing short of amazing. 95% of the use cases are handled by a generic interface that has nothing to do with Windows, POSIX, or any other API, and is not designed with those APIs in mind, but rather designed from the perspective of exposing very general asynchronous functionality to users. For the rare cases where you really need it to work a specific way, there's a posix namespace and a windows namespace providing access to lower-level functionality.

I'm not sure how to design Interprocess according to the same philosophy, because by all rights ASIO is really complex under the hood. And for that matter, it's not as if Interprocess is "unusable", or even unpleasant to use. It just would be nice to have an interface that lives on its own, independent of any other standard or interface.

Along the same lines, I've always felt there was some overlap between Interprocess and Thread, but it's one of those things where it's difficult to decide how to deal with it. In some ways I feel like the two should be merged into a single library called boost::parallel or something.

James Mansion wrote:
Personally, I would welcome a framework that helps create multiprocess worker apps (structured a bit like Postgres, I guess) and does it well. That's a much higher-level problem than any attempt to force an alien API on a system, and as such it's easier to get acceptable performance everywhere. Personally I value Windows performance as much as anything else and don't care for low-level emulations. ASIO is an example of something done right, IMO.
Then there is room for a new Boost library that does not overlap with Interprocess - just as we have an API to create a simple thread, and another library with pools and more advanced concurrency features. I think your approach could be very attractive, but I'm not the right person to collaborate on this; I have no experience with this type of framework. And I think we need some pretty good discussion between concurrency experts before we find a good model for the framework. Any volunteer? Best, Ion

James Mansion wrote:
Ion Gaztañaga wrote:
Because that semaphore couldn't be placed in shared memory and memory-mapped files, like its POSIX equivalent with pshared set to true:
int sem_init(sem_t *sem, int pshared, unsigned int value);
That's not strictly true. Put a unique name in shared memory, and do something to cache the handle in accessing processes or the cost *will* suck.
That would be incredibly slow. We would need to maintain a per-process DB for name-to-handle mapping for each use. I think that's prohibitively expensive compared to a few atomic operations.
Unfortunately just basing things on POSIX is a pretty bad design decision if you want portability - if you manage it as 'a shared thing' plus handles then it's easier to implement with either, though POSIX is still generally crappy because it's hard to avoid races to initialise the shared thing, and because it's so badly defined what you can put it in.
Which interface do you consider adequate for portability?
For a memory mapped file, does the semaphore state persist if the file is closed by all processes? What about closed by all processes, and the system is rebooted?
Of course. Everything you map in a file ends up in the file; if you write wrong data, it's your problem. This might be unacceptable for some, but it has some pretty good uses. For named resources (sem_open...), Unix resources have kernel lifetime and they need to be unlinked, just like files. This has some good use cases but also problems, and I'm open to suggestions. Achieving Windows lifetime (reference-counted) semantics in the presence of process crashes is really impossible (unless you develop a server IPC process, like Wine does), and that was a widespread complaint against the pre-accepted Interprocess library. Best, Ion

Yes, I think you are right. post() should unconditionally notify_one(), otherwise, we could post() several times and only wake up one thread.
Can you test your code removing the m_count == 0 condition?
Well, I've removed the m_count == 0 condition and it seems to work properly. Every time a protected resource is freed, a notify_one() is performed and a blocked thread is unblocked. Would you consider including this correction in the next Boost release? Best, Cosimo Calabrese.

Cosimo Calabrese wrote:
Yes, I think you are right. post() should unconditionally notify_one(), otherwise, we could post() several times and only wake up one thread.
Can you test your code removing the m_count == 0 condition?
Well, I've removed the m_count == 0 condition and it seems to work properly. Every time a protected resource is freed, a notify_one() is performed and a blocked thread is unblocked.
Would you consider including this correction in the next Boost release?
Yes, boost 1.40 will have this correction.
Best, Cosimo Calabrese.
Best, Ion

Hi Ion,
Yes, boost 1.40 will have this correction.
I've looked for the correction, but I didn't find it... I've looked here: http://svn.boost.org/svn/boost/branches/release/boost/interprocess/sync/emul... Is this the right place to find the new release? I'm a newbie to Boost's svn... Regards, Cosimo Calabrese.

Cosimo Calabrese escribió:
Hi Ion,
Yes, boost 1.40 will have this correction.
I've looked for the correction, but I didn't find it...
I've looked here: http://svn.boost.org/svn/boost/branches/release/boost/interprocess/sync/emul...
Is this the right place to find the new release? I'm a newbie to Boost's svn...
You are right, the change was lost somewhere during my local changes. I've fixed it in trunk and release. Thanks! Ion

Ion Gaztañaga wrote:
No, the API just allows V(1), and V repeated N times is what I'm suggesting if N resources are freed by the producer. Is it usual to have V(N) in semaphores? I have never seen it.
Well, the Windows ReleaseSemaphore operation does, off the top of my head. So does the older semop facility on UNIX (NB: not sem_post). From the Open Group: "If /sem_op/ is a positive integer and the calling process has alter permission, the value of /sem_op/ is added to /semval/." James

Cosimo Calabrese wrote:
Is there a way to unlock all the producers with the boost::interprocess::semaphore? Or I have mistaken the approach of the problem?
If a semaphore count represents free resources, shouldn't post be performed by the producer to signal available resources? You can try this: http://en.wikipedia.org/wiki/Producer-consumer_problem#Using_semaphores

    semaphore fillCount = 0
    semaphore emptyCount = BUFFER_SIZE

    procedure producer() {
        while (true) {
            item = produceItem()
            down(emptyCount)
            putItemIntoBuffer(item)
            up(fillCount)
        }
    }

    procedure consumer() {
        while (true) {
            down(fillCount)
            item = removeItemFromBuffer()
            up(emptyCount)
            consumeItem(item)
        }
    }

where down is "wait" and up is "post".

Best,
Ion
participants (5)
- Cosimo Calabrese
- igaztanaga@gmail.com
- Ion Gaztañaga
- James Mansion
- Zachary Turner