[thread] multiple producers, single consumer - Boost-specific and generic questions
Hello everyone!
Thanks for all the help I already got. And excuse my poor understanding of
multithreading.
I would like to implement a "multiple producer single consumer" pattern.
My current solution works with signals, which, I think, is not very
efficient. It also doesn't work properly with the code I have (see the end of
this mail). But besides that problem, I have a generic question.
If I worked with buffers and mutexes, there seem to be two choices:
1. Use one buffer, at the cost of deleting from the middle when a producer
terminates (so the consumer won't communicate with a dead thread).
2. Have a buffer per producer, but then also synchronisation per buffer, and
the consumer must look in several buffers; plus I don't know how many
producers there will be.
This all should lead to a user interface with different input methods. So
what would you suggest? It must be speedy and very CPU-friendly.
Thanks in advance for your consideration. I'd also be happy for some suggested
reading. Only one thing: printed books or documents in non-plain-text form
besides .doc, .pdf and .ps aren't of any use to me, for I'm blind and using
a Linux text-based environment.
Kindest regards (For code, see below)
Julien
*** CODE SNIPPET ***
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

boost::mutex my_mtx; // the mutex for passing on events/chars
// assumptions:
// the following variables are passed to the Input object as references:
// int& its_q_size;
// boost::condition& its_in_condition;
// boost::condition& its_out_condition;
// boost::signals2::signal
That's a good point, in that having different queues for each thread would prevent contention there. But with an unknown, open-ended number of threads, I'd shy away from that.

I've developed (at work) a non-blocking and non-slow-atomic-instruction (I need a good term for that) queue that is the other way around -- one writer, many readers. But I've not had the chance to do the opposite. Much of what I learned can carry over, though.

You could use a ring buffer that the various writers access. Each does an atomic increment on the W index to obtain a unique index to use, and then populates that element of the buffer's array. The reader uses the R pointer to consume items.

The trick is, you have several writers all filling their items at the same time, and the R comes along and tries to use one. The reader needs to know that the writer is done filling it. So, the reader sets a flag to indicate "not-ready" after it is done using an element and before changing R. The writer populates the fields and then clears the flag, indicating "ready", last. Now the reader knows to wait if it gets to an element where the flag is still set, even though W has moved ahead.

To preserve the atomicity of the increment-and-read, W is always incrementing and never resets. Instead, the writer does the modulo of the ring buffer size when it uses it.

The other hard part is watching for the empty and full states. Decide how to tell them apart, and how to test for them asynchronously and without any possibility of deadlock. The one I just did like that is specialized and will never be full: I just sized the buffer to the number of entries that are even possible in my application.

Let me know if you want to discuss it further.

--John

TradeStation Group, Inc. is a publicly-traded holding company (NASDAQ GS: TRAD) of three operating subsidiaries, TradeStation Securities, Inc. (Member NYSE, FINRA, SIPC and NFA), TradeStation Technologies, Inc., a trading software and subscription company, and TradeStation Europe Limited, a United Kingdom, FSA-authorized introducing brokerage firm. None of these companies provides trading or investment advice, recommendations or endorsements of any kind. The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer.
On Thu, Sep 10, 2009 at 2:32 PM, John Dlugosz wrote:
> You could use a ring buffer that the various writers access. Each does an atomic increment on the W index to obtain a unique index to use, and then populates that element of the buffer's array. The reader uses the R pointer to consume items. The trick is, you have several writers all filling their items at the same time, and the R comes along and tries to use one. The reader needs to know that the writer is done filling it. So, the reader sets a flag to indicate "not-ready" after it is done using it and before changing R. The writer populates the fields and then clears the flag, indicating "ready", last. Now the reader knows to wait if it gets to an element where the flag is still set, even though W has moved ahead.
If that particular writing thread stalls (or crashes!?), the reader thread may have to wait "forever", even though other writes are finished and waiting. Not impossible to handle, but something to keep in mind. Tony
> To preserve the atomicness of the increment-and-read, the W is always incrementing, and never resets. Instead, the writer has to do the modulo of the ring buffer size when it uses it.
> The other hard part is watching for empty or full states. Decide how to tell them apart, and how to test for them asynchronously and without possibility of a deadlock.
> The one I just did like that is specialized, and will never be full. I just sized the buffer to know how many entries are even possible in my application.
> Let me know if you want to discuss it further.
> --John
> If that particular writing thread stalls (or crashes!?), the reader thread may have to wait "forever", even though other writes are finished and waiting.
> Not impossible to handle, but something to keep in mind.
That's a general issue that is always present. If one thread is in the middle of some elaborate protocol "dance" with other threads, and throws an exception out of there (or just hangs), the other threads will be affected.

I see the particular issue here is fault-tolerance -- one worker fails, but others are still producing data. The area that is sensitive is just: allocate the slot, copy the fields in, being sure to copy the flag last.

I don't expect anything to go wrong when just copying PODs from one struct to another. But perhaps I should elaborate that this "critical region" should indeed be that simple: make the record a simple struct or single pointer, not something that is elaborate to construct/copy.

I'll have to remember to document that if I make this into a reusable template.
On Fri, Sep 11, 2009 at 11:55 AM, John Dlugosz wrote:
> If that particular writing thread stalls (or crashes!?), the reader thread may have to wait "forever", even though other writes are finished and waiting.
> Not impossible to handle, but something to keep in mind.
> That's a general issue that is always present. If one thread is in the middle of some elaborate protocol "dance" with other threads, and throws an exception out of there (or just hangs), the other threads will be affected.
> I see the particular issue here is fault-tolerance -- one worker fails, but others are still producing data. The area that is sensitive is just: allocate the slot, copy the fields in, being sure to copy the flag last.
> I don't expect anything to go wrong when just copying PODs from one struct to another. But perhaps I should elaborate that this "critical region" should indeed be that simple: make the record a simple struct or single pointer, not something that is elaborate to construct/copy.
If you want to handle non-PODs as well, then it might be best to have a parallel array: write the data into a slot in the 'data array', then CAS the index into the 'index array'; i.e. sort of your own custom allocator. I'm playing with these issues currently in a version of my own. Tony
On Fri, Sep 11, 2009 at 4:38 PM, Gottlob Frege wrote:
> On Fri, Sep 11, 2009 at 11:55 AM, John Dlugosz wrote:
> If that particular writing thread stalls (or crashes!?), the reader thread may have to wait "forever", even though other writes are finished and waiting.
> Not impossible to handle, but something to keep in mind.
> That's a general issue that is always present. If one thread is in the middle of some elaborate protocol "dance" with other threads, and throws an exception out of there (or just hangs), the other threads will be affected.
> I see the particular issue here is fault-tolerance -- one worker fails, but others are still producing data. The area that is sensitive is just: allocate the slot, copy the fields in, being sure to copy the flag last.
> I don't expect anything to go wrong when just copying PODs from one struct to another. But perhaps I should elaborate that this "critical region" should indeed be that simple: make the record a simple struct or single pointer, not something that is elaborate to construct/copy.
By the way, the other problem is priority inversion: if Thread 1 is trying to allocate/copy/flag, and Thread 2 is spin-waiting on that, but Thread 2 has higher priority (and they are on the same/only CPU), then you are deadlocked. Or does Thread 2 sleep in the spin? Tony
Hello Julien,
> Thanks for all the help I already got. And excuse my poor understanding of multithreading. I would like to implement a "multiple producer single consumer" pattern. My current solution is working with signals, which, I think, is not very efficient. And also it doesn't work properly with the code I have (see end of this mail). But besides that problem I have a generic question.
you can check out my proposed boost.lockfree library, which contains a multi-writer/multi-reader fifo queue. you can get the sources from my git repository at [1] and the docs at [2] ...

best, tim

[1] git://tim.klingt.org/boost_lockfree.git
[2] http://tim.klingt.org/boost_lockfree/

--
tim@klingt.org
http://tim.klingt.org

Just what the hell is the experimental tradition?
  Morton Feldman
Hello Tim!
Can you estimate the time when your library gets approved as a Boost library? I'm asking because the library I want to write should be easily installable and usable on all the Boost-supported systems. I'll most probably start writing real code sometime next summer or autumn.
Kindest regards and thanks for helping
Julien
P.S.: Don't I know you from LAU/LAD and the Linux Audio Conference?
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
> Can you estimate the time when your library gets approved as a Boost library? I'm asking because the library I want to write should be easily installable and usable on all the Boost-supported systems. I'll most probably start writing real code sometime next summer or autumn.
well, don't ask me; i submitted a review request a few weeks ago but didn't get any response yet ...
> P.S.: Don't I know you from LAU/LAD and the Linux Audio Conference?
yes, we briefly met at zkm a few years ago :)

cheers, tim

--
tim@klingt.org
http://tim.klingt.org

Linux is like a wigwam: no windows, no gates, apache inside, stable.
participants (4)
- Gottlob Frege
- John Dlugosz
- Julien Claassen
- Tim Blechmann