[asio] schedule work on all threads of a single io_service::run() pool?
Greetings --

Is there a way to ask the io_service to distribute a particular handler to every thread that is running the service? My use case is per-thread watchdogs, and I need the worker threads to ping the watchdog on some regular interval (e.g., every 5 seconds).

I didn't see anything like this in the docs, and I didn't see anything that looked promising when I looked through the source. (I'll freely admit that the ASIO source is beyond my ability to comprehend it without much more study, however.)

I can approximate it by having my worker threads do something like this (pseudocode):

    while ( true )
    {
        boost::system::error_code ec;
        io_service::poll_one( ec );
        if ( ec == stopped )
            break;
        watchdog::ping();
        sleep( 500ms ); // trying to balance lag and wakeups
    }

But that has the obvious problems of lag and more wakeups than I really need.

To make this hack work, I think I need some calls that probably don't exist; instead of that "sleep" call, I would like to be able to do something like a timed wait on a condition variable:

    io_service::all_threads_cond.timed_wait( 5s );

That way, I would only wake up every 5 seconds (to feed the watchdog), unless the io_service sent a "notify_one" to the condition variable and this thread happened to be picked. Extra pings aren't much of an issue, but it would be nice to minimize them.

Anyway. I'll continue digging, but if anyone has suggestions, I'd love to hear them.

Thanks in advance!

Best regards,
Anthony Foiani
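A minimal sketch of that approximation with real Boost.Asio calls; watchdog::ping() here is just a stub standing in for whatever the actual per-thread watchdog interface is, not part of Asio:

    #include <boost/asio.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <boost/thread.hpp>

    // Stub standing in for the real per-thread watchdog interface.
    namespace watchdog { inline void ping() { /* feed the watchdog */ } }

    void worker_loop(boost::asio::io_service& io)
    {
        while (!io.stopped())
        {
            boost::system::error_code ec;

            // Run at most one ready handler; returns immediately if none.
            io.poll_one(ec);

            // Feed this thread's watchdog...
            watchdog::ping();

            // ...then nap briefly (the lag-vs-wakeups trade-off above).
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
    }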
Is there a way to ask the io_service to distribute a particular handler to every thread that is running the service? My use case is per-thread watchdogs, and I need the worker threads to ping the watchdog on some regular interval (e.g., every 5 seconds).
No, io_service threads are expected to be used for scalability purposes only, and the application logic is expected to be "decoupled" from the threading. What you can do instead is to use multiple io_service's, instead of multiple threads running 1 io_service (see io_service-per-CPU example).
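For reference, a condensed sketch of that shape, loosely after the io_service-per-CPU ("HTTP server 2") example that ships with Asio; the names and thread count here are illustrative only:

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/thread.hpp>
    #include <vector>

    typedef boost::shared_ptr<boost::asio::io_service> io_service_ptr;
    typedef boost::shared_ptr<boost::asio::io_service::work> work_ptr;

    static void run_service(io_service_ptr io) { io->run(); }

    int main()
    {
        std::size_t n = boost::thread::hardware_concurrency();

        std::vector<io_service_ptr> services;
        std::vector<work_ptr> work;   // keeps each run() from returning early
        boost::thread_group threads;

        for (std::size_t i = 0; i < n; ++i)
        {
            io_service_ptr io(new boost::asio::io_service);
            work.push_back(work_ptr(new boost::asio::io_service::work(*io)));
            services.push_back(io);

            // One thread per io_service, instead of N threads on one service.
            threads.create_thread(boost::bind(&run_service, io));
        }

        // Hand incoming work to services[i]->post(...), e.g. round-robin.

        threads.join_all();
        return 0;
    }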
Igor --
Thanks for the quick reply.
Igor R
Is there a way to ask the io_service to distribute a particular handler to every thread that is running the service? My use case is per-thread watchdogs, and I need the worker threads to ping the watchdog on some regular interval (e.g., every 5 seconds).
No, io_service threads are expected to be used for scalability purposes only, and the application logic is expected to be "decoupled" from the threading.
Ok, that makes sense. I think I had confused myself into thinking that the per-thread "heartbeat" isn't really application logic, so much as low-level plumbing...
What you can do instead is to use multiple io_service's, instead of multiple threads running 1 io_service (see io_service-per-CPU example).
Hm. Let me stare at that example for a bit.

[Ponders]

I think I understand what you're getting at. I want to set up an io_service-per-thread, with each io_service having a deadline_timer to handle the heartbeat. The only other complicated bit is selecting an io_service to handle an incoming request.

It's a bit irksome to have to trade off the scalability to get this extra feature. I wonder if I can somehow do a double dispatch of sorts, or maybe do some accounting to see if I can make sure I always select an idle io_service if possible.

Thanks again for the reply.

Best regards,
Anthony Foiani
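A minimal sketch of that heartbeat piece, assuming one io_service per worker thread and the hypothetical watchdog::ping() from the first message:

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>

    // Stub standing in for the real per-thread watchdog interface.
    namespace watchdog { inline void ping() { /* feed the watchdog */ } }

    // Re-arming heartbeat handler. Because each worker thread runs its own
    // io_service, the ping always happens on that thread.
    void heartbeat(boost::asio::deadline_timer* timer,
                   const boost::system::error_code& ec)
    {
        if (ec)
            return;   // timer cancelled, io_service shutting down

        watchdog::ping();

        // Schedule the next beat 5 seconds after the previous expiry.
        timer->expires_at(timer->expires_at() + boost::posix_time::seconds(5));
        timer->async_wait(boost::bind(&heartbeat, timer, _1));
    }

    // Per worker thread, roughly:
    //
    //     boost::asio::io_service io;
    //     boost::asio::deadline_timer timer(io, boost::posix_time::seconds(5));
    //     timer.async_wait(boost::bind(&heartbeat, &timer, _1));
    //     io.run();   // runs jobs posted to this thread plus the heartbeat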
Anthony Foiani
I want to set up an io_service-per-thread, with each io_service having a deadline_timer to handle the heartbeat. The only other complicated bit is selecting an io_service to handle an incoming request.
Then there's this solution (Andrés's answer):

  http://stackoverflow.com/questions/12166513/boostasio-thread-pools-and-threa...

(or: http://preview.tinyurl.com/alot5eo )

If I replace his run_one with poll_one, I think that covers what I'm looking for. I think it solves the scalability concern, because threads that are currently working on other tasks will not be woken by the central pool's notify.

Just in case someone else finds this in the archives...

Best,
Anthony Foiani
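To make that concrete for the archives, a rough sketch of the kind of combination being described (this is not the code from the linked answer, just one way the pieces could fit together):

    #include <boost/asio.hpp>

    // Each worker owns a private io_service (holding its heartbeat timer, as
    // sketched earlier) and shares a job io_service with the rest of the pool.
    void worker_loop(boost::asio::io_service& shared_jobs,
                     boost::asio::io_service& private_io)
    {
        while (!shared_jobs.stopped())
        {
            boost::system::error_code ec;

            // Take at most one ready shared job; poll_one() returns
            // immediately instead of blocking the way run_one() would.
            if (shared_jobs.poll_one(ec) > 0)
                continue;

            // Nothing shared to do: block in the private service until the
            // heartbeat timer fires (at most ~5s) or a dispatcher posts a
            // wake-up handler directly to this thread.
            private_io.run_one(ec);
        }
    }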
I want to set up an io_service-per-thread, with each io_service having a deadline_timer to handle the heartbeat. The only other complicated bit is selecting an io_service to handle an incoming request.
Can't you just "round-robin" them?
It's a bit irksome to have to trade off the scalability to get this extra feature.
Why would io_service-per-core give worse scalability than thread-per-core?
Igor --
Thanks again for the reply.
Igor R
Can't you just "round-robin" them?
I think that's a problem if some jobs are slower than others. As an example (with the time axis going down the list):

    Request     Thread 1        Thread 2        Thread 3
    (Type)      Queue / Work    Queue / Work    Queue / Work
    --------    ------------    ------------    ------------
    A (Slow)            A
    B (Fast)            A               B
    C (Fast)            A               B               C
    D (Fast)    D     / A                               C
    E (Fast)    D     / A               E
    F (Fast)    D     / A               E               F
    G (Fast)    D,G   / A                               F

At this point, I have two idle thread/services, and one working thread/service with two more tasks queued up. So if I just round-robin across all existing threads/services, I can get fast jobs piled up "behind" slow jobs.

By comparison, if I have the infrastructure to only assign to idle threads/services, it looks like this:

    Request     Thread 1        Thread 2        Thread 3
    (Type)      Queue / Work    Queue / Work    Queue / Work
    --------    ------------    ------------    ------------
    A (Slow)            A
    B (Fast)            A               B
    C (Fast)            A               B               C
    D (Fast)            A               D               C
    E (Fast)            A               D               E
    F (Fast)            A               F               E
    G (Fast)            A               F               G

Only tasks A and G are still being run, and they are in parallel; the other fast tasks are completed.
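A rough sketch of the "only assign to idle threads/services" accounting described above; nothing here is an Asio facility, the dispatcher class and its book-keeping are made up purely for illustration:

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/thread/mutex.hpp>
    #include <vector>

    // Prefers an idle io_service, otherwise the least-loaded one. "Load" is
    // simply the number of handlers posted through here that have not
    // finished yet.
    class least_loaded_dispatcher
    {
    public:
        explicit least_loaded_dispatcher(
                std::vector<boost::asio::io_service*> const& services)
            : services_(services), pending_(services.size(), 0) {}

        template <typename Handler>
        void dispatch(Handler handler)
        {
            std::size_t chosen;
            {
                boost::mutex::scoped_lock lock(mutex_);
                chosen = 0;
                for (std::size_t i = 1; i < pending_.size(); ++i)
                    if (pending_[i] < pending_[chosen])
                        chosen = i;       // an idle service (count 0) wins
                ++pending_[chosen];
            }
            services_[chosen]->post(
                boost::bind(&least_loaded_dispatcher::run<Handler>,
                            this, chosen, handler));
        }

    private:
        template <typename Handler>
        void run(std::size_t index, Handler handler)
        {
            handler();
            boost::mutex::scoped_lock lock(mutex_);
            --pending_[index];
        }

        std::vector<boost::asio::io_service*> services_;
        std::vector<std::size_t> pending_;
        boost::mutex mutex_;
    };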
Why would io_service-per-core give worse scalability than thread-per-core?
It's not about io_service-per-core vs. thread-per-core; it's about whether it's io_service-per-thread or io_service-across-multiple-threads. One io_service with multiple threads allows idle threads to obtain work as soon as it becomes available; having an io_service with only one thread of execution doesn't seem to offer that.

Unless I'm missing the point, which is (as always) very possible! :)

Anyway, thanks again, and please don't hesitate to hit me with the clue-by-four if I'm being dense.

Best regards,
Anthony Foiani
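For comparison, a minimal sketch of the single-io_service, many-threads shape being described here:

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/thread.hpp>

    static void run_service(boost::asio::io_service* io) { io->run(); }

    int main()
    {
        boost::asio::io_service io;

        // Keep run() from returning while the queue is momentarily empty.
        boost::asio::io_service::work work(io);

        // N threads all block inside the same io_service; whichever thread
        // is idle picks up the next handler as soon as it is posted.
        boost::thread_group threads;
        for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
            threads.create_thread(boost::bind(&run_service, &io));

        // ... post handlers with io.post(...) from anywhere ...

        // io.stop();   // when shutting down
        threads.join_all();
        return 0;
    }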
participants (2)
- Anthony Foiani
- Igor R