"Igor R"
If those requests are scheduled on two different worker threads, they could execute simultaneously, and still cause problems.
Sorry, I forgot that you run one io_service in several threads... Why wouldn't you scale your application using "io_service per CPU" approach, rather than "thread per CPU"?
At first glance, thread per CPU seemed simpler. Also, I was planning on having my read callback handle some commands that may be slow, such as database queries, and I didn't want all the other clients on the same io_service to block while that's happening. But certainly I can change that.
Like this:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/examples.html#boost...
Thanks, I had looked at that briefly; I will take a closer look. So the advantage of io_service per CPU is that, with only one run() thread on each io_service, requests are serialized on that run() thread: all callbacks are queued to the io_service, and any client sessions attached to that io_service are guaranteed that only one of their callbacks will be called at a time. The reference material for "strands" calls this running the io_service in an implicit strand.

If a client session is cancel()ed or close()d, does that clear the callback queue of anything related to that session? Or do I need to be aware of the possibility that the session has since been close()d? For example, if I have two read callbacks queued, and the first causes the connection to close, do I need to worry that if the second tries to write back to the connection, the wrong thing could happen? Or will boost::asio protect against that?

Also, should I be able to get the same per-session request serialization by using strands?

Thanks again! -----Scott.