
On Sun, Oct 10, 2010 at 4:57 PM, for-gmane wrote:
My server app needs to be "highly responsive", i.e. it should handle connection attempts with high priority. Currently it has the following problem: if a handler function like the one below takes too much time, then no new connections are handled for as long as that operation lasts (i.e. new connections have to wait). What alternatives are there to solve this problem? (I was thinking of doing the accept in a separate thread, but I am not sure how, or whether that would work.)
void session::handle_read(const boost::system::error_code& error,
                          size_t bytes_transferred)
{
  if (!error)
  {
    // PROBLEM HERE:
    // ... a lengthy job here blocks the whole app
    //     (i.e. new connections have to wait)
    // ... which coding/design alternatives would solve this problem?
    boost::asio::async_write(socket_,
        boost::asio::buffer(data_, bytes_transferred),
        boost::bind(&session::handle_write, this,
            boost::asio::placeholders::error));
  }
  else
  {
    delete this;
  }
}
What I personally do (and have not had issues with *yet*) is create the standard io_service, give it an initial handler that asynchronously waits for incoming connections, then spawn a number of threads equal to the hardware_concurrency value, each just calling run() on the io_service; the main thread then simply joins them all, waiting for them to finish.

In that initial accept handler, on an incoming connection I spawn a new async handler and toss it away (I have it set up so that when the socket dies, so does the handler, whether explicitly or implicitly), then immediately loop back and asynchronously wait for another incoming connection. The handlers wait on the socket for incoming data, send data, etc.

Since run() was called on the io_service in multiple threads, depending on the processor count, I get decently maxed CPU usage with my other worker tasks. Those workers 'pause' themselves on occasion and set up a timer for one millisecond later to be called again, picking up where they left off until their work completes: basically cooperative multithreading inside the pre-emptive multithreading model, since I have a limited thread count.