I am not an asio expert by any means, though I've dabbled. YMMV.
On Wed, Jul 22, 2009 at 8:25 PM, Alex Black wrote:
Does this code look ok? Specifically, does this look like a good way to have a fixed # of threads service requests, and have each thread asynchronously write out the response to its client?
Seems like the right thing to do.
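For what it's worth, the pattern I've seen for a fixed-size pool is simply N threads all calling run() on the same io_service, usually with an io_service::work object so run() doesn't return while the service happens to be idle. A rough sketch (the names here are mine, not from your code):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <vector>

int main()
{
    boost::asio::io_service io;

    // Keeps run() from returning while there are momentarily no pending handlers.
    boost::asio::io_service::work work(io);

    // Fixed pool: N threads, all servicing the same io_service.
    std::vector<boost::shared_ptr<boost::thread> > pool;
    for (int i = 0; i < 4; ++i)
    {
        pool.push_back(boost::shared_ptr<boost::thread>(new boost::thread(
            boost::bind(&boost::asio::io_service::run, &io))));
    }

    // ... set up the acceptor and start accepting here; eventually:

    io.stop();
    for (std::size_t i = 0; i < pool.size(); ++i)
        pool[i]->join();

    return 0;
}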
In the code below, I left out HandleHttpRequest; that function parses the request, does some work, and writes out the response 8KB at a time, using async_write.
Note: the threads don't wait until the previous async_write is done; they each just keep calling async_write until all their data is out (but they are careful to make sure their buffers are around for long enough).
Based on what I've read here recently, I understand that each async_write on a socket has to be done in succession: you shouldn't start the next one until the previous one has completed. This may make a difference on systems with more cores, even if you aren't seeing any problems now. So you probably need a completion handler for the async_write that issues successive async_writes until everything is written.
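Something along these lines is what I mean: a completion handler that starts the next chunk only once the previous write has finished. The class and member names here (CConnection, m_response, m_offset) are just mine for illustration, not your code:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <algorithm>
#include <string>

// Hypothetical connection class, just to show the chaining.
class CConnection : public boost::enable_shared_from_this<CConnection>
{
public:
    explicit CConnection(boost::asio::io_service& io)
        : m_socket(io), m_offset(0) {}

    boost::asio::ip::tcp::socket& Socket() { return m_socket; }

    // Assumes the CConnection is held in a shared_ptr (for shared_from_this).
    void SendResponse(const std::string& response)
    {
        m_response = response;   // keep the buffer alive in the object itself
        m_offset = 0;
        if (!m_response.empty())
            WriteNextChunk();
    }

private:
    void WriteNextChunk()
    {
        std::size_t chunk =
            std::min<std::size_t>(8192, m_response.size() - m_offset);

        // Only one async_write is ever outstanding on this socket; the next
        // chunk is started from HandleWrite once this one has completed.
        boost::asio::async_write(m_socket,
            boost::asio::buffer(m_response.data() + m_offset, chunk),
            boost::bind(&CConnection::HandleWrite, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

    void HandleWrite(const boost::system::error_code& ec,
                     std::size_t bytes_transferred)
    {
        if (ec)
            return;                        // give up on this connection

        m_offset += bytes_transferred;     // advance only after completion
        if (m_offset < m_response.size())
            WriteNextChunk();              // successive write, not concurrent
    }

    boost::asio::ip::tcp::socket m_socket;
    std::string m_response;
    std::size_t m_offset;
};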
[...]
void CBaseWebServer::StartAsync(int port)
{
    [...]
    for ( int i = 0; i < numberOfThreads; i++ )
    {
        // Fire up a thread to run the IO service
        shared_ptr<thread> pThread = shared_ptr<thread>(
            new thread(bind(&CBaseWebServer::RunIoService, this, i)));
N threads entering this->RunIoService...
[...]
void CBaseWebServer::RunIoService(int threadId)
{
    m_pPerThreadId.reset( new int(threadId) );
N threads resetting what appears to be a single shared pointer; I'm assuming it's a shared_ptr. The problem is that a shared_ptr *instance* isn't thread-safe, so having N threads independently resetting and assigning it is going to cause problems (the reference counting *is* thread-safe, but each thread needs its own shared_ptr, each pointing to the same thing). Having said that, I don't see the purpose of the pointer anyway, as it isn't used in the code sample.

I'm not sure about the error handling on accept errors; I might have it call StartAccept from HandleAccept regardless of the error, though there might be some errors I'd give up on.
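To make that last point concrete, here's roughly what I'd do. The acceptor and socket members are my guesses at your class's shape, and the request handling is stubbed out, so treat it as a sketch rather than a drop-in:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>

// A cut-down sketch, not your actual class: just enough to show the
// accept -> handle -> accept-again loop and where I'd bail out.
class CBaseWebServer
{
public:
    CBaseWebServer(boost::asio::io_service& io, int port)
        : m_ioService(io),
          m_acceptor(io, boost::asio::ip::tcp::endpoint(
                             boost::asio::ip::tcp::v4(), port))
    {
        StartAccept();
    }

private:
    typedef boost::shared_ptr<boost::asio::ip::tcp::socket> SocketPtr;

    void StartAccept()
    {
        SocketPtr pSocket(new boost::asio::ip::tcp::socket(m_ioService));
        m_acceptor.async_accept(*pSocket,
            boost::bind(&CBaseWebServer::HandleAccept, this, pSocket,
                boost::asio::placeholders::error));
    }

    void HandleAccept(SocketPtr pSocket,
                      const boost::system::error_code& error)
    {
        if (error == boost::asio::error::operation_aborted)
            return;                  // shutting down: don't re-arm the accept

        if (!error)
        {
            // hand pSocket off to request handling (your HandleHttpRequest)
        }

        StartAccept();               // keep accepting despite most errors
    }

    boost::asio::io_service& m_ioService;
    boost::asio::ip::tcp::acceptor m_acceptor;
};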