Hi,
The main points that I learned are:
· Don’t call close() directly; post() it to the io_service instead.
· Check is_open() before starting new async operations (see the sketch below for both points).
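
A minimal sketch of how those two points might fit together (the session class and the stop()/do_close()/start_read() names here are just made up for illustration; error handling trimmed):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class session : public boost::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_service& io)
        : io_(io), socket_(io) {}

    // Point 1: don't call close() directly from outside; post it so it
    // runs on the io_service like any other handler.
    void stop()
    {
        io_.post(boost::bind(&session::do_close, shared_from_this()));
    }

    void start_read()
    {
        boost::asio::async_read(socket_, boost::asio::buffer(buf_),
            boost::bind(&session::handle_read, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

private:
    void do_close()
    {
        boost::system::error_code ec;
        socket_.close(ec);      // pending operations complete with operation_aborted
    }

    void handle_read(const boost::system::error_code& ec, std::size_t /*n*/)
    {
        if (ec)
            return;             // operation_aborted after a close, or a real error
        // ... process buf_ ...
        if (socket_.is_open())  // point 2: the posted close may already have run
            start_read();       // only then start the next read
    }

    boost::asio::io_service& io_;
    boost::asio::ip::tcp::socket socket_;
    char buf_[512];
};

The post() keeps the close on the same thread(s) that run the completion handlers, which is what makes the is_open() check in handle_read meaningful.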
Btw, would it be ok to call close() in an event handler?
________________________________
From: Boost-users
Thus after processing the successfully read data (or discarding it, if you prefer), you can check socket.is_open() before starting a fresh read.
OK, everything's sorted out now. I like the is_open() suggestion since I don't have to introduce another state variable to stop the loop.
Thanks again!
-- Stian
________________________________
From: Boost-users
That's what I meant. I'll try to be more precise. QC below stands for the io_service's queue contents.
1. io_service dequeues and executes a completed read handler. This handler starts a new async read operation. QC: [] (empty). Pending: async_read.
2. The program enqueues to the io_service a lambda that calls close. QC: [ {close} ]. Pending: async_read.
3. In the meantime (say, during the enqueue), the started async_read operation completed successfully in parallel because it didn't block at all. QC: [ {close} {ReadHandler} ]. Pending: none.
4. io_service dequeues {close}; there is nothing to cancel, and the handle is closed. QC: [ {ReadHandler} ].
5. io_service dequeues {ReadHandler}, which initiates a new read from the closed handle. QC: [].
This assumes that asynchronous operations and notifications are, well, asynchronous. In other words, there is a time window (the enqueueing, dequeueing and execution of {close}) during which async_read can complete and end up non-cancellable, so {ReadHandler} is enqueued with a success status.
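
For concreteness, the handler shape this sequence assumes is roughly the following fragment of a hypothetical session class (names made up, error handling trimmed); it re-arms the read unconditionally, so at step 5 it runs with a success code after the close from step 4 and issues async_read on the already-closed handle:

// Member of a hypothetical session class (socket_ and buf_ are members).
void session::handle_read(const boost::system::error_code& ec, std::size_t /*n*/)
{
    if (ec)
        return;  // not taken in the sequence above: the read completed before the close

    // ... use buf_ ...

    // Step 5: re-arm unconditionally -- here the socket is already closed.
    boost::asio::async_read(socket_, boost::asio::buffer(buf_),
        boost::bind(&session::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}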
What part of the puzzle am I missing?
In #5, before ReadHandler is executed the socket is already closed. Thus after processing the successfully read data (or discarding it, if you prefer), you can check socket.is_open() before starting a fresh read. This will of course be false in the sequence above, at which point you just return instead of starting the read, and then once the handler exits the objects will fall out of existence (if you're using the shared_ptr lifetime pattern). If there's no other work to do at that point then the io_service will also exit naturally.

If the close ended up queued after ReadHandler, then is_open() will still be true, you will start a new read operation, and then either the above occurs (if the read actually completes before the io_service starts executing the close), or the close does find something to abort, which will enqueue ReadHandler with operation_aborted.

There is no race between checking is_open() and starting the next read, because both the ReadHandler and the close occur in the same strand and so can't happen concurrently.
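
To make the strand point concrete, here is a rough sketch under the same assumptions as above (hypothetical session members strand_, socket_ and buf_, where strand_ is a boost::asio::io_service::strand and consume() is a made-up placeholder for the data processing):

// Both the posted close and the read handler go through the same strand,
// so they can never run concurrently: the is_open() check cannot race
// with the close.
void session::request_close()
{
    strand_.post(boost::bind(&session::do_close, shared_from_this()));
}

void session::handle_read(const boost::system::error_code& ec, std::size_t n)
{
    if (ec)
        return;                 // operation_aborted if the close won the race

    consume(buf_, n);           // process (or discard) the data that did arrive

    if (!socket_.is_open())     // the close already ran in this strand
        return;                 // handler exits; the shared_ptr lifetime ends here

    boost::asio::async_read(socket_, boost::asio::buffer(buf_),
        strand_.wrap(boost::bind(&session::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
}

Wrapping the re-armed read's handler in the same strand as the posted close is the design choice that removes the race described above.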