Hi, I have followed this thread for the past few days and learned a lot about using boost::asio. Thanks a lot. The main points I took away are:

* Don't call close() directly, but post() it.
* Check is_open() before starting new async operations.

By the way, would it be OK to call close() from within an event handler?

I have been using boost::asio for about a year in several projects, and in each of them I had different bugs in my code, some of which were really hard to debug. There are so many things to keep in mind. I tried boost::coroutine once, but had so much trouble with it that for the next project I went back to regular handlers. At first I used async_read_some(); later I discovered async_read() and async_read_until(). I have the feeling of approaching the 'optimal' solution step by step, but I also think there is still some road ahead. Is there, somewhere on the internet or in the Boost documentation, a kind of best-practices guide for using boost::asio?

73, Mario

From: Boost-users [mailto:boost-users-bounces@lists.boost.org] On behalf of Stian Zeljko Vrba via Boost-users
Sent: Thursday, 1 February 2018 08:51
To: boost-users@lists.boost.org
Cc: Stian Zeljko Vrba; Gavin Lambert
Subject: Re: [Boost-users] asio: cancelling a named pipe client
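Not an authoritative best-practices guide, but the two rules above can be sketched with a toy, standard-library-only model of the io_service handler queue. FakeHandle and FakeIoService below are invented stand-ins, not asio types; a single-threaded run() loop plays the role of the event loop:

```cpp
#include <deque>
#include <functional>

// Invented stand-in for a socket/pipe handle: only tracks open state.
struct FakeHandle {
    bool open = true;
    void close()         { open = false; }
    bool is_open() const { return open; }
};

// Invented single-threaded stand-in for io_service's handler queue.
struct FakeIoService {
    std::deque<std::function<void()>> q;
    void post(std::function<void()> h) { q.push_back(std::move(h)); }
    void run() {
        while (!q.empty()) {
            auto h = std::move(q.front());
            q.pop_front();
            h();
        }
    }
};

// Rule 1: post() the close instead of calling it from outside the loop.
// Rule 2: re-check is_open() before starting each new async operation.
// Runs a read loop that requests a close after `close_after` reads.
int reads_completed(int close_after) {
    FakeIoService io;
    FakeHandle pipe;
    int reads = 0;
    std::function<void()> read_handler = [&] {
        if (!pipe.is_open()) return;          // rule 2: closed, stop the loop
        ++reads;
        if (reads == close_after)
            io.post([&] { pipe.close(); });   // rule 1: close via post()
        io.post(read_handler);                // stands in for the next async_read()
    };
    io.post(read_handler);
    io.run();
    return reads;                             // loop stops exactly at close_after
}
```

Note that the close here does run from inside a handler (the posted lambda), which is the safe place for it: it is serialized with all the other handlers.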
Thus after processing the successfully read data (or discarding it, if you prefer), you can check socket.is_open() before starting a fresh read.
OK, everything's sorted out now. I like the is_open() suggestion since I don't have to introduce another state variable to stop the loop.
Thanks again!
-- Stian
________________________________
From: Boost-users
That's what I meant. I'll try to be more precise. QC below stands for the io_service's queue contents.
1. io_service dequeues and executes a completed read handler. This handler starts a new async read operation...
   QC: [] (empty). Pending: async_read.
2. The program enqueues to io_service a lambda that calls close.
   QC: [ {close} ]. Pending: async_read.
3. In the meantime (say, during the enqueue), the started async_read operation completed successfully in parallel because it didn't block at all.
   QC: [ {close} {ReadHandler} ]. Pending: none.
4. io_service dequeues {close}; there is nothing to cancel, and the handle is closed.
   QC: [ {ReadHandler} ]
5. io_service dequeues {ReadHandler}, which initiates a new read from the closed handle.
   QC: []
This assumes that asynchronous operations and notifications are, well, asynchronous. In other words, there is a time window (the en-/dequeueing and execution of {close}) during which async_read can complete and end up non-cancellable, so {ReadHandler} is enqueued with a success status.
What part of the puzzle am I missing?
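The five steps can be replayed with a toy, standard-library-only model of the handler queue (FakeHandle and FakeIoService are invented stand-ins, not asio types): {close} is queued first, the read that completed in parallel queues its success handler behind it, and a guard-free ReadHandler then goes on to start a read on the already-closed handle.

```cpp
#include <deque>
#include <functional>

// Invented stand-in for a socket/pipe handle: only tracks open state.
struct FakeHandle {
    bool open = true;
    void close()         { open = false; }
    bool is_open() const { return open; }
};

// Invented single-threaded stand-in for io_service's handler queue.
struct FakeIoService {
    std::deque<std::function<void()>> q;
    void post(std::function<void()> h) { q.push_back(std::move(h)); }
    void run() {
        while (!q.empty()) {
            auto h = std::move(q.front());
            q.pop_front();
            h();
        }
    }
};

// Replays steps 2-5. Returns true if ReadHandler ended up initiating a
// read on a handle that was already closed.
bool read_started_on_closed_handle() {
    FakeIoService io;
    FakeHandle pipe;
    bool bad_read = false;
    io.post([&] { pipe.close(); });   // step 2: QC: [ {close} ]
    io.post([&] {                     // step 3: QC: [ {close} {ReadHandler} ]
        // ReadHandler runs with a success status; without any guard it
        // blindly starts the next read, so record whether the handle
        // was already closed at that point (step 5).
        bad_read = !pipe.is_open();
    });
    io.run();                         // steps 4 and 5
    return bad_read;                  // true: the race described above
}
```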
In #5, before ReadHandler is executed the socket is already closed. Thus after processing the successfully read data (or discarding it, if you prefer), you can check socket.is_open() before starting a fresh read. This will of course be false in the sequence above, at which point you just return instead of starting the read, and then once the handler exits the objects will fall out of existence (if you're using the shared_ptr lifetime pattern). If there's no other work to do at that point then the io_service will also exit naturally.

If the close ended up queued after ReadHandler, then is_open() will still be true, you will start a new read operation, and then either the above occurs (if the read actually completes before it starts executing the close), or the close does find something to abort and this will enqueue ReadHandler with operation_aborted.

There is no race between checking is_open() and starting the next read, because both ReadHandler and the close occur on the same strand and so can't happen concurrently.

_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
https://lists.boost.org/mailman/listinfo.cgi/boost-users
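Both interleavings described above can be checked with the same kind of toy queue model (FakeHandle and FakeIoService are invented stand-ins for the asio types, and the single-threaded run() loop plays the role of the strand): with the is_open() guard in place, a new read is started only when {ReadHandler} is dequeued before {close}.

```cpp
#include <deque>
#include <functional>

// Invented stand-in for a socket/pipe handle: only tracks open state.
struct FakeHandle {
    bool open = true;
    void close()         { open = false; }
    bool is_open() const { return open; }
};

// Invented single-threaded stand-in for handlers running on one strand:
// they are executed one at a time, never concurrently.
struct FakeIoService {
    std::deque<std::function<void()>> q;
    void post(std::function<void()> h) { q.push_back(std::move(h)); }
    void run() {
        while (!q.empty()) {
            auto h = std::move(q.front());
            q.pop_front();
            h();
        }
    }
};

// With the is_open() guard, whether a new read starts depends only on
// which of {close} and {ReadHandler} is dequeued first.
bool new_read_started(bool close_queued_first) {
    FakeIoService io;
    FakeHandle pipe;
    bool started = false;
    auto close_op = [&] { pipe.close(); };
    auto read_handler = [&] {
        if (pipe.is_open())
            started = true;  // would start the next async_read() here
        // else: just return, and let the objects fall out of existence
    };
    if (close_queued_first) { io.post(close_op); io.post(read_handler); }
    else                    { io.post(read_handler); io.post(close_op); }
    io.run();  // serialized, as on one strand: no check/start race
    return started;
}
```

In the first ordering the guard sees a closed handle and the loop ends; in the second, a read is started and (in real asio) the close would then either miss it or abort it with operation_aborted, as described above.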