Thanks for the response.

The server program calls boost::asio::async_read to wait for the client's data, and I am wondering why async_read didn't fire an error when the remote end of the connection was gone. I thought boost::asio::async_read would detect a lost connection. It does in most situations where the client program ends, but it cannot detect the case where the client device suddenly loses power. I think that might be because TCP only sends control segments (SYN/ACK on connect, FIN/RST on disconnect); once the connection is established and idle, nothing is sent on the wire, so a peer that disappears without closing goes unnoticed. Correct me if I am wrong here.

You are right: if I have to use keepalive, I'll implement it at the user level, where I can control the timing better and use less bandwidth than the system keepalive.

Thank you.

Kind regards,

- j

On Thu, Jun 29, 2017 at 3:09 PM, Andreas Wehrmann via Boost-users <boost-users@lists.boost.org> wrote:
On 06/29/2017 06:28 AM, jupiter via Boost-users wrote:
Hi,
The Asio TCP server can detect the disconnection when the client program stops. But I have a small device, and when it loses power the server is not able to detect the TCP disconnection. Does the Boost socket have a TCP timeout? What is an effective way to handle this situation other than running a heartbeat at the user level?
Thank you.
Kind regards,
- j
Well, you could always activate TCP keepalive via socket options and set the corresponding timeout and interval values accordingly.
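For illustration, a minimal sketch of what that could look like with Boost.Asio on Linux. The function name enable_keepalive and the idle/interval/count values are made up for this example; Asio only exposes the portable on/off switch, so the timing knobs have to go through the native handle with setsockopt, which is platform-specific:

    #include <boost/asio.hpp>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   // TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT (Linux)
    #include <sys/socket.h>

    // Enable TCP keepalive on a connected socket and tune the probe timing.
    // Asio provides only the portable on/off option; the timing values below
    // are set through the native handle and are Linux-specific.
    void enable_keepalive(boost::asio::ip::tcp::socket& socket)
    {
        boost::asio::socket_base::keep_alive option(true);
        socket.set_option(option);

        const int fd = socket.native_handle();
        int idle     = 30;  // seconds of idleness before the first probe
        int interval = 5;   // seconds between probes
        int count    = 3;   // unanswered probes before the connection is dropped
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
    }

With this in place, once the probes go unanswered the kernel drops the connection and the pending async_read completes with an error, even if the peer vanished without ever sending a FIN or RST.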
However, it really depends on what you want to do: TCP keepalive is implemented by the operating system's TCP stack and only checks whether the connection itself is still alive. That may not be enough, because it won't tell you whether the service you're connected to is actually working and responding, and this is where an application-layer heartbeat comes back in.
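For illustration only (this is not from the original thread), a server-side idle timeout built on a steady_timer could look roughly like the sketch below. It assumes Boost 1.66 or newer (get_executor, expires_after), and the class name, buffer size, and 30-second timeout are arbitrary:

    #include <boost/asio.hpp>
    #include <array>
    #include <chrono>
    #include <memory>

    using boost::asio::ip::tcp;

    // Illustrative server-side session: every received chunk (real data or a
    // client heartbeat message) re-arms an idle timer; if the timer expires,
    // the socket is closed, which also makes the pending read fail.
    class session : public std::enable_shared_from_this<session>
    {
    public:
        explicit session(tcp::socket socket)
            : socket_(std::move(socket)),
              idle_timer_(socket_.get_executor())
        {}

        void start()
        {
            do_read();
            arm_idle_timer();
        }

    private:
        void do_read()
        {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(buffer_),
                [this, self](boost::system::error_code ec, std::size_t /*bytes*/)
                {
                    if (ec) { close(); return; }  // includes a detected disconnect
                    arm_idle_timer();             // traffic seen: peer is alive
                    do_read();
                });
        }

        void arm_idle_timer()
        {
            idle_timer_.expires_after(std::chrono::seconds(30)); // illustrative timeout
            auto self = shared_from_this();
            idle_timer_.async_wait(
                [this, self](boost::system::error_code ec)
                {
                    if (!ec)      // not cancelled: nothing heard within the timeout
                        close();  // pending async_read_some now fails promptly
                });
        }

        void close()
        {
            boost::system::error_code ignored;
            socket_.close(ignored);
            idle_timer_.cancel();
        }

        tcp::socket socket_;
        boost::asio::steady_timer idle_timer_;
        std::array<char, 1024> buffer_;
    };

Any traffic from the client, whether real data or a small periodic heartbeat message, re-arms the timer; a device that silently loses power stops sending, the timer fires, and closing the socket makes the outstanding read complete with an error so the session can be cleaned up.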
Hope this gives you some idea.
Regards, Andreas