Hi Frédéric,
I got it. Just one more point.
You mentioned:
after you call async_write/read, you wait until done==true, and in your callback function you acquire the mutex, set done = true, then notify_all().
a)
Shouldn't this wait be done before we call async read/write?
OR
Do we do it after calling async read/write, so that the threads can do their I/O work while the actual callback functions are still invoked sequentially?
b)
Should we use interprocess_mutex and interprocess_condition instead of a normal mutex and condition, since the different clients and the server will run in different processes?
Best Regards,
Nishant Sharma
-----Original Message-----
From: Frédéric [mailto:ufospoke@gmail.com]
Sent: Tuesday, October 17, 2017 5:02 PM
To: boost-users@lists.boost.org
Cc: Sharma, Nishant
I have created an asynchronous TCP server and client and it works fine. Since it is asynchronous, it uses APIs like async_read, async_write, etc. Is it possible for the same code to be used for “synchronous” communication, which blocks any further task in the queue until the current I/O task is complete?
You can do that with synchronization (std::condition_variable cond; std::mutex mutex; auto done = false;). After you call async_write/read, you wait until done==true, and in your callback function you acquire the mutex, set done = true, then notify_all().
F
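
[Editor's note: as an illustration only, not part of the original mail, here is a minimal sketch of that pattern. It assumes an already-connected boost::asio tcp::socket and an io_context being run on a separate thread; the helper name blocking_write is made up for the example.]

    #include <boost/asio.hpp>
    #include <condition_variable>
    #include <mutex>
    #include <string>

    using boost::asio::ip::tcp;

    // Wraps async_write in a blocking call: start the asynchronous
    // operation, then wait on a condition variable until the completion
    // handler has run.
    std::size_t blocking_write(tcp::socket& socket, const std::string& message)
    {
        std::mutex mutex;
        std::condition_variable cond;
        bool done = false;
        std::size_t bytes_written = 0;
        boost::system::error_code result;

        boost::asio::async_write(
            socket, boost::asio::buffer(message),
            [&](const boost::system::error_code& ec, std::size_t n) {
                // Runs on the io_context thread: record the result,
                // set done = true under the mutex, then wake the waiter.
                std::lock_guard<std::mutex> lock(mutex);
                result = ec;
                bytes_written = n;
                done = true;
                cond.notify_all();
            });

        // Block the calling thread until the callback has fired.
        std::unique_lock<std::mutex> lock(mutex);
        cond.wait(lock, [&] { return done; });

        if (result)
            throw boost::system::system_error(result);
        return bytes_written;
    }

Note that such a helper has to be called from a thread other than the one running io_context::run(); if it is called on the io_context thread itself, the wait blocks the very thread that would invoke the callback and the call deadlocks.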