I use one io_service in one process to run two servers, A and B. Each has its own port, its own context and its own SSL configuration for accepting inbound connections. Both A and B have a boost::asio::deadline_timer, but I think only one thread is running:

(gdb) info threads
  Id   Target Id                                         Frame
* 1    Thread 0x7ffff7fe0bc0 (LWP 31204) "ssl_connection" 0x00007ffff4a690a3 in __epoll_wait_nocancel () at ../sysdeps/unix/syscall-template.S:84

The problem I got:

- If I started only server A, it worked just fine for both async_read and async_write.
- If I started both servers A and B, even with B idle, server A could sometimes crash after receiving and responding to just one message, then crash at the next async_read within the first second. It was more stable if it did not call async_write.

SEGFAULT: Fault address=0x55b3700d00cf
#4 0x000055555557f98a in boost::asio::detail::task_io_service_operation::complete (this=0x55555584adc0, owner=..., ec=..., bytes_transferred=0) at /usr/include/boost/asio/detail/task_io_service_operation.hpp:38
#5 0x0000555555581f82 in boost::asio::detail::task_io_service::do_run_one (this=0x555555814710, lock=..., this_thread=..., ec=...) at /usr/include/boost/asio/detail/impl/task_io_service.ipp:372
#6 0x0000555555581ab4 in boost::asio::detail::task_io_service::run (this=0x555555814710, ec=...) at /usr/include/boost/asio/detail/impl/task_io_service.ipp:149
#7 0x000055555558221f in boost::asio::io_service::run (this=0x7fffffffe150) at /usr/include/boost/asio/impl/io_service.ipp:59

It seems that async_write and async_read got mixed up, but the problem was intermittent: once server A survived the first second, it could run for hours doing many consecutive async_read and async_write calls. With only one thread, what could cause that problem?

Thank you.
Not enough data here to help you, I'm afraid.

You may want to post the question (with an MVCE) on stackoverflow.com.
Some questions spring to mind:
1. Do you ensure the lifetime of the receive/send buffers during async operations?
2. Are you using strands? If so, are you binding your async handlers to the strand?
3. How are you managing the lifetimes of your server and connection objects? Did you remember to capture shared_ptrs (for example) in the async handlers?
Thanks Richard, you are right: the send buffer was a shared_ptr streambuf created while sending the message, so it did not stay alive for the duration of the operation, and that could cause the problem. I changed to a long-lived buffer and I think that fixed it; still testing. As always, I appreciate your insight.
If every async_read and async_write needs a buffer with a static lifetime, do I need to worry about the buffer being overwritten before processing of its contents is complete? For example, suppose a long-lived buffer is used for async_read: when the socket gets data, it fills the buffer, and the buffer is passed to higher-level application code for processing. If, before that processing is complete, the socket receives more data and writes it into the same buffer (since this is an async process), that would certainly corrupt the data. The same story could apply to async_write with a single static-lifetime buffer. Is that concern overstated, or is it a real issue?
Once you initiate an async operation against a memory buffer, the contents of that buffer are *undefined* from the moment you have called the async_XXX function until the moment control is resumed in the handler function you submitted.
For example, imagine an object, even one protected by a strand...
void myobject::initiate_read()
{
    // notes:
    // - myobject is assumed to derive from std::enable_shared_from_this and to
    //   own members mysocket, mybuffer (a streambuf) and mystrand.
    // - making completion handlers mutable allows asio to move them
    //   internally, which is an optimisation.
    // - it also allows them to carry move-only objects as part of their state.
    auto handler = [self = this->shared_from_this()]
                   (auto ec, auto bytes_transferred) mutable
    {
        // You are now in the completion handler.
        // You may now read from self->mybuffer.
        self->handle_read(ec, bytes_transferred);
    };

    // initiate the async function.
    // in this case we're using a streambuf, but the principles are the same if
    // we used asio::buffer(some-container) as the buffer object.
    // because we're using a strand, we must bind the handler to the strand to
    // ensure that no two handlers run simultaneously.
    asio::async_read_until(mysocket,
                           mybuffer,
                           '\n',
                           asio::bind_executor(mystrand,
                                               std::move(handler)));

    // mybuffer is now in an undefined state.
    // it is inappropriate to use it in any way from now on.
    // the place to read it is in the method myobject::handle_read.
}
void myobject::handle_read(system::error_code ec, std::size_t bytes_transferred)
{
    // for composed operations like async_read_until it is possible that some
    // data is read *and* that we get an error indicated.
    // however, for now we'll ignore that complication.
    if (ec == system::error_code())   // the correct way to check for 'no error'
    {
        // it's safe to manipulate the buffer here as there are no async
        // operations in flight that touch it.
        std::string s;
        auto is = std::istream(&mybuffer);
        std::getline(is, s);
        do_something(s);

        // NOW re-initiate the async read
        initiate_read();
    }
    else
    {
        // do error handling here and don't reinitiate the read
    }
}
Thanks Richard, that was well explained. I am now thinking of having a class to wrap the shared_ptr<boost::asio::streambuf>, to lock it down while calling async_xxx until control is back in the handler function, and then release it. Thank you so much Richard.
Controlling the streambuf's lifetime with a shared pointer is probably an error.

The buffer will need to live as long as the socket lives. It will contain extra data after each async_read which you will want to preserve.

I would suggest that it's a member owned by the connection object, in the same way that the socket is.
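A minimal sketch of that ownership layout (the class and member names below are illustrative, not from this thread):

#include <boost/asio.hpp>
#include <memory>

namespace asio = boost::asio;

class connection : public std::enable_shared_from_this<connection>
{
public:
    explicit connection(asio::io_context& io)
    : socket_(io)
    {
    }

private:
    // the connection owns the socket and both buffers, so the buffers live
    // exactly as long as the socket does. any data left over in the streambuf
    // after an async_read_until (it can hold more than one message) is
    // preserved for the next read.
    asio::ip::tcp::socket socket_;
    asio::streambuf read_buffer_;
    asio::streambuf write_buffer_;
};

The handlers then keep the whole object (socket and buffers together) alive by capturing shared_from_this(), as in the earlier example.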
I currently use a fixed-size array, char buffer[MAX_SIZE], as a long-lived buffer attached to the class running async_read; I think that was your point, that it should live as long as the socket lives. I think it should be fine when running a single thread; with multiple threads it might cause issues. Anyway, I am going to run a single thread and use two static long-lived buffers: one for async_read, one for async_write. Thanks Richard.
Yes, you will need separate buffers for read and write. Even without multi-threading, it's entirely possible to have an async_read in progress at the same time as an async_write on the same socket.

For multiple threads, host an asio::strand<asio::io_context::executor_type> in each io object (server, connection, client). Bind all the class's handlers to that strand (see my example above). The strand behaves like an extremely efficient mutex, with respect to handler invocation, for your object.
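A minimal sketch of that pattern (class and member names are illustrative; error handling is elided):

#include <boost/asio.hpp>
#include <memory>

namespace asio = boost::asio;

class connection : public std::enable_shared_from_this<connection>
{
public:
    explicit connection(asio::io_context& io)
    : strand_(io.get_executor())
    , socket_(io)
    {
    }

    void start_read()
    {
        // every handler belonging to this object is bound to the same strand,
        // so no two of them can run concurrently, even when io_context::run()
        // is called from several threads.
        asio::async_read_until(
            socket_, read_buffer_, '\n',
            asio::bind_executor(strand_,
                [self = shared_from_this()]
                (boost::system::error_code ec, std::size_t /*bytes*/) mutable
                {
                    if (!ec)
                        self->start_read();   // consume read_buffer_, then re-arm
                }));
    }

private:
    asio::strand<asio::io_context::executor_type> strand_;
    asio::ip::tcp::socket socket_;
    asio::streambuf read_buffer_;
};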
Thanks Richard, I was wondering about the strand's real role; thanks for the explanation.

As you pointed out, the two long-lived static buffers, one for read and one for write, should have the same lifetime as the socket attached to each session. In a server there could be more than a thousand session connections; in terms of resource management, would you think it a good or bad idea to use a global buffer pool management class, so that the buffers can be shared by all sessions?
One thing is clear: a fixed-size array like boost::array<char, MAX_SIZE> readBuffer for each session's async_read should not be used (which I am currently using); allocating and reserving more than a thousand MAX_SIZE buffers in memory is not going to work. I'll have to see if it can be replaced by another simple smart Boost buffer like boost::asio::buffer, which can be converted to a raw char* pointer for feeding msgpack's unpack input. (Vinnie did mention to use Beast; that might be another option, I'll have to see how complicated that would be.)
On Tue, 8 Jan 2019 at 03:22, hh h via Boost wrote:

> In a server, there could be more than a thousand session connections; would it be a good or bad idea to use a global buffer pool management class so that the buffers can be shared by all sessions?
When writing software for highly concurrent use, you have to assume that at some point every client will read and write at the same time. Whether you use some centralised buffer resource or not, your total maximum working set size will be the same in either case. Given that, you're better off allocating the maximum memory a session will need and having it owned by the session, because if you run out of resources due to too many sessions, it's better that this happens before the session is connected than halfway through the user's operations. Therefore, if your server is able to handle (say) 5000 clients at the same time, you may as well allocate the memory for 5000 connections at program start (or even statically). If you don't have enough memory, better to know early!
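A back-of-envelope sketch of that up-front allocation (MAX_SIZE, MAX_SESSIONS and session_buffers are hypothetical names, not from this thread):

#include <array>
#include <cstddef>
#include <vector>

constexpr std::size_t MAX_SIZE     = 100 * 1024;  // per-buffer size, 100 KiB
constexpr std::size_t MAX_SESSIONS = 5000;

struct session_buffers
{
    std::array<char, MAX_SIZE> read_buffer;
    std::array<char, MAX_SIZE> write_buffer;
};

int main()
{
    // roughly 5000 * 2 * 100 KiB = ~1 GiB, allocated before any client
    // connects; if the machine cannot afford it, the failure happens at
    // startup rather than halfway through a user's session.
    std::vector<session_buffers> pool(MAX_SESSIONS);
    return pool.size() == MAX_SESSIONS ? 0 : 1;
}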
> One thing is clear: a fixed-size array like boost::array<char, MAX_SIZE> readBuffer for each session's async_read should not be used (which I am currently using); allocating and reserving more than a thousand MAX_SIZE buffers in memory is not going to work.
It depends how big your MAX_SIZE is. If it's (say) 100k (a huge buffer for most use cases) then 1,000 concurrent connections require 1000 * 2 * 100k = 200 MB. 200 megabytes is not a lot of memory in a modern server.
> I'll have to see if it can be replaced by another simple smart Boost buffer like boost::asio::buffer, which can be converted to a raw char* pointer for feeding msgpack's unpack input. (Vinnie did mention to use Beast; that might be another option.)
asio::buffer creates a "buffer definition object", i.e. an object that describes the address and size of the actual buffer memory. Be careful about this: the asio ConstBufferSequence concept does not model actual memory, it models the idea of a sequence of "memory references". This is a slightly unclear (in my view) part of the documentation. It's worth looking at the example code, building it and single-stepping through it.
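To make that concrete, a small sketch (the variable names are mine, not from the thread):

#include <boost/asio.hpp>
#include <array>
#include <cstddef>
#include <iostream>

namespace asio = boost::asio;

int main()
{
    std::array<char, 1024> storage{};   // the actual memory

    // asio::buffer copies nothing and owns nothing: it returns a small view
    // object holding just a pointer and a size.
    asio::mutable_buffer view = asio::buffer(storage);

    // the raw pointer and size can be recovered, e.g. to feed a char*-based
    // API such as msgpack's unpacker:
    char*       p = static_cast<char*>(view.data());
    std::size_t n = view.size();
    std::cout << n << " writable bytes at " << static_cast<void*>(p) << '\n';

    // 'storage' must outlive 'view' and any async operation that uses it;
    // the view does not keep the memory alive.
}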
Thank you so much Richard; as always, I appreciate your sharing and your insightful comments.