
Hello,
I am having a hard time getting my Asio usage off the ground.
I started with what I thought was a nominal (knock on wood)
asynchronous server, which runs in a worker thread, but was running
into seg fault issues.
In the interest of "simplifying" the issue, I decided to simplify to a
"blocking" server, hopefully also hosted in a thread, but am also
running into seg fault issues.
First I am trying to capture the server instance in a lambda passing
it to a thread as a runner. Then I am trying to capture a shared
pointer to the server thinking there is some move semantic or
something. Seg fault in either case.
Most of the examples are one main being the sole main thread hosting
the io_service. Would that this were the case with me. This has to be
one worker among several other workers running in the same process.
Wouldn't mind if there were a couple of ideas recommending what I
might explore next.
Thank you...
In the diagnostic routine I've got wired up:
boost::shared_ptr<dchem::asio::ip::tcp::server2> server_
= boost::make_shared<dchem::asio::ip::tcp::server2>(17017);
boost::thread runner_([&server_]() {
server_->start_server();
});
runner_.start_thread();
while (!timer_.has_elapsed_query_wait(wait_timeout_,
elapsed_milliseconds_)) {
//...
}
runner_.interrupt();
runner_.join();
And the server itself, header first:
class server2 {
public:
/**
* \brief Endpoint type definition.
*/
typedef boost::asio::ip::tcp::endpoint endpoint_type;
/**
* \brief Acceptor type definition.
*/
typedef boost::asio::ip::tcp::acceptor acceptor_type;
/**
* \brief Service type definition.
*/
typedef boost::asio::io_service service_type;
/**
* \brief Socket type definition.
*/
typedef boost::asio::ip::tcp::socket socket_type;
/**
* \brief Socket pointer type definition.
*/
typedef boost::shared_ptr<socket_type> socket_pointer;

On Wed, Jul 24, 2013 at 6:57 PM, Michael Powell wrote:
Hello,
I am having a hard time getting my Asio usage off the ground.
I started with what I thought was a nominal (knock on wood) asynchronous server, which runs in a worker thread, but was running into seg fault issues.
Taking a step back... I want to accomplish some IPC through socket channels, as a sort of event-broker-type architecture. So it will be important for internal callers to pub/sub mutex-protected event channels, which subsequently serialize/deserialize through the running socket-worker thread. But... all the Asio examples I am reading seem to prefer an instance of io_service in main, whose sole purpose is to run a server. I am beginning to wonder whether that has to be the case here. Does the io_service at least need to be available to the main thread? A bit puzzled. I'm sure it is straightforward once you get past these couple of hurdles.
In the interest of "simplifying" the issue, I decided to simplify to a "blocking" server, hopefully also hosted in a thread, but am also running into seg fault issues.
First I am trying to capture the server instance in a lambda passing it to a thread as a runner. Then I am trying to capture a shared pointer to the server thinking there is some move semantic or something. Seg fault in either case.
Most of the examples are one main being the sole main thread hosting the io_service. Would that this were the case with me. This has to be one worker among several other workers running in the same process.
Wouldn't mind if there were a couple of ideas recommending what I might explore next.
Thank you...
In the diagnostic routine I've got wired up:
boost::shared_ptr<dchem::asio::ip::tcp::server2> server_
    = boost::make_shared<dchem::asio::ip::tcp::server2>(17017);

boost::thread runner_([&server_]() {
    server_->start_server();
});

runner_.start_thread();

while (!timer_.has_elapsed_query_wait(wait_timeout_, elapsed_milliseconds_)) {
    //...
}

runner_.interrupt();
runner_.join();
And the server itself, header first:
class server2 {
public:
    /** \brief Endpoint type definition. */
    typedef boost::asio::ip::tcp::endpoint endpoint_type;

    /** \brief Acceptor type definition. */
    typedef boost::asio::ip::tcp::acceptor acceptor_type;

    /** \brief Service type definition. */
    typedef boost::asio::io_service service_type;

    /** \brief Socket type definition. */
    typedef boost::asio::ip::tcp::socket socket_type;

    /** \brief Socket pointer type definition. */
    typedef boost::shared_ptr<socket_type> socket_pointer;

    /** \brief Port type definition. */
    typedef unsigned int port_type;

protected:
    /** \brief Returns whether bytes are available.
     *  \param s A socket reference. */
    static bool are_bytes_readable(socket_type& s);

    /** \brief Starts a new session.
     *  \param s Starts running a new session. */
    void session(socket_pointer s);

public:
    /** \brief Constructor
     *  \param port A TCP port. */
    server2(port_type port);

    /** \brief Destructor */
    virtual ~server2();

    /** \brief Starts the server. */
    void start_server();

private:
    /** \brief Service. */
    service_type m_service;

    /** \brief Port. */
    port_type m_port;
};
/////////////////////////////////////////////////////////////////////
server2::server2(port_type port)
    : m_service(), m_port(port) {
}

/////////////////////////////////////////////////////////////////////
server2::~server2() {
}

/////////////////////////////////////////////////////////////////////
bool server2::are_bytes_readable(socket_type& s) {
    boost::asio::socket_base::bytes_readable cmd_(true);
    s.io_control(cmd_);
    std::size_t cmd_result_ = cmd_.get();
    return cmd_result_ > 0;
}
/////////////////////////////////////////////////////////////////////
void server2::start_server() {
    try {
        endpoint_type ep_(boost::asio::ip::tcp::v4(), m_port);
        acceptor_type a_(m_service, ep_);

        std::cout << "accepting connection" << std::endl;

        boost::system::error_code ec_;
        auto s_ = boost::make_shared<socket_type>(m_service);
        a_.accept((*s_), ec_);
        if (!ec_) return;

        std::cout << "starting session" << std::endl;

        session(s_);
    } catch (boost::thread_interrupted& tiex) {
    } catch (...) {
    }
}
/////////////////////////////////////////////////////////////////////
void server2::session(socket_pointer s) {
    auto& s_ = (*s);
    while (true) {
        {
            utils::sleep_for_milliseconds(100);
            threading::thread_interruption_disabler tid;

            std::cout << "are bytes readable" << std::endl;
            if (!are_bytes_readable(s_)) continue;

            std::cout << "reading some" << std::endl;
            boost::system::error_code ec_;
            std::vector<byte> buffer_;
            size_t length_ = s_.read_some(boost::asio::buffer(buffer_), ec_);

            // Connection reset cleanly by peer.
            if (ec_ == boost::asio::error::eof)
                break;
            else if (ec_) // Some other error.
                throw boost::system::system_error(ec_);

            boost::asio::write(s_, boost::asio::buffer(buffer_, length_));
        }
    }
}
Regards,
Michael Powell

Michael Powell writes:
Most of the examples are one main being the sole main thread hosting the io_service. Would that this were the case with me. This has to be one worker among several other workers running in the same process.
There's no requirement that there be only one io_service per process, nor that it be dealt with in the main (original) thread. My current design actually has a bit of an explosion of io_service instances: for each client, I end up with one input + thread and one output + thread, to make sure that the input processing / data generation doesn't overwhelm the output bandwidth (and hence cause memory exhaustion by generating data faster than we can send it).
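For illustration, a minimal sketch of that arrangement: one io_service owned by a dedicated worker thread, kept alive with io_service::work, while other threads hand it work via post(). The names here are illustrative, not taken from the posted code.

#include <boost/asio.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;

    // Keep run() from returning while the service is momentarily idle.
    boost::scoped_ptr<boost::asio::io_service::work> keep_alive(
        new boost::asio::io_service::work(io));

    // The io_service lives in its own worker, not in main.
    boost::thread io_thread([&io]() { io.run(); });

    // Any other worker in the process can safely hand work to that thread.
    io.post([]() { std::cout << "handled on the io_service thread" << std::endl; });

    keep_alive.reset();  // let run() return once queued handlers finish
    io_thread.join();
    return 0;
}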
Wouldn't mind if there were a couple of ideas recommending what I might explore next.
Hopefully obvious, but the main reason for seg faults is when threads expect an object to exist somewhere, but a different thread has already destroyed ("destructed") the object.
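As a minimal sketch of that hazard, assuming a server2 roughly like the one posted: capturing the shared_ptr by value gives the thread its own reference, while capturing a reference to a local shared_ptr dangles as soon as the enclosing scope exits.

#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

// Assumes a server2 with the constructor and start_server() shown earlier.
boost::thread start_runner()
{
    boost::shared_ptr<server2> server_ = boost::make_shared<server2>(17017);

    // Capture by value: the lambda co-owns the server, so it outlives this scope.
    // Capturing [&server_] instead would leave the thread holding a dangling
    // reference once start_runner() returns.
    return boost::thread([server_]() { server_->start_server(); });
}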
boost::thread runner_([&server_]() { server_->start_server(); }); runner_.start_thread();
Boost.thread starts automatically, I thought? For that matter, where does "start_thread" even exist? It's not in the current documentation... Ah, looks like it was removed about a year ago. Fair enough. Just use creation as start.
while (!timer_.has_elapsed_query_wait(wait_timeout_, elapsed_milliseconds_)) { //... }
runner_.interrupt(); runner_.join();
Given that thread interruption is a bit dicey, you might consider doing the timeout work in the thread / io_service itself. That way, the controller only needs to launch the thread, then join it.
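Roughly the shape being suggested, with hypothetical names (the io_service constructor argument and start_accept_with_timeout are assumptions, not the posted API): the deadline lives inside the io_service's work, so the controller only launches and joins.

boost::asio::io_service io;
server2 srv(io);                                                  // assumed ctor taking the io_service
srv.start_accept_with_timeout(boost::posix_time::seconds(30));    // assumed method queueing accept + timer

boost::thread runner([&io]() { io.run(); });  // run() returns once the work completes or times out
runner.join();                                // no interrupt() needed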
boost::system::error_code ec_; a_.accept((*s_), ec_); if (!ec_) return;
Since you're catching exceptions *anyway*, don't bother with error codes unless you're detecting specific values (as you do for eof elsewhere).
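A sketch of what that looks like with the names from the posted start_server(): use the throwing accept overload and let a system_error handler (or the existing catch blocks) deal with failure.

try {
    a_.accept(*s_);    // throwing overload: raises boost::system::system_error on failure
    std::cout << "starting session" << std::endl;
    session(s_);
} catch (const boost::system::system_error& ex) {
    std::cerr << "accept failed: " << ex.what() << std::endl;
}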
void server2::session(socket_pointer s) {
while (true) { {
No reason for this extra scope, either.
std::vector<byte> buffer_; size_t length_ = s_.read_some(boost::asio::buffer(buffer_), ec_);
This is fishy. buffer_ has length 0, yet you're trying to read into it. Maybe add a "buffer_.resize( 1024 )" (or whatever sounds good to you). Good luck, t.
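A sketch of that sized-buffer read, using the names from the posted session() (std::vector<char> stands in for the poster's byte typedef):

std::vector<char> buffer_(1024);   // or: buffer_.resize(1024);
boost::system::error_code ec_;
std::size_t length_ = s_.read_some(boost::asio::buffer(buffer_), ec_);
// length_ is how many bytes actually arrived (at most 1024).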

On Thu, Jul 25, 2013 at 2:30 AM, Anthony Foiani wrote:
Michael Powell writes:
Most of the examples are one main being the sole main thread hosting the io_service. Would that this were the case with me. This has to be one worker among several other workers running in the same process.
There's no requirement that there be only one io_service per process, nor that it be dealt with in the main (original) thread.
My current design actually has a bit of an explosion of io_service instances: for each client, I end up with one input + thread and one output + thread, to make sure that the input processing / data generation doesn't overwhelm the output bandwidth (and hence cause memory exhaustion by generating data faster than we can send it).
Wouldn't mind if there were a couple of ideas recommending what I might explore next.
Hopefully obvious, but the main reason for seg faults is when threads expect an object to exist somewhere, but a different thread has already destroyed ("destructed") the object.
boost::thread runner_([&server_]() { server_->start_server(); }); runner_.start_thread();
Boost.thread starts automatically, I thought?
For that matter, where does "start_thread" even exist? It's not in the current documentation...
Ah, looks like it was removed about a year ago. Fair enough. Just use creation as start.
Okay.
while (!timer_.has_elapsed_query_wait(wait_timeout_, elapsed_milliseconds_)) { //... }
runner_.interrupt(); runner_.join();
Given that thread interruption is a bit dicey, you might consider doing the timeout work in the thread / io_service itself. That way, the controller only needs to launch the thread, then join it.
Not sure what you mean by dicey. If by dicey you mean that accept() blocks altogether, interrupt requests and all, then, yes, I'll agree dicey. Obviously, this won't work either, because if a client never connects (which is a possibility when testing), then that can't hang up the rest of the app. Can you be more specific? In the io_service itself?
boost::system::error_code ec_; a_.accept((*s_), ec_); if (!ec_) return;
Since you're catching exceptions *anyway*, don't bother with error codes unless you're detecting specific values (as you do for eof elsewhere).
I am, I believe, checking for bytes, are bytes available, this sort of thing. Instead of receive blocking when there is no data, which is one plausible use case. As soon as I schedule an async_accept, I get a segmentation fault. I don't know the inner workings of the io_service that well, but I suspect that may be because I have created the server outside the running thread. Something you mentioned earlier though, which is starting to click with me: io_service might be the thing I want to run and/or have time out, and just bypass needing to do anything with threads. It's a little bit different than what I expected and/or am used to.
void server2::session(socket_pointer s) {
while (true) { {
No reason for this extra scope, either.
std::vector<byte> buffer_; size_t length_ = s_.read_some(boost::asio::buffer(buffer_), ec_);
This is fishy. buffer_ has length 0, yet you're trying to read into it. Maybe add a "buffer_.resize( 1024 )" (or whatever sounds good to you).
Good luck, t.

I am now running the server itself directly, no threads or anything.
And without diving too far into this rabbit hole...
I am targeting ArchLinux for ARM. As soon as the async_accept goes, I
get "Illegal instruction".
That's with or without calls to acceptor listen.
Don't know what's going on with io_service, but I may need to come up
with a different solution than Asio if that won't work for our target
platform.

Then, if I try to do any sort of boost::move into a session(const
socket& s), i.e. session(boost::move(s)).start(), I get errors that
move is ambiguous.
https://groups.google.com/forum/#!topic/boost-list/VNTr_waJVMk
From what I can tell from the blogs, etc., this has been the case
since 1.50 (at least)? It is now 1.54.
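One hedged workaround sketch, which sidesteps the ambiguity rather than fixing it: don't move the socket at all, and keep passing shared ownership around, as the earlier server2::session(socket_pointer) already does. The free function session() below is a stand-in, not the posted member.

typedef boost::shared_ptr<boost::asio::ip::tcp::socket> socket_pointer;

void session(socket_pointer s);   // stand-in for the posted session logic

void on_accept(socket_pointer s)
{
    // Shared ownership instead of boost::move; no move overload resolution involved.
    session(s);
}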
I find it hard to imagine that anyone is using Boost.Asio, at least
targeting Linux flavors and/or for ARM.
Anyone? Thank you...

Michael Powell writes:
I find it hard to imagine that anyone is using Boost.Asio, at least targeting Linux flavors and/or for ARM.
Dunno about ARM, but I've been using Boost.ASIO (since version 1.43 or so, currently on 1.51) on Linux (x86-64 and ppc32, kernel versions 2.6.2x through 3.9.x) with great success.
That doesn't mean there aren't potential issues -- ARM is a very fragmented "architecture" to support, with a huge number of variants and licensed cores, etc. So it's possible that you're doing everything right, but something in your toolchain is failing you. (That "illegal instruction" is worrisome, and makes me wonder if your toolchain is 100% aligned with your hardware.)
Anyone? Thank you...
You might want to start debugging this by trying to get the ASIO tutorials and examples running on your target hardware. If that works, then we can see whether you're using ASIO in a way that is unexpected. Good luck, Tony

We're good. No worries...
On Thu, Jul 25, 2013 at 3:22 PM, Anthony Foiani wrote:
Michael Powell writes:
I find it hard to imagine that anyone is using Boost.Asio, at least targeting Linux flavors and/or for ARM.
Dunno about ARM, but I've been using Boost.ASIO (since version 1.43 or so, currently on 1.51) on Linux (x86-64 and ppc32, kernel versions 2.6.2x through 3.9.x) with great success.
That doesn't mean there aren't potential issues -- ARM is a very fragmented "architecture" to support, with a huge number of variants and licensed cores, etc. So it's possible that you're doing everything right, but something in your toolchain is failing you. (That "illegal instruction" is worrisome, and makes me wonder if your toolchain is 100% aligned with your hardware.)
You aren't kidding that ARM is a very fragmented architecture to support. As you can imagine, "toolchain" is usually among the foremost inter- and intra-office discussions. More than it needs to be, IMHO. It's possible that ours is not aligned: I just looked, and I believe we were possibly (stronger: probably) not targeting the correct processor.
Anyone? Thank you...
You might want to start debugging this by trying to get the ASIO tutorials and examples running on your target hardware.
If time permits I shall. For now I am going with a "simpler" client/server socket implementation. It's a little closer to the socket itself, for what we need to get done.
If that works, then we can see whether you're using ASIO in a way that is unexpected.
Willing to admit, possibly not. I don't think it's that complicated; I am putting a certain amount of faith in the io_service scheduling its callbacks, etc.
Good luck, Tony

Michael Powell writes:
On Thu, Jul 25, 2013 at 2:30 AM, Anthony Foiani wrote:
Given that thread interruption is a bit dicey, you might consider doing the timeout work in the thread / io_service itself. That way, the controller only needs to launch the thread, then join it.
Not sure what you mean by dicey. If by dicey you mean that accept() blocks altogether, interrupt requests and all, then, yes, I'll agree dicey.
I only meant that boost thread interrupts are processed only at very specific times; I don't recall offhand if an ASIO synchronous accept would qualify. There's a list of them here: http://www.boost.org/doc/libs/1_54_0/doc/html/thread/thread_management.html#... (or: http://preview.tinyurl.com/p959dyz )
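For reference, a tiny sketch of what "specific times" means in practice: interruption is only observed at interruption points, so a polling loop has to reach one (sleep() is itself a predefined interruption point).

while (true)
{
    boost::this_thread::interruption_point();   // throws boost::thread_interrupted if requested
    boost::this_thread::sleep(boost::posix_time::milliseconds(100));  // also an interruption point
    // ... poll the socket / do a unit of work ...
}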
Obviously, this won't work either, because if a client never connects (which is a possibility when testing), then that can't hang up the rest of the app.
Can you be more specific? In the io_service itself?
The author of ASIO gave the basic pattern for implementing timeouts in his blog, let me find the link...
http://blog.think-async.com/2010/04/timeouts-by-analogy.html
The basic idea is that you start two asynchronous actions:
1. async accept (calls handler when a connection comes in).
2. timer (calls handler when timer expires).
Note that the timer (2) doesn't have to be for the whole duration of the timeout; to use Chris's analogy, it's only how often you want to check the "parking meter", not the actual time on the meter itself.
Now, in your accept handler, you cancel the timer (or at least reset the "parking meter" counter, so that the timer sees that you've "fed the meter").
In your timer handler, you check the meter; if it has expired, you cancel the accept operation.
(These two operations should probably be done against the same io_service; if that io_service is being run by multiple threads, you will have to make sure to deal with synchronization issues. Easiest is probably to run an acceptor and its timer in a single strand.)
I use exactly this pattern for read timeouts, and it works great.
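A hedged sketch of that accept-plus-timer shape, in a simplified form where each handler simply cancels the other (rather than the blog's full "feed the meter" bookkeeping). The class and names are illustrative, not from the posted code.

#include <boost/asio.hpp>
#include <boost/bind.hpp>

class accept_with_timeout
{
public:
    accept_with_timeout(boost::asio::io_service& io, unsigned short port)
        : acceptor_(io, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port))
        , socket_(io)
        , timer_(io)
    {
        // Action 1: the accept, which calls on_accept when a connection arrives.
        acceptor_.async_accept(socket_,
            boost::bind(&accept_with_timeout::on_accept, this,
                        boost::asio::placeholders::error));

        // Action 2: the timer, which calls on_timeout when it expires.
        timer_.expires_from_now(boost::posix_time::seconds(30));
        timer_.async_wait(
            boost::bind(&accept_with_timeout::on_timeout, this,
                        boost::asio::placeholders::error));
    }

private:
    void on_accept(const boost::system::error_code& ec)
    {
        timer_.cancel();                     // a connection arrived in time
        if (!ec) { /* hand socket_ off to a session here */ }
    }

    void on_timeout(const boost::system::error_code& ec)
    {
        if (ec != boost::asio::error::operation_aborted)
            acceptor_.cancel();              // deadline passed: abort the pending accept
    }

    boost::asio::ip::tcp::acceptor acceptor_;
    boost::asio::ip::tcp::socket   socket_;
    boost::asio::deadline_timer    timer_;
};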
Since you're catching exceptions *anyway*, don't bother with error codes unless you're detecting specific values (as you do for eof elsewhere).
I am, I believe, checking for bytes, are bytes available, this sort of thing. Instead of receive blocking when there is no data, which is one plausible use case.
The use above is checking the error code return from synchronous accept; that shouldn't indicate "bytes available", so much as "something went very wrong with the acceptor". That's why I was suggesting that you just use exceptions. Error codes from something like read_until can sometimes give you the information you want, but that's pretty rare. The typical ASIO style is to set things up so your code is "told" when there is data available, without needing to poll data sources.
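As a sketch of being "told" rather than polling, an asynchronous read loop on the session socket; the helper name and buffer type are assumptions, not the posted code.

#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <vector>

void start_read(boost::shared_ptr<boost::asio::ip::tcp::socket> s,
                boost::shared_ptr<std::vector<char> > buf)
{
    buf->resize(1024);
    s->async_read_some(boost::asio::buffer(*buf),
        [s, buf](const boost::system::error_code& ec, std::size_t n)
        {
            if (ec == boost::asio::error::eof) return;   // peer closed cleanly
            if (ec) return;                              // some other error
            boost::asio::write(*s, boost::asio::buffer(*buf, n));  // echo back
            start_read(s, buf);                          // queue the next read; no polling
        });
}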
As soon as I schedule an async_accept, I get a segmentation fault.
I don't know the inner workings of the io_service that well, but I suspect that may be because I have created the server outside the running thread.
Hm... so long as the execution context (in this case, the stack frame in which you allocated the server) is still active, then you should be fine. But if you exit that frame after creating it, then yes, that object has been destructed, and you're asking the io_service to run on garbage memory.
Something you mentioned earlier though, which is starting to click with me: io_service might be the thing I want to run and/or have time out, and just bypass needing to do anything with threads.
It's a little bit different than what I expected and/or am used to.
That kinda sums up the entire "learn to use ASIO" experience for me. :)
Let us know if you manage to get the tutorial examples running, then we can work from there.
Good luck!
Best regards,
Anthony Foiani

Okay, something has obviously changed from 1.53. Which means my
comprehension/usage needs to get educated. How do I know? Glad you
asked.
I had a UDP beacon building and working under 1.53, but when I build
and run it under 1.54, it now fails with a bad file descriptor
exception.
terminate called after throwing an instance of
'boost::exception_detail::clone_impl'
  what():  assign: Bad file descriptor
Aborted
As per my usage, not a clue. Maybe I need to be specifying some
preprocessor definition or something.
Are there any potential snags? Like a dependency on date/time,
threads, something like that, I should be aware of?
Possibly if I address that and get it working for the "simple" basic
UDP writer example I cobbled together, that also addresses the TCP
side of things.
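For that sanity check, a minimal UDP send sketch along the lines of the Asio daytime tutorial; the address and port below are placeholders, and beyond the headers it should only need Boost.System.

#include <boost/asio.hpp>
#include <string>

int main()
{
    boost::asio::io_service io;

    boost::asio::ip::udp::socket socket(io);
    socket.open(boost::asio::ip::udp::v4());

    // Placeholder destination; substitute the beacon's real target.
    boost::asio::ip::udp::endpoint dest(
        boost::asio::ip::address::from_string("127.0.0.1"), 17017);

    const std::string msg = "beacon";
    socket.send_to(boost::asio::buffer(msg), dest);
    return 0;
}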
Participants (2):
- Anthony Foiani
- Michael Powell