Cliff,
Your code is doing that correctly, although perhaps not optimally in design.
On re-reading your code, I'm wondering about sending and receiving in the same (main) thread (yes, I see the io_service is being run in a separate thread) - are all of the datagrams sent before the receiving object can process them? This would (obviously) cause some issues. I'll have to review Asio docs to see how the threading works in this case (the "b" and "a" server objects are created in the same (main) thread). If this turns out to be part (or all) of the problem, move either "a" or "b" to a separate thread.
Actually, putting the 'a' and 'b' object execution into a separate thread solved the issue. Excellent!
Also, consider that you're doing a lot more work when receiving the datagrams, than when you're sending them:
-- Creating a std::string object from the char buf (creates a std::string, copies the data)
-- Pushing a std::string object onto a vector - this will cause another std::string to be created and another buffer copy, as well as some (quite significant) std::string copies when the internal buffer of the vector fills up and a reallocation occurs, causing all of the existing std::string objects to be copied again
To optimize the receiving logic, consider some or all of the following:
-- Use a std::list instead of a std::vector to queue up the incoming data (this will eliminate the reallocation copying)
-- Use a reference-counted string and store those in the container (Chris created a nice shared_const_buffer class in some example code that could be used - I've taken that class, enhanced it, written some unit tests, and am using it extensively).
Thank you for these suggestions.
I doubt that Asio performs any incoming UDP datagram buffering (you can look at the Asio code to verify). It's almost surely the network layer that is discarding the datagrams after the receive buffer fills up.
I see.
This is always going to be the case when datagrams are coming in faster than they can be processed.
It's also a problem with TCP, and a classic problem is that an app is sending data through a TCP connection faster than the receiver can drain / process it. Eventually, the receiving buffer fills up, and TCP flow control kicks in. For blocking TCP I/O, the sending app will block until the receiving app catches up. With async I/O, the sending app completion handler won't be invoked until the data flow catches up (which is another reason why I like async networking much better than blocking I/O).
Indeed... thanks again, Akos