"Jonathan Franklin"
On Mon, Oct 19, 2009 at 2:37 PM, Eric Twietmeyer wrote:
> This current project requires UDP as well as TCP, and it seems to be the UDP communication in particular that is triggering the problem (when we remove the UDP portion, more or less, the problem does not occur).
FWIW, we have an application running on XP that uses ASIO with both UDP and TCP. We talk to a third-party server that uses UDP broadcasts to tell us which TCP server endpoints (multiple) to connect to. We then connect via TCP and do fairly high-throughput data transfer (gig-E speeds) using async_read(). The app constantly receives UDP broadcasts via async_receive_from().
I also wrote a simulator for the third-party server that uses ASIO to do non-blocking UDP broadcasts, asynchronously handle TCP connects, and do synchronous TCP writes.
We have experienced no issues using both TCP and UDP with ASIO within the same app framework.
Jon
Thanks to everyone for the feedback. That gives me much more confidence that this really should be working. In our case the process connects to a "client" via TCP and to a "server" via UDP, and acts basically as a data cacher. As long as the end client keeps requesting data that has not been cached, the UDP pipe is used to fetch more data from the server, which is then passed over TCP to the client. There is therefore high throughput on both the TCP and UDP sides of our application. If the memory corruption does not occur quickly (as is sometimes the case), it is possible to simply feed back already-cached data via TCP, and then the application seems stable. This is why I said it seems to be UDP that is causing the issue: it only occurs when the system is run in such a manner that the client keeps needing new, uncached data.