Hello, I am creating a SIP load tester. The load tester will mimic thousands of SIP end-points registering and making phone calls to a SIP proxy.

Originally I created one udp::socket object for each mimicked SIP end-point, with each socket listening and executing async_receive_from() on its own port. So if I have 1000 SIP end-points, I have 1000 udp::sockets listening, receiving, and writing, and I expect to go up to 20,000 SIP end-points in production. The reason I did this is that when a packet arrives at one of the udp::sockets, I know exactly which SIP end-point it belongs to, which saves the app the hassle of parsing each new SIP packet to figure out which end-point it should be routed to. But I did not take into account any back-end context switching that boost::asio::io_service might be doing, or any other overhead it might add, and I cannot find much information about this.

The alternative would be to have one udp::socket (or maybe a few, each on its own thread, for example) and parse the incoming packets to figure out which SIP end-point each one belongs to.

So it comes down to this: parse incoming SIP messages and use one (or a few) udp::sockets, each on its own thread, versus use many udp::sockets with no SIP message parsing. Does anyone have any insight into the back-end context switching that boost::asio uses to work its magic? Do 20,000 udp::sockets, each listening and receiving, scream out "NO, DO NOT DO THIS"? Any insight is welcome. To make the trade-off concrete, I have sketched both designs below.
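This is roughly the current design, reduced to a minimal sketch. SipEndpoint, handle_packet(), and the port range are illustrative names and values, not my real code; the point is one udp::socket per simulated end-point, each bound to its own port with its own pending async_receive_from():

    #include <boost/asio.hpp>
    #include <array>
    #include <memory>
    #include <vector>

    using boost::asio::ip::udp;

    struct SipEndpoint {
        SipEndpoint(boost::asio::io_service& io, unsigned short port)
            : socket_(io, udp::endpoint(udp::v4(), port)) {
            start_receive();
        }

        void start_receive() {
            socket_.async_receive_from(
                boost::asio::buffer(buf_), sender_,
                [this](const boost::system::error_code& ec, std::size_t n) {
                    if (ec == boost::asio::error::operation_aborted) return;
                    // The packet is known to belong to *this* end-point,
                    // so no SIP parsing is needed to route it.
                    if (!ec) handle_packet(n);
                    start_receive();  // re-arm the asynchronous read
                });
        }

        void handle_packet(std::size_t /*n*/) { /* process the SIP message */ }

        udp::socket socket_;
        udp::endpoint sender_;
        std::array<char, 4096> buf_;
    };

    int main() {
        boost::asio::io_service io;
        std::vector<std::unique_ptr<SipEndpoint>> endpoints;
        for (unsigned short port = 20000; port < 21000; ++port)  // 1000 end-points
            endpoints.push_back(std::make_unique<SipEndpoint>(io, port));
        io.run();  // one thread services every socket's completion handlers
    }

Note that io.run() here is single-threaded, so all of the completion handlers are dispatched from that one call; there is no thread per socket in my code, which is why I am unsure what the io_service is doing behind the scenes with that many sockets.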
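And this is a sketch of the single-socket alternative, under the assumption that each packet can be routed by its SIP Call-ID header. parse_call_id() is a deliberately naive stand-in for a real parser, and Endpoint is a placeholder for whatever per-end-point state I keep:

    #include <boost/asio.hpp>
    #include <algorithm>
    #include <array>
    #include <string>
    #include <unordered_map>

    using boost::asio::ip::udp;

    struct Endpoint { /* per-end-point SIP state */ };

    // Naive placeholder: find the Call-ID header value in the datagram.
    std::string parse_call_id(const char* data, std::size_t n) {
        static const char key[] = "Call-ID:";
        const char* end = data + n;
        const char* p = std::search(data, end, key, key + sizeof(key) - 1);
        if (p == end) return {};
        p += sizeof(key) - 1;
        while (p < end && *p == ' ') ++p;
        const char* e = p;
        while (e < end && *e != '\r' && *e != '\n') ++e;
        return std::string(p, e);
    }

    struct Demux {
        Demux(boost::asio::io_service& io, unsigned short port)
            : socket_(io, udp::endpoint(udp::v4(), port)) {
            start_receive();
        }

        void start_receive() {
            socket_.async_receive_from(
                boost::asio::buffer(buf_), sender_,
                [this](const boost::system::error_code& ec, std::size_t n) {
                    if (ec == boost::asio::error::operation_aborted) return;
                    if (!ec) {
                        // One hash lookup replaces the implicit per-port
                        // routing of the many-sockets design.
                        auto it = endpoints_.find(parse_call_id(buf_.data(), n));
                        if (it != endpoints_.end()) { /* dispatch to it->second */ }
                    }
                    start_receive();
                });
        }

        udp::socket socket_;
        udp::endpoint sender_;
        std::array<char, 4096> buf_;
        std::unordered_map<std::string, Endpoint> endpoints_;
    };

So the second design trades 20,000 sockets for one socket plus a header scan and a hash lookup per packet.

thanks
jose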