
Rene Rivera wrote: [...]
This means that since I'm writing my own router I need the best performance I can get. But to give some minimal concrete numbers... Given a single gigabit line (a real server is likely to have multiple such lines) and the most favorable situation of handling full 64K UDP packets, one would need to handle about 1500 messages a second, or roughly 2/3 of a millisecond per message. In practice it would have to be faster than that, which means minimizing the most expensive steps, i.e. memory allocations.
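(Spelling out that back-of-the-envelope arithmetic, under the simplifying assumptions of a raw 1 Gbit/s line rate and no framing or protocol overhead:

    1 Gbit/s          = 125,000,000 bytes/s
    full UDP datagram = 65,536 bytes
    125,000,000 / 65,536 ≈ 1,900 datagrams/s, i.e. roughly 0.5 ms per message

so the ~1500 messages/s figure above corresponds to leaving a little headroom for overhead.)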
Currently, AFAICT, in order to use the async operations asio requires a model which allocates a new handler for each message received. This might be fine for many situations where those handlers change from message to message. But for my use case I have only one handler that needs to get called for *any* message that comes in. In the asio case this means that for each message received it would: remove the handler from the demuxer map, call the handler (which would do my custom parsing and routing), and then post a new async_receive on itself (which creates a new handler object and inserts it into the demuxer map again). This is clearly suboptimal and will result in considerable performance degradation. For some concrete code, one can look at the Daytime.6 tutorial, which does basically that procedure.
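For reference, here is a minimal sketch of the receive loop being described, in the style of that tutorial. It is written against the current Boost.Asio spelling (io_context and boost::asio::ip::udp, where the asio under review uses a demuxer), and the class, handler, and port names are purely illustrative. The point is that every trip around the loop constructs and registers a fresh handler object:

#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <cstddef>

using boost::asio::ip::udp;

class udp_router
{
public:
  udp_router(boost::asio::io_context& io, unsigned short port)
    : socket_(io, udp::endpoint(udp::v4(), port))
  {
    start_receive();
  }

private:
  void start_receive()
  {
    // Each call creates a new handler object and registers it with the
    // demuxer/io_context -- the per-message allocation objected to above.
    socket_.async_receive_from(
        boost::asio::buffer(data_, max_length), sender_,
        boost::bind(&udp_router::handle_receive, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
  }

  void handle_receive(const boost::system::error_code& ec, std::size_t bytes)
  {
    if (!ec)
    {
      // ... custom parsing and routing of data_[0..bytes) would go here ...
    }
    start_receive(); // re-arm: another handler is allocated and inserted
  }

  enum { max_length = 65536 };
  udp::socket socket_;
  udp::endpoint sender_;
  char data_[max_length];
};

int main()
{
  boost::asio::io_context io;
  udp_router router(io, 30001); // port number is arbitrary for the sketch
  io.run();
}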
Two things come to mind...
1. What inefficiencies are inherent to the design, and what are simply implementation details?
2. Did you try to use the library?
No.
It'd probably be helpful if you created a simple throughput test and posted the results, so that there is a clear target for Asio to match or exceed. This would also validate your objections, which currently seem to be based on intuition. :-)
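As a starting point, such a test could be as simple as the sketch below: a blocking receive loop that counts 64K datagrams, to be paired with a sender that saturates the line. The port number and reporting interval are arbitrary, and a fair comparison would also need an equivalent loop built on async_receive_from:

#include <boost/asio.hpp>
#include <chrono>
#include <cstdio>

int main()
{
  using boost::asio::ip::udp;

  boost::asio::io_context io;
  udp::socket socket(io, udp::endpoint(udp::v4(), 30001));

  static char data[65536];
  udp::endpoint sender;

  std::size_t count = 0;
  auto start = std::chrono::steady_clock::now();
  for (;;)
  {
    // Blocking receive: no handler allocation, no demuxer involvement.
    socket.receive_from(boost::asio::buffer(data), sender);

    if (++count % 100000 == 0)
    {
      double secs = std::chrono::duration<double>(
          std::chrono::steady_clock::now() - start).count();
      std::printf("%zu messages in %.1f s (%.0f msgs/s)\n",
          count, secs, count / secs);
    }
  }
}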