On Wed, May 18, 2022 at 12:16 PM Ruben Perez via Boost
The trouble here is that when you call connection::query, the MySQL server sends the entire resultset up front. So you will have to read those packets even if you intend to discard them; otherwise, you won't be able to use the connection for anything else.
Prepared statements allow fetching a limited number of rows at a time, using cursors. This is not implemented yet, and is tracked by https://github.com/anarthal/mysql/issues/20. Even then, you have to explicitly ask the server for each batch ("give me 5 rows"), and you must read all 5 of those rows before going further.
LibPQ (and, I assume, the protocol beneath it) is similar: you must wait for and consume the whole resultset of a prepared statement, whereas with a cursor you fetch batches of your chosen size(s).

But at least with libpq, it turns out the cursor approach is slower overall, in my testing. Cursors give you faster time-to-first-row (in sync mode, at least), and libpq doesn't need to accumulate a large resultset in its own memory, since you can process and discard the smaller batches. You pay for that by being up to 2x slower overall, though, probably because of the increased round trips (the slowdown likely depends on the batch sizes too).

In your case you are async, so unlike libpq in sync mode you get good time-to-first-row either way, and can still process rows as they arrive. Even so, I'd double-check the overhead of statement vs. cursor, with different batch sizes, on a larger resultset, in your performance testing. That may be straying a bit outside the scope of your library; but since you clearly have high performance in scope, tradeoffs like these, even though they are outside your control, are still important to benchmark and to mention in the docs IMHO. My $0.02.