On Mon, Jun 26, 2017 at 7:47 AM, Niall Douglas via Boost wrote:
> If you have a severe algorithmic flaw in your implementation,
> reviewers would be right to reject your library.
If you are stating that Beast has a "severe algorithmic flaw" then
please back up your claims with more than opinion. However, note the
following:
* At the time Boost.Http was reviewed, it used the NodeJS parser,
which operates in chunks [1]. No "severe algorithmic flaw" came up
then.
* PicoHTTPParser, on which Beast's parser is based, outperforms the
NodeJS parser by over 600% [2].
* For parsers operating on discontiguous buffers, structured
elements such as the request-target, field names, and field values
must be flattened (linearized) before they can be presented to the
next layer, which requires temporary storage and buffer copying
[3, 4]. Buffer copies therefore cannot be avoided; Beast chooses to
do one big buffer copy up front instead of many small buffer copies
as it goes, and the evidence shows this tradeoff is advantageous
(see the sketch below).
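To make the tradeoff concrete, here is a minimal sketch of the
up-front linearization strategy. This is not Beast's actual code;
the buffer_sequence alias and the flatten function are hypothetical
names used for illustration only.

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // A discontiguous sequence of input buffers, e.g. the result
    // of a scatter/gather read. Hypothetical, for illustration.
    using buffer_sequence =
        std::vector<std::pair<char const*, std::size_t>>;

    // One big copy up front: the parser then sees a single
    // contiguous buffer and can return views into it without
    // any further copying.
    std::string flatten(buffer_sequence const& bs)
    {
        std::size_t total = 0;
        for(auto const& b : bs)
            total += b.second;
        std::string out;
        out.reserve(total);
        for(auto const& b : bs)
            out.append(b.first, b.second);
        return out;
    }

    // The alternative, parsing each buffer in place, still has
    // to copy any element that straddles a buffer boundary (a
    // field name split across two reads, say) into temporary
    // storage before handing it to the next layer: many small
    // copies in the hot path instead of one large copy up front.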
But maybe you are suggesting that functions like basic_fields::insert
should take as their first parameter `gsl::span