
On Mon, Jun 26, 2017 at 7:47 AM, Niall Douglas via Boost <boost@lists.boost.org> wrote:
> If you have a severe algorithmic flaw in your implementation,
> reviewers would be right to reject your library.
If you are stating that Beast has a "severe algorithmic flaw" then please back up your claims with more than opinion. However, note the following:

* At the time Boost.Http was reviewed, it used the NodeJS parser, which operates in chunks [1]. No "severe algorithmic flaw" came up then.

* PicoHTTPParser, which Beast's parser is based on, outperforms the NodeJS parser by over 600% [2].

* For parsers operating on discontiguous buffers, structured elements such as the request-target, field names, and field values must be flattened (linearized) before they can be presented to the next layer, which means temporary storage and buffer copying [3, 4]. So buffer copies cannot be avoided. Beast makes the decision to do one big buffer copy up front instead of many small buffer copies as it goes, and the evidence shows this tradeoff is advantageous. (A sketch contrasting the two approaches follows the links below.)

But maybe you are suggesting that functions like basic_fields::insert should take `gsl::span<gsl::span<char>>` as their first parameter instead of `string_view` [5]? That would be quite inconvenient, as the second sketch below illustrates.

[1] https://github.com/nodejs/http-parser
[2] https://github.com/fukamachi/fast-http/tree/6b9110347c7a3407310c08979aefd650...
[3] https://github.com/vinniefalco/Beast/blob/8982e14aa65b9922ac5a00e5a5196a08df...
[4] "In case you parse HTTP message in chunks...your data callbacks may be called more than once" https://github.com/nodejs/http-parser/blob/master/README.md#callbacks
[5] https://github.com/vinniefalco/Beast/blob/8982e14aa65b9922ac5a00e5a5196a08df...
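To make the flattening point concrete, here is a minimal sketch. This is not Beast's actual code; the type and function names (value_assembler, flatten_then_parse) are made up for illustration. It shows the two ways a push-style parser can hand a field value that straddles a chunk boundary to the next layer:

    #include <string>
    #include <string_view>

    // Hypothetical push-style parser callback interface. When a header
    // value straddles a chunk boundary, on_value_piece() fires once per
    // fragment [4], so producing a contiguous view requires copying into
    // temporary storage: the "many small copies as it goes" approach.
    struct value_assembler
    {
        std::string storage_; // temporary storage for linearization

        void on_value_piece(std::string_view piece)
        {
            storage_.append(piece.data(), piece.size()); // copy per piece
        }

        std::string_view on_value_complete() const
        {
            return storage_; // flat view handed to the next layer
        }
    };

    // The alternative Beast chooses: append each incoming chunk to one
    // flat buffer first (the single big copy), then parse in place, so
    // every element the parser produces is a zero-copy view into it.
    inline void flatten_then_parse(std::string& flat, std::string_view chunk)
    {
        flat.append(chunk.data(), chunk.size()); // the one up-front copy
        // parse(std::string_view(flat));        // views point into 'flat'
    }

Either way a copy happens; the only question is whether you pay for it once up front or piecemeal on every structured element.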
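And to illustrate why a discontiguous first parameter would be inconvenient, compare the two declarations below. The signatures are paraphrased, not Beast's real prototypes, and I am using std::span (C++20) in place of gsl::span:

    #include <span>
    #include <string_view>

    // Contiguous form, in the spirit of basic_fields::insert [5]:
    // trivial to call with a literal or any contiguous buffer.
    inline void insert(std::string_view name, std::string_view value) {}

    // Hypothetical discontiguous form: every caller must gather the
    // fragments into an array of spans first, even when the data is
    // already contiguous.
    inline void insert(std::span<const std::span<const char>> name,
                       std::span<const std::span<const char>> value) {}

    void example()
    {
        insert("Server", "Beast"); // string_view form: done

        // span-of-spans form: two extra arrays to say the same thing
        std::span<const char> n[] = { std::span<const char>("Server", 6) };
        std::span<const char> v[] = { std::span<const char>("Beast", 5) };
        insert(n, v);
    }

Every caller pays that syntactic tax on every call, just so the rare discontiguous case can avoid one copy.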