On Sun, Jul 2, 2017 at 10:54 AM, Bjorn Reese via Boost wrote:
> If the above is used to read a chunked transfer, what happens to chunk-ext fields? Are they inserted as header fields or are they lost?
Chunk extensions are not valid HTTP headers. `beast::http::parser` does not store them in the `basic_fields`. It doesn't store them at all; they are simply discarded.
From https://tools.ietf.org/html/rfc7230#section-4.1.1: "The chunked encoding is specific to each connection and is likely to be removed or recoded by each recipient (including intermediaries) before any higher-level application would have a chance to inspect the extensions. Hence, use of chunk extensions is generally limited to specialized HTTP services such as "long polling" (where client and server can have shared expectations regarding the use of chunk extensions) or for padding within an end-to-end secured connection."
To my understanding, chunk extensions are a rare, niche use case, meaningful only to applications that share a custom interpretation at each end of the connection. In fact, five years ago the IETF almost deprecated them: https://trac.ietf.org/trac/httpbis/ticket/343 Beast doesn't go out of its way to help you get at the extensions, but it also doesn't make it impossible. On the other hand, I do not have significant expertise with HTTP servers; if a compelling use case presents itself, this is an aspect of the library that may be improved in a backward-compatible way.
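For illustration only, here is a minimal sketch of one way an application could observe the extensions, assuming the `on_chunk_header` hook found in current Boost.Beast (the headers, namespaces, and callback signature below are my assumptions and may differ from the version discussed in this thread):

```cpp
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <cstdint>
#include <iostream>

namespace http = boost::beast::http;

// Sketch: read a chunked response while logging each chunk's extensions
// through a chunk-header callback (assumed API from later Boost.Beast).
void read_logging_extensions(boost::asio::ip::tcp::socket& sock)
{
    boost::beast::flat_buffer buffer;
    http::response_parser<http::string_body> p;

    // The callback is held by reference, so it must outlive the read.
    auto on_header = [](std::uint64_t size,
                        boost::beast::string_view extensions,
                        boost::system::error_code& ec)
    {
        std::cout << "chunk of " << size
                  << " bytes, extensions: " << extensions << "\n";
        // Setting ec to a failure here would abort the parse.
        (void)ec;
    };
    p.on_chunk_header(on_header);

    http::read(sock, buffer, p); // reads the complete message
}
```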
> Given that this is an example of incremental reading, why does it use read() rather than read_some()?
The contracts for those functions are as follows:

* `read` continues until there's an error or the message is complete.
* `read_some` continues until it gets at least one byte, an error occurs, or the message is complete.

In the example I posted the behaviors are very similar, since the buffer has a 512-byte limit. Either would work.
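As a hedged sketch of the `read_some` style, here is one way to pull the body in pieces with a size-capped buffer, using the `buffer_body` pattern from current Boost.Beast (the names, sizes, and error handling below are my own choices, not the exact example from the earlier post):

```cpp
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/system/system_error.hpp>
#include <cstddef>

namespace http = boost::beast::http;

// Sketch: read the body incrementally with read_some, keeping the
// dynamic buffer capped at 512 bytes as in the example under discussion.
void read_in_pieces(boost::asio::ip::tcp::socket& sock)
{
    boost::beast::flat_buffer buffer{512};          // hard cap on buffered bytes
    http::response_parser<http::buffer_body> p;

    http::read_header(sock, buffer, p);             // consume the header first
    while(! p.is_done())
    {
        char chunk[512];
        p.get().body().data = chunk;
        p.get().body().size = sizeof(chunk);

        boost::system::error_code ec;
        http::read_some(sock, buffer, p, ec);       // returns after >= 1 byte, an error, or completion
        if(ec == http::error::need_buffer)
            ec = {};                                // expected: our chunk[] filled up
        if(ec)
            throw boost::system::system_error{ec};

        std::size_t const n = sizeof(chunk) - p.get().body().size;
        // chunk[0..n) now holds the next piece of the body; process it here.
        (void)n;
    }
}
```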