On Mon, Sep 21, 2020 at 2:04 PM Harshada Wighe via Boost < boost@lists.boost.org> wrote:
5. Compared to our in-house JSON library, this library was slightly slower. We have very simple but very large JSON data: objects and arrays containing only integer numbers with no fractional part and very large ASCII strings, with no booleans and no nulls. In one of our standalone conversion tools we tried using this library and found it had a 4% longer runtime on a small 170 MB file.
The parser allocates too much memory:
6. I think this is because the design is not optimized for the case when all the JSON content is already stored in memory. In our parser, we point to the strings in the original buffer.
What you describe here is in-situ parsing. This is not what Boost.JSON aims to provide, nor is it feasible for an incremental parser (in the vast majority of use cases). As an aside, I'm quite surprised that Boost.JSON performed so well against an in-situ parser. If your in-house library does not validate UTF-8 byte sequences, consider disabling that validation for Boost.JSON in your benchmark as well; it could result in a considerable increase in performance.