On Sun, Sep 22, 2019 at 8:06 PM Vinnie Falco via Boost < boost@lists.boost.org> wrote:
[...] I have developed a brand-new JSON library for this project,
in accordance with the following design goals:
* Robust support for custom allocators throughout.
* Array and object interfaces closely track their corresponding C++20 container equivalents.
* Use `std::basic_string` for strings.
* Minimize use of templates for reduced compilation times.
* Parsers and serializers work incrementally ("online algorithms").
* Elements in objects may also be iterated in insertion order.
Hi,

What about performance? Have you heard of https://github.com/lemire/simdjson? Where would your library fall in the benchmark at the above link? The already mentioned and popular nlohmann/json has a convenient API, but fares poorly in that benchmark, for example.

Also, in client/server communications, the workload is less often a few huge JSON documents and more often lots of small ones, so the constant "startup" cost of the parser matters too. In the same vein, a pull parser that allows building the native data structures directly, rather than the DOM-like approach of fully converting the document into a built-in JSON object and then converting that into the native structure, avoids the temporary document, which is especially useful for large documents.

Finally, there are many corner cases in JSON parsing. Is there an equivalent of Autobahn (the WebSocket test suite) for JSON parsing? Any plans to integrate with such infrastructure, assuming one exists?

Thanks,
--DD