aedis review starts today

The formal review of the aedis starts today (15th) and ends next week on Tuesday (24th). The library was developed & submitted by Marcelo.

Aedis
=====

Aedis is a Redis client library built on top of Boost.Asio that implements the latest version of the Redis communication protocol RESP3. Some of its distinctive features are

* A connection class that provides high-level functions to execute Redis commands and receive server pushes concurrently.
* Automatic pipelining of commands.
* Support for STL containers and serialization of user-defined data types.
* Low-latency: Responses can be read in their final data-structure without creating copies.

Aedis links
===========

Github:   https://github.com/mzimbres/aedis
Homepage: https://mzimbres.github.io/aedis/
Git tag:  v1.4.1
Tarball:  https://github.com/mzimbres/aedis/archive/refs/tags/v1.4.1.tar.gz

Redis Links
===========

Redis:          https://redis.io/
RESP3:          https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md
Pipelines:      https://redis.io/docs/manual/pipelining/
Redis commands: https://redis.io/commands/

Local Redis
===========

Redis can be also easily installed on your local machine, for example, on Debian run "apt install redis". You might also want to use docker: https://www.docker.com/blog/how-to-use-the-redis-docker-official-image/

Redis server for the Boost review
=================================

We also have a public aedis server available for testing, please contact me or Marcelo for the details.

Review
======

Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost. Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.

The review is open to anyone who is prepared to put in the work of evaluating and reviewing the library. Prior experience in contributing to Boost reviews is not a requirement.

Some questions to consider:

- Does this library bring real benefit to C++ developers for real world use-case?
- Do you have an application for this library?
- Does the API match with current best practices?
- Is the documentation helpful and clear?
- Did you try to use it? What problems or surprises did you encounter?
- What is your evaluation of the implementation?

Additionally, stating your knowledge of redis and asio would be helpful for me.

More information about the Boost Formal Review Process can be found at: http://www.boost.org/community/reviews.html

Reviews can also be submitted privately to me, and I will not disclose who sent them. I might however cite passages from those reviews in my final decision.

Thanks to Marcelo for providing us with a new library & thanks for all the reviews in advance.

On Sun, Jan 15, 2023 at 7:17 AM Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
The formal review of the aedis starts today (15th) and ends next week on Tuesday (24th).
I have questions. Why is async_run needed? Why doesn't the library just keep an operation running in the background automatically for the user? Thanks

On Tue, 17 Jan 2023 at 14:38, Vinnie Falco via Boost <boost@lists.boost.org> wrote:
I have questions. Why is async_run needed?
It was introduced to coordinate reads on the socket:

1. When a server push is received, async_run gives IO control to connection::async_receive so it can read the push from the socket.

2. When the response to a request arrives, it gives IO control to the next pending connection::async_exec.

3. Notice that in order to support pushes we need a way to keep reading from the socket; the question that still must be answered is why we need both async_run and async_receive (see below).

async_run is also used to coordinate writes on the socket:

4. async_run will also spawn an operation that writes pending requests to the socket when it becomes possible. For example, you call async_exec to execute a command but there is an ongoing write from a previous call, or we are waiting for a response to arrive and can't write yet. The implementation will suspend the async_exec call until the request can be written and its response has arrived.
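To make the coordination concrete, the intended shape is roughly the following (a sketch only; the exact overloads, adapter arguments and completion tokens here are simplifications of mine, not the precise API):

    // One coroutine keeps the I/O loop alive; it completes only on error
    // or disconnection.
    asio::awaitable<void> run(std::shared_ptr<connection> conn)
    {
        co_await conn->async_run(asio::use_awaitable);
    }

    // Any number of callers enqueue requests concurrently; each async_exec
    // suspends until its own response has been read off the socket.
    asio::awaitable<void> ping(std::shared_ptr<connection> conn)
    {
        resp3::request req;
        req.push("PING");

        std::tuple<std::string> resp;
        co_await conn->async_exec(req, adapt(resp), asio::use_awaitable);
    }

    // A third coroutine consumes server pushes so the connection never
    // stalls waiting for someone to read them.
    asio::awaitable<void> receive_pushes(std::shared_ptr<connection> conn)
    {
        std::vector<resp3::node<std::string>> resp;
        for (;;)
        {
            co_await conn->async_receive(adapt(resp), asio::use_awaitable);
            resp.clear();
        }
    }

    // The three would typically be co_spawn'ed on the same executor.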
Why doesn't the library just keep an operation running in the background automatically for the user?
5. Because there wouldn't be a good way to communicate write errors to the user; for example, if async_write fails, async_run will complete with the error reported by async_write.

6. Because the connection can be reconnected: async_run can be called in a loop by the user to proceed with the execution of the pending async_execs after a disconnection. This is meant to support quick and efficient failover. Offering this functionality as a built-in feature might bloat the class.

Some questions that might arise are:

* Why do we need a connection that can be reconnected? Because a connection that is resilient against failovers and disconnections in general makes user code a lot simpler; you won't have to implement retries on the user side, see [3] for example. And because it offers better performance: imagine we have a chat server with 100k WebSocket connections and the admin wants to force a failover to another Redis instance. Without this feature we would have 100k sessions throwing, catching and retrying, which is concerning IMO.

* Why don't you merge async_run and async_receive into a single function? That is possible but would make the code more complex AFAICS. The separation between async_run and async_receive makes it possible to have a clean reconnect loop like [1] (sketched at the end of this message) and a clean loop that receives pushes like [2].

* Why don't you trigger writes from async_exec instead of spawning a write operation from async_run? I found it harder to implement it like this; perhaps I am missing some async primitives like those from the asem library. At the least, I think there would be a performance hit, for reasons we can discuss later.

[1] https://github.com/mzimbres/aedis/blob/c88fcfb9edbaca3cc90e761d8b3087c0eed4a...
[2] https://github.com/mzimbres/aedis/blob/c88fcfb9edbaca3cc90e761d8b3087c0eed4a...
[3] https://github.com/mzimbres/aedis/blob/c88fcfb9edbaca3cc90e761d8b3087c0eed4a...
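For reference, the reconnect loop in [1] is roughly of this shape (a sketch; resolve/connect details and the exact async_run overload are elided):

    asio::awaitable<void> reconnect_loop(std::shared_ptr<connection> conn)
    {
        asio::steady_timer timer{co_await asio::this_coro::executor};
        boost::system::error_code ec;
        for (;;)
        {
            // ... resolve and connect the underlying stream here ...

            // async_run completes when the connection is lost; pending
            // async_execs stay queued and are written again after the
            // next successful reconnection.
            co_await conn->async_run(asio::redirect_error(asio::use_awaitable, ec));

            timer.expires_after(std::chrono::seconds{1});
            co_await timer.async_wait(asio::use_awaitable);
        }
    }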

The formal review of the aedis starts today (15th) and ends next week on Tuesday (24th).
My quick comment so far from skimming the documentation (whose presentation is awesome for pure Doxygen output + CSS) is that the names from_bulk and to_bulk are too generic for ADL customization points. They need to be renamed to something less likely to conflict. I suggest aedis_from_bulk and aedis_to_bulk which should be distinctive enough, but other variations are possible.

On Tue, 17 Jan 2023 at 15:34, Peter Dimov via Boost <boost@lists.boost.org> wrote:
I suggest aedis_from_bulk and aedis_to_bulk which should be distinctive enough, but other variations are possible.
Good point, thanks. I opened an issue to track this: https://github.com/mzimbres/aedis/issues/42. Marcelo

On Sun, Jan 15, 2023 at 9:17 AM Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
[snip]
Review ========
Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost. Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.
The review is open to anyone who is prepared to put in the work of evaluating and reviewing the library. Prior experience in contributing to Boost reviews is not a requirement.
Some questions to consider:
- Does this library bring real benefit to C++ developers for real world use-case?
I think so, yes.
- Do you have an application for this library?
I do not.
- Does the API match with current best practices?
No. In particular, the API requires the user to write lots of Redis strings by hand. A typo will not be detected until runtime. The set of Redis keywords is fixed; there's no reason not to make an enumeration of them all, like this:

    enum class cmd {
        ACL_CAT,     // List the ACL categories or the commands inside a category
        ACL_DELUSER, // Remove the specified ACL users and the associated rules
        ACL_DRYRUN,  // Returns whether the user can execute the given command
                     // without executing the command.
        ACL_GENPASS, // Generate a pseudorandom secure password to use for ACL users
        ACL_GETUSER, // Get the rules for a specific ACL user
        ACL_LIST,    // List the current ACL rules in ACL config file format
        ACL_LOAD,    // Reload the ACLs from the configured ACL file
        ACL_LOG,     // List latest events denied because of ACLs in place
        // ... etc.
    };

I actually did all of them to see how much effort it would take. It took me about 20 or 30 minutes, working from the list on the Redis website (I just copy/pasted the commands listed at https://redis.io/commands/).

Further, when enqueuing commands, there's no checking that a command and its arguments make sense together. For instance, here is some example code from the docs:

    resp3::request req;
    req.push("HELLO", 3);
    req.push("HGETALL", "hset-key");
    req.push("QUIT");

If I change that last line to:

    req.push("QUITE");

or to:

    req.push("QUIT", "oops");

or even to:

    auto it = 42, getting = 13;
    req.push("QUIT", this, is, getting, "ridiculous");

... the code is well formed, but nonsensical. I'd like the lib to know that "QUITE" isn't a thing, and that "QUIT" takes no arguments, and fail to compile if I mess any of this up via typo or copy/paste.

One possible fix would be to change resp3::request::push*() to take a non-type template parameter that can be used to check the args:

    template<cmd command, class... Ts>
    void push(Ts const &... args)
    {
        static_assert(args_are_compatible_v<command, std::remove_cvref_t<Ts>...>);
        // Do the real work here...
    }

    template<cmd command, class Key, class ForwardIterator>
    void push_range(
        Key const & key,
        ForwardIterator begin,
        ForwardIterator end,
        typename std::iterator_traits<ForwardIterator>::value_type * = nullptr)
    {
        static_assert(key_and_range_are_compatible_v<
            command, std::remove_cvref_t<Key>, ForwardIterator>);
        // Do the real work here...
    }

A more thorough-going change that I would like even more would be this:

    template<cmd c, typename... Params>
    struct command_t
    {
        static constexpr cmd command = c;

        command_t(Params &&... params) : params_{(Params &&) params} {}

        std::tuple<value_or_reference_wrapper_t<Params>...> params_;
    };

    std::string_view string_name(cmd c)
    {
        return /* mapping from cmd to strings must exist somewhere, obvs. */;
    }

    // value_or_reference_wrapper_t should produce a reference_wrapper<T> when
    // std::is_reference_v<T> is true, and T otherwise.  IOW, move rvalue
    // references to small things like ints and std::string_views into the
    // command_t, and refer to lvalue references like std::vector<int> const &
    // (examples below).

    inline constexpr struct HELLO_t
    {
        using return_type = /* whatever HELLO returns from Redis, as a C++ type */;
        // HELLO might need multiple overloads, and/or to derive from command_t<...>
        // (see below); the https://redis.io/commands description says it takes
        // optional parameters.
        auto operator()(int param) const
        {
            return command_t<cmd::HELLO, int>(std::move(param));
        }
    } HELLO;

    inline constexpr struct HGETALL_t
    {
        using return_type = /* whatever HGETALL returns from Redis, as a C++ type */;
        auto operator()(std::string_view param) const
        {
            return command_t<cmd::HGETALL, std::string_view>(std::move(param));
        }
    } HGETALL;

    // "nullary" commands should inherit from command_t<cmd::NAME> to avoid
    // superfluous parens at the call site.
    inline constexpr struct QUIT_t : command_t<cmd::QUIT>
    {
        using return_type = /* whatever QUIT returns from Redis, as a C++ type */;
    } QUIT;

    // etc.  All the commands need this treatment.  If you want to keep the result
    // type flexible, you can make return_type a kind-enumerator instead.  For
    // instance, instead of std::vector<int> it might be
    // return_kind::number_array, where return_kind is some enumeration.  The
    // kind-enumerator would need to be static constexpr.  Though this looks
    // like a lot of work, you can pretty easily make macros that do all this.

    template<cmd c, typename... Params>
    void resp3::request::push(command_t<c, Params...> command)
    {
        // Process the command as the code previously did, except now the params
        // *must* match the command, because of how the command_t is constructed
        // at the call site.
    }

    void user_code_1()
    {
        resp3::request req;
        req.push(HELLO(3));
        req.push(HGETALL("hset-key"));
        req.push(QUIT);
    }

    // Even better, we can write this directly:
    void user_code_v2()
    {
        // No need to allocate a request object for smallish requests; you only
        // need to allocate the underlying buffer that the bytes are serialized
        // into.  If you insist, you can always overload with an initial allocator
        // param.
        co_await conn->async_exec_request(HELLO(3), HGETALL("hset-key"), QUIT);
    }

    void user_code_2()
    {
        using exec_resp_type = std::tuple<
            std::optional<std::string>,                       // get
            std::optional<std::vector<std::string>>,          // lrange
            std::optional<std::map<std::string, std::string>> // hgetall
        >;

        std::tuple<
            aedis::ignore,  // multi
            aedis::ignore,  // get
            aedis::ignore,  // lrange
            aedis::ignore,  // hgetall
            exec_resp_type  // exec
        > resp;

        // Since each command is a single object, we can also do this:
        co_await conn->async_exec_transaction(
            MULTI,
            GET("key1"),
            LRANGE("key2", 0, -1),
            HGETALL("key3"),
            EXEC,
            resp);
        // resp might need to be the first param, but you get the idea.

        // As before, we don't actually need to allocate a request object, just
        // the buffer of bytes that it would normally fill.

        // We can also use the statically-available return type (or return kind
        // enumerator) of each arg, and knowledge of the fact that this is a
        // transaction, to static_assert that the type of resp is compatible.
    }

    // Even better, we can use the fact that resp is the only meaningful value to
    // allow the user to write this instead:
    void user_code_2_v2()
    {
        using exec_resp_type = std::tuple<
            std::optional<std::string>,                       // get
            std::optional<std::vector<std::string>>,          // lrange
            std::optional<std::map<std::string, std::string>> // hgetall
        >;

        exec_resp_type resp;
        co_await conn->async_exec_transaction(
            MULTI,
            GET("key1"),
            LRANGE("key2", 0, -1),
            HGETALL("key3"),
            EXEC,
            resp);
    }

These are the kinds of easy-to-use-correctly and hard-to-use-incorrectly interfaces that I would expect of such a library. I think the marshalling/unmarshalling and handling of ASIO parts of the existing design make my life easier, but with raw string literals in the interface, I consider the library interface work only half-done.
- Is the documentation helpful and clear?
It is geared more towards users who already know all about Redis, and it is more reference-oriented. There is a real lack of the tutorial-style docs that Boost is known for.

Boost reaches a really wide audience. There are a lot of people who might not be very DB-savvy, who might want to use Aedis, since "Hey, I already have Boost, why not use this DB library?" There are no docs for such users.

For Redis aficionados, there should also be a step-by-step guide to how to do real work with the lib. The examples seem pretty good. I think one or more of them need to be integrated into the docs as a tutorial. There are lots of examples of other Boost libs that do this.

The reference documentation is often too spare. For instance:

https://mzimbres.github.io/aedis/classaedis_1_1resp3_1_1request.html#a9951b1...

I must convert data to a string. Are there any requirements on the string? Can it have internal nulls, characters requiring escaping, etc.? I don't necessarily think that the docs need to talk about this in the reference section, but since this is not covered in the also-spare tutorial section, how am I supposed to know what the string requirements are?

The Serialization section in https://mzimbres.github.io/aedis/index.html is similarly problematic. This is the example:

    // User defined type.
    struct mystruct {...};

    // Serialize it in to_bulk.
    void to_bulk(std::pmr::string& to, mystruct const& obj)
    {
        std::string dummy = "Dummy serializaiton string.";
        aedis::resp3::to_bulk(to, dummy);
    }

I was baffled by this at first reading. Why is obj there? It is unused. If "Dummy serializaiton string." were replaced with /* stringify obj here */, the example makes sense. I did not understand that I was supposed to do that until I read the reference mentioned earlier.

There are a lot of things like this in the docs -- more than I have time to list. I don't think the docs are up to the Boost standard.
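Here is roughly what I expected the example to say (a sketch of my own with an invented mystruct member, not the actual docs):

    // Sketch: stringify obj instead of a dummy constant.  The 'value'
    // member is made up for illustration.
    struct mystruct { int value = 0; };

    void to_bulk(std::pmr::string& to, mystruct const& obj)
    {
        aedis::resp3::to_bulk(to, std::to_string(obj.value));
    }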
- Did you try to use it? What problems or surprises did you encounter?
I did not.
- What is your evaluation of the implementation?
I did not look at much of the implementation. What little I did see seemed reasonable.
Additionally, stating your knowledge of redis and asio would be helpful for me.
I've used ASIO a lot, and never used Redis.

I vote to REJECT. If the library could be refactored so that the user-facing interface was more friendly, I would change my vote.

Zach

On Tue, 17 Jan 2023 at 20:39, Zach Laine via Boost <boost@lists.boost.org> wrote:
No. In particular, the API requires the user to write lots of Redis strings by hand. A typo will not be detected until runtime. The set of Redis keywords is fixed; there's no reason not to make an enumeration of them all, like this:
I had enums in the past but I had to remove them for a couple of reasons.

1. There are a multitude of possible commands in the Redis stack, for example

   * Redis search: https://redis.io/commands/?group=search
   * Redis graph: https://redis.io/commands/?group=graph
   * Redis time series: https://redis.io/commands/?group=timeseries
   * Redis json: https://redis.io/commands/?group=json
   * Perhaps more I might be missing right now.

2. Then, we would probably agree that the command syntax is equally important. To check it properly I would have to write a generator that parses the description of each command and generates a checker, see https://github.com/redis/redis/tree/unstable/src/commands. I have not thought yet about how that would work, and if I get that correctly I would still have to watch for versioning (what changes from release to release), what new commands were implemented and which were deprecated.

That might be enough for Redis, but then I would be keeping other Redis-like databases off, for example

* https://github.com/Snapchat/KeyDB
* https://github.com/scylladb/scylladb/blob/master/docs/dev/redis.md
* https://github.com/dragonflydb/dragonfly
* Perhaps more. I don't know how compatible they are with the Redis API.

I can't afford working on this right now or in the foreseeable future; I see it almost as a second library or a module of my library, e.g. aedis::redis::search::request or something.

<snip>
    // Even better, we can write this directly:
    void user_code_v2()
    {
        // No need to allocate a request object for smallish requests; you only
        // need to allocate the underlying buffer that the bytes are serialized
        // into.  If you insist, you can always overload with an initial allocator
        // param.
        co_await conn->async_exec_request(HELLO(3), HGETALL("hset-key"), QUIT);
    }
Aedis will coalesce requests before writing to the socket and reuse the buffer. A small-buffer optimization is something that might make sense for the request; however, it is simpler to clear() the request and add new commands, that is, to reuse the already allocated buffer.

<snip>
These are the kinds of easy-to-use-correctly and hard-to-use-incorrectly interfaces that I would expect of such a library. I think the marshalling/unmarshalling and handling of ASIO parts of the existing design make my life easier, but with raw string literals in the interface, I consider the library interface work only half-done.
Aedis solves the problem of efficient async-io for the RESP3 protocol and offers a way to serialize user data-structures into an array of blob-strings. In this sense it is more about RESP3 than about Redis, perhaps I am advertising it wrongly. <snip>
For Redis aficionados, there should also be a step-by-step guide to how to do real work with the lib. The examples seem pretty good.
Thanks. I will think more about your other points and eventually commit more tomorrow. Many thanks for taking some time to review Aedis.

Marcelo

Additionally, stating your knowledge of redis and asio would be helpful for me.
I've used ASIO a lot, and never used Redis.
I vote to REJECT.
If the library could be refactored so that the user-facing interface was more friendly, I would change my vote.
Zach
My opening email didn't state this clearly, but you can also vote to "conditionally accept". That usually means you have to give the author an actionable condition that he has to fulfill to get your yes vote during THIS review.

The way you currently voted means that you might vote differently in a future review. A conditional accept is useful when you want changes, but think another full review is not needed to address your issues.

I am mentioning that because your reasoning sounds like a conditional accept to me, so I just want to be sure you are aware of that option, since I didn't explicitly state this.

On 18/01/2023 08:34, Klemens Morgenstern via Boost wrote:
Additionally, stating your knowledge of redis and asio would be helpful for me.
I've used ASIO a lot, and never used Redis.
I vote to REJECT.
If the library could be refactored so that the user-facing interface was more friendly, I would change my vote.
Zach
My opening email didn't state this clearly, but you can also vote to "conditionally accept". That usually means you have to give the author an actionable condition that he has to fulfill to get your yes vote during THIS review.
The way you currently voted means that you might vote differently in a future review. A conditional accept is useful when you want changes, but think another full review is not needed to address your issues.
I am mentioning that, because your reasoning sounds like a conditional accept to me, so I just want to be sure you are aware of that option, since I didn't explicitly state this.
Zach has been on here longer than I have, and has written as many library reviews as I have, so I'm pretty sure if he says reject he means exactly that.

Explaining some more: some libraries can be fixed up, even if the fixup list is quite long. Some are fundamentally broken and need rearchitecture, in the reviewer's opinion. The latter would be a reject.

I have no opinion on this specific library, but I have had a majority of libraries I proposed here rejected.

Niall

Zach has been on here longer than I have, and has written as many library reviews as I have, so I'm pretty sure if he says reject he means exactly that.
Boost is a big community and I don't know who has been active for how long, so I hope I didn't step on anyone's toes. I just wanted to make sure my announcement didn't cause any mishaps.

No toes were stepped on, but Niall is right -- I did mean reject rather than conditionally accept. To me conditional acceptance implies that the library is "almost there", but that it needs tweaks. I think the changes I'd like to see in Aedis are more extensive than that, and would indicate a re-review.

Zach

On Wed, Jan 18, 2023 at 10:28 AM Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
Zach has been on here longer than I have, and has written as many library reviews as I have, so I'm pretty sure if he says reject he means exactly that.
Boost is a big community and I don't know who has been active for how long, so I hope I didn't step on anyone's toes. I just wanted to make sure my announcement didn't cause any mishaps.

Zach Laine wrote:
- Does the API match with current best practices?
No. In particular, the API requires the user to write lots of Redis strings by hand. A typo will not be detected until runtime. The set of Redis keywords is fixed; there's no reason not to make an enumeration of them all, like this:
    enum class cmd {
        ACL_CAT,     // List the ACL categories or the commands inside a category
        ACL_DELUSER, // Remove the specified ACL users and the associated rules
        ACL_DRYRUN,  // Returns whether the user can execute the given command
                     // without executing the command.
        ACL_GENPASS, // Generate a pseudorandom secure password to use for ACL users
        ACL_GETUSER, // Get the rules for a specific ACL user
        ACL_LIST,    // List the current ACL rules in ACL config file format
        ACL_LOAD,    // Reload the ACLs from the configured ACL file
        ACL_LOG,     // List latest events denied because of ACLs in place
        // ... etc.
    };
Defining a few hundred enumerators doesn't strike me as a particularly fruitful exercise, even if they were static and guaranteed to cover every command (they are neither.) Plus, it doesn't even gain that much. You catch spelling errors in the command names, but nothing else.

...
    void user_code_1()
    {
        resp3::request req;
        req.push(HELLO(3));
        req.push(HGETALL("hset-key"));
        req.push(QUIT);
    }
That's better, but there's no need to make HELLO et al variables, you could just have

    struct redis::cmd_hello
    {
        int protover;
    };

and then

    req.push( redis::cmd_hello{ 3 } );

but even then, HELLO is actually

    HELLO [protover [AUTH username password] [SETNAME clientname]]

so the struct can't be as simple, and the majority of the Redis commands don't map well to C++.

Plus, note that the above says resp3::request, that is, this is a generic RESP3 request; disallowing the low-level stringly typed interface will make this not work for other RESP3 servers, and there's _probably_ room in Boost for a generic RESP3 client.

A specialized Redis client based on type-safe C++ structs for the commands will no doubt be useful, but I'm not sure maintaining one by hand is the right way to go. Taking https://github.com/redis/redis/tree/4ffcec29af9dc1246540d94834540528576c68a4... and automatically generating the C++ glue from that may be more manageable.
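Just to illustrate the point, covering the optional parts by hand would already look something like this (a sketch; the member names and layout are invented):

    #include <optional>
    #include <string>

    namespace redis
    {
        struct hello_auth
        {
            std::string username;
            std::string password;
        };

        struct cmd_hello
        {
            std::optional<int> protover;
            std::optional<hello_auth> auth;
            std::optional<std::string> setname;
        };
    }

    // req.push( redis::cmd_hello{ .protover = 3 } );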

On Sun, Jan 15, 2023 at 7:17 AM Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
Aedis is a Redis client library built on top of Boost.Asio that implements the latest version of the Redis communication protocol RESP3.
This is my review for Boost.Aedis. I am an Asio expert and a Redis ignoramus. In fact I had no idea what Redis was until I heard about Aedis.
Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost.
I believe we should ACCEPT this library for the following reasons:

* The API is lean, efficient, and works
* The library uses Asio correctly
* Redis seems popular and useful
* Aedis is a prerequisite for building higher-level abstractions

I have studied the interface and implementation of the library and also asked the author questions on the C++ Language Slack. There is always a solid rationale for design choices, and often they harmonize with the way Redis works (at least, I think they do... I'm not a Redis expert but I am learning). I have confidence that Marcelo will be able to maintain and evolve this library to the high standards that users expect from Boost.

There is value in Boost providing a comprehensive set of libraries for implementing cloud services. We already have Asio, Beast, JSON, URL, MySQL, and ancillary libraries such as Describe and the world's fastest Unordered maps. With Aedis and the possible upcoming Mustache, and some new networking libraries I am writing, there is a compelling case that the "monolithic Boost" can be the premiere collection for building network applications and services in C++.

I spent several hours evaluating the library, looking at the interface and the implementation, and studying the tests and examples. I set breakpoints in Visual Studio and stepped right into Marcelo's innermost thoughts. I opened the kimono, so to speak.

On a side note, now that I have learned how Redis structures its responses and the complexities involved: Marcelo has done a fine job of finding an efficient and simple data structure (the resp3::node) and accompanying I/O algorithms to represent it, but in a way that gives callers flexibility in how they want to consume the results. Plus, it is implicitly allocator-capable and permits optimizations without intrusive API declarations (like Allocator template parameters).

Installing
----------

Using vcpkg on Windows I installed both 32-bit and 64-bit OpenSSL. Then I built the Visual Studio 2019 solution for Aedis using this line:

    cmake -G "Visual Studio 16 2019" -A x64 -B bin64 -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake

Visual Studio opened the solution with no problems. It has a bunch of C++17 and C++20 examples along with tests and several other projects. Right away I was able to build with zero errors, and run the tests and examples. These programs do expect to see a local Redis server on port 6379, which I didn't have set up, but I was provided a private remote server and the examples worked fine with it.

New Boost libraries still require Jamfiles, so those will need to be added.

Answers to Questions
--------------------
- Does this library bring real benefit to C++ developers for real world use-case?
Well...I guess? A Google search for +"Redis" produces 54 million results. A Google search for +"Redis" +"C++" produces 7 million results. If these numbers mean anything, it indicates there is demand. If we assume that Redis is useful then it follows this library would be useful, as it provides an idiomatic asynchronous client for interacting with Redis servers.

.....three days later....

Anyway, now that I have used the examples and studied Redis more I can say that Redis is pretty damn cool. It is simple yet powerful in a way that the stuffy and confining structure of SQL isn't. The combination of Beast, Aedis, and a cloud Redis instance is enough to implement a limitless number of different types of web applications.
- Do you have an application for this library?
Me personally, nothing specific. But I can see how it could be used to build those web applications.
- Does the API match with current best practices?
The library adopts Asio's "universal model for asynchronous operations", which is a very sensible choice given that the library is based on Asio.

I would consider Aedis to be a LOW-LEVEL library. That is, it represents a solid foundation upon which higher level abstractions can be built. I also believe that there is "nothing between Aedis and Asio", which is precisely what you want for a low-level library. It hits the right balance - the connection object does useful things for you such as multiplexing and reconnecting, while the request and response interfaces avoid getting in your way or making too many assumptions.

It was pointed out in another review that the API should use enums or something more type-safe. I agree that this could be an improvement but I think this sort of thing is what a higher-level abstraction would provide. Aedis can be the foundation for that higher-level abstraction. Such an abstraction might use Boost.Describe and/or Boost.JSON for example.
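To give a rough idea of what I mean (a sketch only; the helper below is hypothetical and not part of Aedis), Describe could flatten a plain struct into the field/value arguments of an HSET:

    #include <boost/describe.hpp>
    #include <boost/mp11.hpp>
    #include <string>
    #include <vector>

    struct user
    {
        std::string name;
        std::string email;
    };
    BOOST_DESCRIBE_STRUCT(user, (), (name, email))

    // Hypothetical helper: produces { key, "name", <name>, "email", <email> },
    // i.e. the arguments of "HSET key field value ...", from any described
    // struct with string-like members.  The result could then be handed to
    // the request, e.g. through one of its push_range overloads.
    template<class T>
    std::vector<std::string> hset_args(std::string key, T const& t)
    {
        std::vector<std::string> args{std::move(key)};
        boost::mp11::mp_for_each<
            boost::describe::describe_members<T, boost::describe::mod_public>>(
            [&](auto D)
            {
                args.emplace_back(D.name);
                args.emplace_back(t.*D.pointer);
            });
        return args;
    }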
- Is the documentation helpful and clear?
FUCK NO; it's like my ex-wife; beautiful but useless. This is the weakest part of the library. Fortunately the API is simple enough that if you stare at or copy and paste the examples you can get a decent start.

Sometimes the docs are outright perplexing; for example, in the Responses section it says "below is a request with a compile time size", but what? No, that can't be right, the compiler has no idea that the request has 3 elements.

Anyway, the docs need a lot of work but I wouldn't hold up the library for that.
- Did you try to use it? What problems or surprises did you encounter?
Yep, I played around with the examples. The adapt() model is kind of hard to wrap my head around: if I use a tuple then the number of types is fixed at compile time, but when I run a Redis command the number of results can vary, so how does this work?

What's this max_read_size business? It says maximum size of the read buffer, but there is no real explanation of what this means.
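For reference, this is the kind of fixed-shape pipeline I was trying to square with variable-size results (adapted from the examples; the completion-token details are my assumption):

    asio::awaitable<void> query(std::shared_ptr<connection> conn)
    {
        resp3::request req;
        req.push("HELLO", 3);
        req.push("GET", "key1");
        req.push("LRANGE", "key2", 0, -1);
        req.push("QUIT");

        // One tuple element per command in the request.
        std::tuple<
            aedis::ignore,                           // hello
            std::optional<std::string>,              // get
            std::optional<std::vector<std::string>>, // lrange
            aedis::ignore                            // quit
        > resp;

        co_await conn->async_exec(req, adapt(resp), asio::use_awaitable);
    }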
- What is your evaluation of the implementation?
The implementation is basically perfect. It uses all of the Asio best practices. The author invented great solutions to the problems which arise specifically because of how the Redis interface works. In particular, the way that async_exec and async_run are designed allows for fully optimized interaction with the Redis server, as well as maximum flexibility in terms of how the caller will use it with Asio.

Suggestions
-----------

* Add example code snippets to all of the Javadocs.

* Add a real "Reference" which links every public symbol in the library to a page explaining what it is. For example here's the Reference for Boost.JSON: <https://www.boost.org/doc/libs/1_81_0/libs/json/doc/html/json/quickref.html>

* Add an interface target that has all the header files so that they can be browsed in Visual Studio's Solution Explorer.

* resp3::node needs a variadic constructor or something to be able to pass arguments to the contained String member's constructor.

* resp3::node could use some protection (static_assert maybe) to prevent stupidity like using const or reference types for String.

* "String" as a concept needs a concise definition in the documentation. Example: <https://www.boost.org/doc/libs/master/libs/url/doc/html/url/concepts/charset.html>

* "resp3::type" as a type name is not technically *wrong* but could be confusing as hell since it is usually the return value of metafunctions. I use "kind" in my libraries. But this is a personal choice.

* resp3::to_string returns a char pointer instead of a string_view? (and isn't marked noexcept)

* resp3::is_aggregate is not constexpr. I could see a higher-level wrapper around aedis wanting to determine at compile time if a type constant is an aggregate in order to provide a more type-safe API.

* ResponseAdapter needs a concept specification.

* detail::ignore_response needs to be public since it is being used as a default template parameter in a public function.

* resp3::read and resp3::write are synchronous, and look like they don't use the connection object. Consider removing them; only offer features that have demonstrable need. Add them back only if they are requested.

* What's the point of separating the read and write functions into different headers? It's not like you can only do one or the other... Callers will always need both sets of function declarations.

* If you keep write() then either Request needs a concept definition or it should not be a template parameter.

* Consider using tag_invoke instead of the current to_bulk and from_bulk customization points.

* to_bulk has a Request template parameter, which needs a concept definition.

* Implementation functions like add_blob and add_header could call reserve() on the string to prevent

* resp3::request might return string_view from payload(), which is a nice form of type-erasure. Currently it returns `std::basic_string<char, std::char_traits<char>, std::pmr::polymorphic_allocator<char>>`, which is quite a mouthful.

* The Range template parameter in push_range needs a concept definition or you need to use an existing concept from the C++ Standard. Is it a ForwardRange? Is it a range of some type? You got some explainin' to do!

* The docs for push_range which accepts two iterators do not list the constraints on the iterator's value type. From the implementation it looks like it needs something resembling a string, but you need to explain that. Perhaps anything convertible to string_view? Here's an example: <https://github.com/boostorg/url/blob/47ae94a6455ece36c39db965978264cefa302bd3/include/boost/url/pct_string_view.hpp#L162>

* In function templates that use concepts I would use either enable_if or static_assert if the type does not meet the requirements. I usually prefer static_assert because it gives the user a nice error message and you can put a comment there. Example: <https://github.com/boostorg/url/blob/33de44de1b4a22a77a73090adfa783217ed52103/include/boost/url/impl/params_encoded_ref.hpp#L39>

* "Key" is a template parameter in some places and there's no concept definition or Constraints clause... I sense a pattern here.

* Remove the interfaces which allow for allocator customization (via memory_resource / pmr). There is no motivating use-case and no benchmarks. If users request it you can add it back later without breaking anything. Spoiler Alert: the library is already written in a way that allows optimal allocation patterns, and I am skeptical that using a custom allocator can improve it. But everyone is welcome to try :). You better bring receipts though.

If this library is accepted into Boost then it should receive work on the documentation and a tune-up along the lines of all my Suggestions above, before it should be integrated into the release. Note however that my ACCEPT is without conditions.

--
Regards, Vinnie
Follow me on GitHub: https://github.com/vinniefalco

On Wed, Jan 18, 2023 at 7:47 AM Vinnie Falco <vinnie.falco@gmail.com> wrote:
* Implementation functions like add_blob and add_header could call reserve() on the string to prevent
What I meant to say is:

* Implementation functions like add_blob and add_header could call reserve() on the string first to prevent the need for more than one reallocation.

Thanks

On Sun, Jan 15, 2023 at 7:17 AM Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
The formal review of the aedis starts today (15th) and ends next week on Tuesday (24th).
I have questions too.

1. From what I gathered from docs and source code, connection implements a queue where requests are placed before sending. async_exec puts/pops from the queue, and async_run does the actual I/O. The benefits being you can call async_exec without the usual restriction of "single outstanding operation" that Beast and MySQL impose. Is that right?

2. If there are write errors in a request, the function that gets the error is async_run, and not the individual async_exec calls that queued the request. Right?

3. If a server push is received but no async_receive is outstanding, according to docs, "the connection will hang". Does that mean that any responses to other requests I wrote with async_exec will not complete until I call async_receive (so I can expect a deadlock there)? Also, I can see there is a heuristic rule to determine what is a server push response and what is not; how does this interact with my comment above?

4. From what I've gathered, the protocol can be made full duplex (you may write further requests while reading responses to previously written requests), but the queue acts as half-duplex (it won't write a batch until the response to the previous batch has been completely read). This can be circumvented using the low-level API. Am I getting the picture right? Or are there further protocol limitations I'm not aware of?

5. Merging async_run into async_exec and async_receive is possible but we lack the implementation tools required (namely Klemens' asem async synchronization primitives). You mention that there would be a performance loss, why?

6. From the implementation, I've seen every request allocates a new asio timer, I'd say for communication between the async_exec and async_receive tasks. Is allocating those I/O objects cheap?

7. There is no sync counterpart to async_receive, async_exec and async_run, which is unusual in Asio-based libraries - is that intentional?

8. Docs state that async_run "will trigger the write of all requests i.e. calls to async_exec that happened prior to this call". I assume it will also write requests that are queued while async_run is in progress, right?

9. What is the recommended way of using async_run, async_exec and async_receive together with multi-threading? Would using a basic_stream_socket<ip::tcp, strand> as AsyncReadWriteStream be enough?

10. From what I've seen, all tests are integration tests, in that they test with an actual Redis instance.

Regards, Ruben.

On Thu, Jan 19, 2023 at 9:32 AM Ruben Perez via Boost <boost@lists.boost.org> wrote:
7. There is no sync counterpart to async_receive, async_exec and async_run, which is unusual in Asio-based libraries - is that intentional?
Speaking for myself, I am done with synchronous Asio. It offers no control over timeouts, no cancellation, and no possibility to implement full duplex algorithms which are common outside of HTTP/1. Synchronous sockets just don't provide enough utility to justify the cost (extra implementation, documentation, examples, tests, and maintenance). Thanks

On Thu, Jan 19, 2023 at 9:50 AM Ruben Perez <rubenperez038@gmail.com> wrote:
Speaking for myself, I am done with synchronous Asio.
Would be great to reach an agreement as a community here - I'm paying those costs.
You don't have to do the same thing as everyone else, there's no rule. If synchronous socket operations make sense for your library, then you should keep them. If you do not believe they provide sufficient value for their cost, you should not. Thanks

On Thu, 19 Jan 2023 at 18:32, Ruben Perez via Boost <boost@lists.boost.org> wrote:
I have questions too
1. From what I gathered from docs and source code, connection implements a queue where requests are placed before sending. async_exec puts/pops from the queue,
Correct.
and async_run does the actual I/O.
No. The IO (read) is done by each individual async_exec call, so it can parse RESP3 directly into its final storage without copies (i.e. the data structure passed by the user to the adapt function). When the first async_exec is done reading its part of the response it will pass IO control to the next async_exec in the queue, and so on until the whole pipeline has been processed; only then is control returned to async_run. That means async_run coordinates IO between async_execs and async_receive.
The benefits being you can call async_exec without the usual restriction of "single outstanding operation" that Beast and MySQL impose. Is that right?
Correct. This model can make better use of command pipelines and keep the number of connections to the database low (one is enough).
2. If there are write errors in a request, the function that gets the error is async_run, and not the individual async_exec calls that queued the request. Right?
Correct.
3. If a server push is received but no async_receive is outstanding, according to docs, "the connection will hang". Does that mean that any responses to other requests I wrote with async_exec will not complete until I call async_receive (so I can expect a deadlock there)?
Correct. Data sent from the server is not consumed on behalf of the user, he must do it.
Also, I can see there is a heuristic rule to determine what is a server push response and what is not, how does this interact with my comment above?
A push has a fixed RESP3 type (the RESP3 message starts with a > character). The heuristic you are referring to handles corner cases. Say I send a command that expects no response, but I make an error in its syntax: the server will communicate an error as a response. In this case I have no alternative other than passing it to async_receive, as it shouldn't cause the connection to be closed; the connection can still be used afterwards (a questionable thing but possible).
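In other words (a toy illustration, not library code):

    #include <string_view>

    // A RESP3 frame announces its type in the first byte; out-of-band
    // server pushes start with '>'.
    bool is_server_push(std::string_view frame)
    {
        return !frame.empty() && frame.front() == '>';
    }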
4. From what I've gathered, the protocol can be made full duplex (you may write further requests while reading responses to previously written requests),
That is not how I understand it: I send a request and wait for the response. That is why pipelining is so important: https://redis.io/docs/manual/pipelining/. Otherwise why would it need pipelining?
but the queue acts as half-duplex (it won't write a batch until the response to the previous batch has been completely read).
Even full-duplex would require a queue, as requests must wait for any ongoing write to complete, i.e. we must prevent concurrent writes. We also need to keep the order in which requests were sent so we can demultiplex the responses when they arrive.
This can be circumvented using the low-level API.
The low-level API is there as a reference. I can't name any reason for using it instead of the connection; there are small exceptions, which I will fix with time.
Am I getting the picture right? Or are there further protocol limitations I'm not aware of?
AFAIK, resp3 is request/response with pushes, which means you might be sending a request while the server is sending a push.
5. Merging async_run into async_exec and async_receive is possible but we lack the implementation tools required (namely Klemens' asem async synchronization primitives).
No. async_exec can't be merged with anything. There is perhaps a way to merge async_run and async_receive. But it looks complex to me, so I have to investigate more. There are advantages and disadvantages. The main point is to guarantee there are no concurrent writes.
You mention that there would be a performance loss, why?
Putting it in simple terms, I have observed situations where a push arrives in the middle of a response, which means the async_exec must be suspended and IO control passed to async_receive and then back to the suspended async_exec. This ping-pong between async operations is complex to implement without async_run. I think there is a way though; we can talk more about that later.
6. From the implementation, I've seen every request allocates a new asio timer, I'd say for communication between the async_exec and async_receive tasks. Is allocating those I/O objects cheap?
The timer is used to suspend async_exec while we wait for its responses. It is allocated using the completion handler's associated allocator, so memory allocation can be controlled by the user.
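That is, the allocator is obtained through the usual Asio mechanism, roughly like this (a sketch, not the actual implementation):

    #include <boost/asio/associated_allocator.hpp>
    #include <memory>

    template <class Handler>
    void start_exec_op(Handler& handler)
    {
        auto alloc = boost::asio::get_associated_allocator(handler);

        // Rebind to the per-operation state type (an int stands in here).
        using rebound = typename std::allocator_traits<
            decltype(alloc)>::template rebind_alloc<int>;

        rebound a{alloc};
        int* state = std::allocator_traits<rebound>::allocate(a, 1);
        // ... construct the state, start the timer wait, etc. ...
        std::allocator_traits<rebound>::deallocate(a, state, 1);
    }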
7. There is no sync counterpart to async_receive, async_exec and async_run, which is unusual in Asio-based libraries - is that intentional?
Yes. I don't think it is possible or meaningful to provide a sync implementation. You can, however, see the sync example for an alternative.
8. Docs state that async_run "will trigger the write of all requests i.e. calls to async_exec that happened prior to this call". I assume it will also write requests that are queued while async_run is in progress, right?
Correct.
9. What is the recommended way of using async_run, async_exec and async_receive together with multi-threading? Would using a basic_stream_socket<ip::tcp, strand> as AsyncReadWriteStream be enough?
The idiomatic thing to do is to pass a strand as the connection executor.
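For example (a sketch; it assumes the connection accepts an executor in its constructor like other Asio I/O objects):

    asio::io_context ioc{1};

    // All completion handlers of the connection then run through the strand,
    // serializing access even if ioc.run() is called from multiple threads.
    auto conn = std::make_shared<connection>(asio::make_strand(ioc));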
10. From what I've seen, all tests are integration tests, in that they test with an actual Redis instance.
The low_level.cpp and request.cpp tests are not integration tests. The others use a locally installed Redis instance, with the exception of the TLS test, which uses db.occase.de:6380.

Very good questions btw.

Regards, Marcelo

No. The IO (read) is done by each individual async_exec call, so it can parse RESP3 directly into its final storage without copies (i.e. the data structure passed by the user to the adapt function).
Out of curiosity, does having the parsing done in async_exec (vs. in async_run) have to do with making no copies?
2. If there are write errors in a request, the function that gets the error is async_run, and not the individual async_exec calls that queued the request. Right?
Correct.
What will happen with the request posted with async_exec? Will it error with some special code, or will it just not complete until a reconnection happens?
A push has a fixed RESP3 type (the RESP3 message starts with a > character). The heuristic you are referring to handles corner cases. Say I send a command that expects no response, but I make an error in its syntax: the server will communicate an error as a response. In this case I have no alternative other than passing it to async_receive, as it shouldn't cause the connection to be closed; the connection can still be used afterwards (a questionable thing but possible).
If I'm getting this correctly, if you make a mistake and async_exec contains an invalid request, that error response is treated as a push. Some additional questions:

a. Will async_exec complete successfully, or with an error?
b. Can you provide an example of the kind of mistake that would cause this to happen?
c. What will happen if async_receive has not been called because no server pushes are expected? Will it deadlock the connection?
4. From what I've gathered, the protocol can be made full duplex (you may write further requests while reading responses to previously written requests),
That is not how I understand it: I send a request and wait for the response. That is why pipelining is so important: https://redis.io/docs/manual/pipelining/. Otherwise why would it need pipelining?
What I mean: request batch A is written. While you are waiting for the response, request batch B is enqueued using async_exec. I've understood that batch B will wait until A's response has been fully read before it's written. Is there a reason to make B wait, instead of writing it as soon as A's write is complete?
The low-level API is there as a reference. I can't name any reason for using it instead of the connection, there are small exceptions which I will fix with time.
BTW, is the low-level API intended to be used by final users? As in, will it be stable and maintained?
Putting it in simple terms, I have observed situations where a push is in the middle of a response, that means the async_exec must be suspended and IO control passed to async_receive and then back to the suspended async_exec. This ping pong between async operations is complex to implement without async_run. I think there is a way though, we can talk more about that later.
Out of curiosity again, can you not deserialize the push from async_exec, then notify async_receive (e.g. via a condition variable or similar)?
10. From what I've seen, all tests are integration tests, in that they test with an actual Redis instance.
The low_level.cpp and request.cpp are not integration tests. The others use a locally installed redis instance with the exception of the TLS test that uses db.occase.de:6380.
How do you handle parallel test executions? As in a CI running multiple jobs in parallel. As an alternative, you may want to use a Docker container for your CIs (which I've found to be much more practical). Let me know if you go down this path, as I've got experience. Regards, Ruben.

Hey everyone, This is my review of the proposed Aedis library. Please take all opinions with a grain of salt.
What is your evaluation of the design?
It's by and large idiomatic Asio down to a T. If one knows Asio well, this library will integrate perfectly with all their existing code and idioms.

Though I find myself perplexed by the documentation suggesting the necessity for having reconnections while then delegating that responsibility to me, the user. Ideally, this would be abstracted from me, with an opt-in way of handling reconnection manually, but as long as users are comfortable copy-pasting from the docs and example code, this is a non-issue.

The design emphasizes parsing directly into structures, which I enjoy. This is also the only library I've seen that puts PMR front-and-center in its API, which I do like. C++17 added PMR for a reason; it's good to see authors taking advantage of this, and I think it reflects the innovative spirit of Boost that I've grown to appreciate and admire.

The design seems to preclude allowing users to handle I/O on their terms which I dislike. The lowest level interface is `resp3::read()` which is dependent upon Asio concepts.

Otherwise, the design is simple and effective.
What is your evaluation of the implementation?
The implementation is actually quite clean and much like its presented interfaces, is idiomatic Asio down to a T. I'm not overwhelmingly familiar with the implementation details as I haven't spent a lot of time with it but after looking at the connection_base class, I feel like I'd be able to debug this library given my level of Asio expertise. Debuggability is paramount when evaluating quality of implementation. The code is laid out quite nicely and uses names effectively. I have no trouble reading it.
What is your evaluation of the documentation?
Stylistically it sets a new standard for how Boost libraries should look and feel. The examples are clear and concise and cover a wide-range of potential use-cases. It's not clear from the documentation if standalone Asio is supported. I think that would be a big user concern.
What is your evaluation of the potential usefulness of the library?
It's useful in the sense that if you need a working Redis client, this fits the bill.

While tightly integrating with Asio is nice, this library is almost a tad too late. Having deployed Asio applications and servers, if I had needed a Redis client I would've already been using one of the existing clients as noted on the official Redis website. This library would then require me to gradually migrate our existing codebase towards Aedis, which requires buy-in from one's team and also from the business. Depending on things like team size and how the company internally manages dependencies and deployments, this is either a non-issue or can create a lot of friction. I've worked on a small team before and migrated us from nlohmann JSON to Boost.JSON, much to the chagrin of my coworkers who didn't view the shift as justified.

For greenfield projects looking to use Asio, this library is the most obvious choice.
Did you try to use the library? With what compiler? Did you have any problems?
I did not try to use the library aside from pulling it down and building and running its examples and tests. I used modern compilers (gcc-12) and ran into no problems.
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I put in a moderate amount of effort. Historically I've never been too motivated to add Redis to applications I've developed. I did, however, study the resp3 protocol to get a better feel for how the library could be designed around it.
Are you knowledgeable about the problem domain?
I'm not very knowledgeable about in-memory key-value stores in general. I'm very familiar with networking and using "advanced" APIs for working with myriad protocols.
Do you think the library should be accepted as a Boost library?
I'm going to vote REJECT.

In my view, if there's room in Boost for a Redis library, it would be one that exposes a low-level protocol library similar to Beast or OpenSSL with its linked BIO pairs. This would enable the library to be I/O runtime-agnostic and would still permit the library author to provide an integration with Asio like it has today.

Second, the value of this library being included in Boost isn't clear to me. No libraries currently in Boost would benefit from Aedis' inclusion. Aedis benefits somewhat from being in Boost as it can then evolve in lockstep with Asio, but unless Asio makes massive changes to its core Stream abstractions, it's safe to say that this library will be stable outside of Boost.

While reddit is not necessarily a kind place that reflects the state of C++ developers universally, the reddit thread: https://www.reddit.com/r/cpp/comments/10cnc99/candidate_boostaedis_review_st... did not seem to generate significant user interest in the library being accepted. Interestingly, most of the reddit thread is dedicated to discussing monolithic Boost. While largely non-productive (as most reddit conversations are), we do see that many users are not excited about Yet Another Boost Library.

If there's strong user engagement somewhere else, I'll change my REJECT to an ACCEPT, but until then, I don't believe Boost itself benefits from Aedis and I don't believe users benefit from Aedis being in Boost outside of those wishing to use Boost as a package manager. Instead, I'd opt into writing a vcpkg portfile and a Conan recipe for this library and distribute it that way. Users will still undoubtedly benefit from this library.

Code Review Notes:

* fix TODOs in the test suite before requesting formal Boost review
* any parser facing the network needs fuzzing
* the section comparing Aedis to redis++ should be first and foremost in the docs, as it's the first question people looking for a Redis client are going to ask

- Christian

On Fri, Jan 20, 2023 at 1:24 PM Christian Mazakas via Boost <boost@lists.boost.org> wrote:
It's not clear from the documentation if standalone Asio is supported. I think that would be a big user concern.
If Aedis becomes a Boost library, which Boost users will be concerned that Aedis only supports Boost Asio? Note that Beast only supports Boost Asio. Thanks

On Fri, 20 Jan 2023 at 22:24, Christian Mazakas via Boost <boost@lists.boost.org> wrote:
Hey everyone,
This is my review of the proposed Aedis library. Please take all opinions with a grain of salt.
What is your evaluation of the design?
It's by and large idiomatic Asio down to a T. If one knows Asio well, this library will integrate perfectly with all their existing code and idioms.
Though I find myself perplexed by the documentation stressing the need for reconnection while delegating that responsibility to me, the user.
To implement automatic reconnection the library would have to make many decisions on behalf of the user

* How long should I wait before trying to reconnect?
* Should backoff be implemented?
* Reconnect to the same instance or a replica?
* Resolve names over a Redis sentinel?
* Where should I log reconnect errors?
* When should I stop trying to reconnect?

<snip>
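As a rough sketch (not part of Aedis), such a user-side loop might look like the following, assuming `net` is an alias for boost::asio, `connection` is the Aedis connection type, and async_run accepts a completion token like other Asio-style operations; connect_to_redis is a hypothetical helper standing in for the resolve-and-connect step:

// Rough sketch only: fixed delay, same host, bounded number of attempts.
// Whether a connection object can be reused across async_run calls is an
// assumption here.
// Needs <boost/asio/as_tuple.hpp>, <boost/asio/steady_timer.hpp>, <chrono>.
auto run_with_reconnect(connection& conn) -> net::awaitable<void>
{
   net::steady_timer timer{co_await net::this_coro::executor};

   for (int attempt = 0; attempt != 5; ++attempt) {
      co_await connect_to_redis(conn); // hypothetical: resolve + connect

      // async_run returns when the connection is lost; capture the error
      // instead of throwing so the loop can decide what to do with it.
      auto [ec] = co_await conn.async_run(net::as_tuple(net::use_awaitable));
      (void)ec; // log it, or stop retrying on certain errors

      timer.expires_after(std::chrono::seconds{1});
      co_await timer.async_wait(net::use_awaitable);
   }
}

Every one of the bullet points above is a policy decision baked into this small loop, which is why it is hard for the library to pick a default that suits everybody.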
The design seems to preclude allowing users to handle I/O on their terms which I dislike.
The lowest level interface is `resp3::read()` which is dependent upon Asio concepts.
Would you like to have the parser public in the API? What does "handle I/O on their terms" mean exactly? <snip>
What is your evaluation of the documentation?
Stylistically it sets a new standard for how Boost libraries should look and feel.
The examples are clear and concise and cover a wide-range of potential use-cases.
Great, thanks.
It's not clear from the documentation if standalone Asio is supported. I think that would be a big user concern.
It is not. How is that a concern for a library that is part of Boost? <snip>
Do you think the library should be accepted as a Boost library?
I'm going to vote REJECT.
In my view, if there's room in Boost for a Redis library, it would be one that exposes a low-level protocol library similar to Beast or OpenSSL with its linked BIO pairs. This would enable the library to be I/O runtime-agnostic and would still permit the library author to provide an integration with Asio like it has today.
This library was built explicitly for Asio; there are no plans to make it usable with anything else. This means this expectation can't possibly be fulfilled.
Second, the value of this library being included in Boost isn't clear to me.
No libraries currently in Boost would benefit from Aedis' inclusion.
That is true for many other libraries. But I respectfully disagree: I would find it very valuable to have a complete web stack in Boost, alongside URL, JSON, Beast and MySQL and possibly Mustache. <snip>
it's safe to say that this library will be stable outside of Boost.
So what? I don't want to dispute your decision, but this is not a reason to reject libraries AFAICS.
While reddit is not necessarily a kind place that reflects the state of C++ developers universally, the reddit thread: https://www.reddit.com/r/cpp/comments/10cnc99/candidate_boostaedis_review_st...
did not seem to generate significant user interest in the library being accepted. Interestingly, most of the reddit thread is dedicated to discussing monolithic Boost. While largely non-productive (as most reddit conversations are),
IMO, using a Reddit thread to evaluate interest in Redis clients is very misleading; in fact I don't think it can contribute anything to the review. Additionally, although C++ has a large number of Redis client projects on GitHub, it comes far behind other languages in terms of popularity. Below I accumulated the number of GitHub stars of the four most popular clients for each language

1. Java: 36k
2. Node.js: 27k
3. Go: 27k
4. PHP: 17k
5. Python: 13k
6. C#: 7k
7. C: 6k
8. Ruby: 4k
9. Rust: 3k
10. C++: 2k

This means the Redis community in C++ is itself very small comparatively, let alone inside Boost. This is not to make a case for my library, but to show the big potential this functionality has in regard to users.
we do see that many users are not excited about Yet Another Boost Library.
Well, then why are we reviewing another library? <snip>
Code Review Notes:

* fix TODOs in the test suite before requesting formal Boost review
* any parser facing the network needs fuzzing
* the section comparing Aedis to redis++ should be first and foremost in the docs as it's the first question people looking for a redis client are going to ask
Will be done. Thanks for taking the time to review the library. Regards, Marcelo

On Fri, 20 Jan 2023 at 22:24, Christian Mazakas via Boost <boost@lists.boost.org> wrote:
Do you think the library should be accepted as a Boost library?
I'm going to vote REJECT.
In my view, if there's room in Boost for a Redis library, it would be one that exposes a low-level protocol library similar to Beast or OpenSSL with its linked BIO pairs.
I'm missing here a reference to Boost.MySQL [1] by Ruben Perez. Otherwise, I don't fully understand that.

[1] https://lists.boost.org/Archives/boost//2022/06/253193.php

Best regards,
-- Mateusz Loskot, http://mateusz.loskot.net

Christian Mazakas wrote: ...
While reddit is not necessarily a kind place that reflects the state of C++ developers universally, the reddit thread: https://www.reddit.com/r/cpp/comments/10cnc99/candidate_boostaedis_review_starts_today_jan_15/
did not seem to generate significant user interest in the library being accepted. Interestingly, most of the reddit thread is dedicated to discussing monolithic Boost. While largely non-productive (as most reddit conversations are), we do see that many users are not excited about Yet Another Boost Library.
If there's strong user engagement somewhere else, I'll change my REJECT to an ACCEPT but until then, I don't believe Boost itself benefits from Aedis and I don't believe users benefit from Aedis being in Boost outside of those wishing to use Boost as a package manager.
"We should reject the library so that it can be developed outside of Boost, because this will make people who hate Boost happier" is not a valid argument in the context of a Boost formal review. The review is meant to determine whether the library meets the standards for inclusion into Boost. It's not our job to decide whether the library would have been better off outside of Boost; this is the author's call, and by submitting, he has already demonstrated what he thinks.

Hello everyone, This is my review of the Aedis library. This is my first attempt at a Boost review, so I hope that it is helpful. Also, I'm attempting to send this from Gmail, so I apologize if the formatting doesn't come out right.
Additionally, stating your knowledge of redis and asio would be helpful for me.
I am using Redis in a project, so I am familiar with it, though nothing too advanced. I have looked at the Asio documentation before, but this is my first time actually trying it out (I've been meaning to).
Does this library bring real benefit to C++ developers for real world use-case?
Yes, absolutely, I think this is a great addition to the Asio ecosystem.
Do you have an application for this library?
Yes, I'd like to replace the hiredis code in the aforementioned project with something C++ with asynchronous support, but none of the existing options seemed great, and I want to try to move to using Asio in the future, so this library seems like a perfect fit.
Does the API match with current best practices?
I'm not that familiar with Asio best practices. I don't have a problem with the raw strings for this API; a more strongly-typed API could be added on top later.
Is the documentation helpful and clear?
Overall I thought it was a bit sparse. I think the other reviews have made some great suggestions, in particular Ruben Perez's comments on what could be added, such as a discussion of design decisions.

I was initially put off by the lack of C++17 examples (since I'm not using it yet for work), but thinking about it with a long-term view I think the C++20 style is a good thing. However, I thought the use of the "common" files in the examples was less than helpful; I think it would be easier to follow if the examples were each complete (maybe some "common" code could be added to the library?).

Is it important to send "HELLO"? I don't see that explained. In the Redis documentation for "QUIT", it says "Clients should not use this command", so maybe you should explain why it is used.

It was not immediately obvious why a `tuple` is used for the response, so maybe some explanation of that is warranted the first time it is shown, or at least a reference to the "Responses" section (e.g. "response type will be explained later in the 'Responses' section"). I'd also like to see an explanation of why the `adapt` function call is required. Why can't I just pass the response type?

Most of the examples show multiple commands, and it's great that it can handle that, but it seems like a common case would be to execute a single command and get a response. It seems like there should be some shortcut included in the library, for example:

std::optional<std::string> reply;
co_await conn.async_exec(resp3::request("GET", "FOO"), adapt(reply));
Did you try to use it? What problems or surprises did you encounter?
Yes, I played with the examples. I mostly tried using it in the C++17 style. I had errors compiling the cpp17_intro.cpp example on GCC 10 (gcc-toolset-10 on RHEL8), so I had to switch to GCC 12 ("no matching function call to from_chars..."). I initially tried to use the library via CMake, but Boost::asio was not a valid target for me.

The first time I tried to modify one of the examples to do something different, I got a SIGSEGV with no obvious indication of what went wrong. It was an object lifetime issue - obvious in retrospect that the request and reply objects need to live past the async_exec call, but I guess the lambdas make it easy to forget how things are actually flowing. More an Asio/C++ issue than Aedis.

This may also be due to my unfamiliarity with Asio, but when I tried the "use_future" completion token, get() never returned:

resp3::request req;
req.push("QUIT");
auto f = conn.async_exec(req, adapt(), net::use_future);
std::cout << "QUIT: " << f.get() << std::endl;
What is your evaluation of the implementation?
I didn't look very deeply into the implementation.
Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost. Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.
I think the explanation from Ruben's review seems reasonable, so I will also say REJECT in the current state, but I very much would like this library to be included in Boost in the future. It sounds like the error handling may need some more consideration, and I'd like to see the documentation expanded. I would also like to see the name changed to Boost.Redis if accepted, to make it easier to identify and find. I spent around 5-6 hours over a few days evaluating the library. Thanks very much to Marcelo for working on this library! - Sam Hartsfield

On Tue, 24 Jan 2023 at 18:12, Sam Hartsfield via Boost <boost@lists.boost.org> wrote:
Is the documentation helpful and clear?
I was initially put off by the lack of C++17 examples (since I'm not using it yet for work),
There are three C++17 examples in the docs
- https://mzimbres.github.io/aedis/cpp17__intro_8cpp_source.html
- https://mzimbres.github.io/aedis/cpp17__intro__sync_8cpp_source.html
Is it important to send "HELLO"? I don't see that explained.
Yes, it is necessary to upgrade to the protocol version used by the library. That is why all examples use it.
In the Redis documentation for "QUIT", it says "Clients should not use this command", so maybe you should explain why it is used.
I use QUIT for clarity. If you start having TIME_WAIT problems server-side, just don't use it. Notice however that with Aedis you will often need only one connection in your entire application, which reduces the load on the server.
It was not immediately obvious why a `tuple` is used for the response, so maybe some explanation of that is warranted the first time it is shown,
The documentation is focused on how to use Aedis and not much on explaining design decisions. The responses to commands must be stored somewhere; a tuple provides a simple way to group them so they can be indexed.
or at least a reference to the "Responses" section (e.g. "response type will be explained later in the 'Responses' section"). I'd also like to see an explanation of why the `adapt` function call is required. Why can't I just pass the response type?
Adapt is there to make customization for user data structures easy to integrate. It also removes some of the complexity from the response object; for example, it provides a way to specify how much you allow the implementation to read from the socket for each response and overall. I don't document that because it is something 99% of users won't ever need.
Most of the examples show multiple commands, and it's great that it can handle that, but it seems like a common case would be to execute a single command and get a response. It seems like there should be some shortcut included in the library, for example:
std::optional<std::string> reply;
co_await conn.async_exec(resp3::request("GET", "FOO"), adapt(reply));
The request object must outlive the asynchronous operation, so this is not going to work.
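For comparison, a version of that one-command snippet that respects this lifetime requirement keeps the request (and the response storage) as named locals, using the same API as in the snippet above:

// The request and the response storage are named locals, so both outlive
// the co_await on async_exec.
resp3::request req;
req.push("GET", "FOO");

std::optional<std::string> reply;
co_await conn.async_exec(req, adapt(reply));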
Did you try to use it? What problems or surprises did you encounter? <snip>
This may also be due to my unfamiliarity with Asio, but when I tried the "use_future" completion token, get() never returned:
resp3::request req; req.push("QUIT"); auto f = conn.async_exec(req, adapt(), net::use_future); std::cout << "QUIT: " << f.get() << std::endl;
There are many reasons why this can go wrong. I would have started with the synchronous example https://mzimbres.github.io/aedis/cpp17__intro__sync_8cpp_source.html Thanks for investing time in this, Marcelo

On Fri, 20 Jan 2023 at 16:47, Ruben Perez <rubenperez038@gmail.com> wrote:
No. The IO (read) is done by each individual async_exec call, so it can parse resp3 directly into its final storage without copies (i.e. the data structure passed by the user to the adapt function).
Out of curiosity, does having the parsing done in async_exec (vs. in async_run) have to do with making no copies?
Yes, that is the main reason. It reduces latency.
2. If there are write errors in a request, the function that gets the error is async_run, and not the individual requests queued by async_exec. Right?
Correct.
What will happen with the request posted with async_exec? Will it error with some special code, or will it just not complete until a reconnection happens?
The behavior is configurable. The user can decide whether the request should remain suspended until a reconnection occurs. The default behaviour is to complete async_exec with an error. For read-only commands (or commands like SET), for example, you don't care about them being executed twice; for others it might be wrong to risk double execution. See https://mzimbres.github.io/aedis/classaedis_1_1resp3_1_1request.html#structa... for more information.
A push has a fixed RESP3 type (the resp3 message starts with a > character). The heuristic you are referring to handles corner cases. Say I send a command that expects no response, but I make an error in its syntax: the server will communicate an error as a response. In this case I have no alternative other than passing it to async_receive, as it shouldn't cause the connection to be closed; the connection can still be used afterwards (a questionable thing, but possible).
If I'm getting this correctly, if you make a mistake and async_exec contains an invalid request, that error response is treated as a push.
No. Spelling or syntax errors in commands will be received as responses and async_exec will complete with an error. There is only one corner case where it is received as a push: the SUBSCRIBE command, because it does not have a response. When Redis sends a response to it I am not expecting any, so I must treat it like a push although it has no resp3 push type.
Some additional questions: a. Will async_exec complete successfully, or with an error?
With an error.
b. Can you provide an example of the kind of mistake that would cause this to happen?
request req; req.push("SUBSCRIBE"); // oops, forgot to set a channel. This will cause the server to send a response to a command that has no response.
c. What will happen if async_receive has not been called because no server pushes are expected? Will it deadlock the connection?
The connection can't consume anything on behalf of the user, so it will wait forever.
4. From what I've gathered, the protocol can be made full duplex (you may write further requests while reading responses to previously written requests),
That is not how I understand it: I send a request and wait for the response. That is why pipelining is so important: https://redis.io/docs/manual/pipelining/. Otherwise why would it need pipelining?
What I mean: request batch A is written. While you are waiting for the response, request batch B is enqueued using async_exec. I've understood that batch B will wait until A's response has been fully read before it's written.
Yes, that's how the implementation works.
Is there a reason to make B wait, instead of writing it as soon as A's write is complete?
My understanding is that I have to wait. RESP3 is not like e.g. the Kafka protocol where you can continuously write. I will check this further though.
The low-level API is there as a reference. I can't name any reason for using it instead of the connection; there are small exceptions, which I will fix with time.
BTW, is the low-level API intended to be used by final users? As in, will it be stable and maintained?
It has been stable for over a year now, but it is not intended for end users unless perhaps in corner cases. By using it instead of the connection the user loses automatic pipelining and event demultiplexing, which is hard to implement.
Putting it in simple terms, I have observed situations where a push arrives in the middle of a response; that means async_exec must be suspended and IO control passed to async_receive and then back to the suspended async_exec. This ping-pong between async operations is complex to implement without async_run. I think there is a way though; we can talk more about that later.
Out of curiosity again, can you not deserialize the push from async_exec, then notify (e.g. via a condition variable or similar) to async_receive?
No. At the time a push comes, there might be no call to async_exec, e.g. in a subscribe-only app. See the subscriber example.
10. From what I've seen, all tests are integration tests, in that they test with an actual Redis instance.
The low_level.cpp and request.cpp are not integration tests. The others use a locally installed redis instance with the exception of the TLS test that uses db.occase.de:6380.
How do you handle parallel test executions? As in a CI running multiple jobs in parallel.
I don't run parallel tests in the CI. Improving this would be nice though.
As an alternative, you may want to use a Docker container for your CIs (which I've found to be much more practical). Let me know if you go down this path, as I've got experience.
Ok, thanks. I have used Boost.MySQL as a reference for GitHub Actions and some other things. Regards, Marcelo

Out of curiosity, does having the parsing done in async_exec (vs. in async_run) have to do with making no copies?
Yes, that is the main reason. It reduces latency.
I think I haven't explained myself - I've understood that doing the deserialization in async_run would require copying the data, while doing it in async_exec doesn't. Why?
[...] There is only one corner case where it is received as a push, the SUBSCRIBE command, because it does not have a response. When Redis sends the response I am not expecting any so I must treat it like a push although it has no resp3 push type.
From the discussion, I've got the impression that an application can send regular commands (e.g. SET, GET, and so on) while receiving server pushes (via SUBSCRIBE and async_receive). However, the official docs (https://redis.io/docs/manual/pubsub/) seem to disallow it ("A client subscribed to one or more channels should not issue commands") - it seems to make some exceptions to these, like ping, reset, unsubscribe and further subscribes. Is this true?
b. Can you provide an example of the kind of mistake that would cause this to happen?
request req; req.push("SUBSCRIBE"); // oops, forgot to set a channel.
From the docs again, it seems like you always get a first message stating that you subscribed successfully to that channel, and the number of channels you're currently subscribed to. Would it be possible to parse that as a kind of "subscribe response"? It's currently being handled as a regular push.
Regards, Ruben.

On Sat, 21 Jan 2023 at 15:01, Ruben Perez <rubenperez038@gmail.com> wrote:
Out of curiosity, does having the parsing done in async_exec (vs. in async_run) have to do with making no copies?
Yes, that is the main reason. It reduces latency.
I think I haven't explained myself - I've understood that doing the deserialization in async_run would require copying the data, while doing it in async_exec doesn't. Why?
Because that would require some form of type erasure. Each call to async_exec can have a different adapter

co_await conn->async_exec(req, adapt(type1));
co_await conn->async_exec(req, adapt(type2));

// Elsewhere in the code
co_await conn->async_exec(req, adapt(type3));
// ...

To be able to read in async_run I would need those adapters, but that requires type-erasing them to store them in the request queue. Even if I manage to do this cheaply, what would I gain? I don't think much.
[...] There is only one corner case where it is received as a push, the SUBSCRIBE command, because it does not have a response. When Redis sends the response I am not expecting any so I must treat it like a push although it has no resp3 push type.
From the discussion, I've got the impression that an application can send regular commands (e.g. SET, GET, and so on) while receiving server pushes (via SUBSCRIBE and async_receive). However, the official docs (https://redis.io/docs/manual/pubsub/) seem to disallow it ("A client subscribed to one or more channels should not issue commands") - it seems to make some exceptions to these, like ping, reset, unsubscribe and further subscribes. Is this true?
The docs are probably referring to the old protocol version, RESP2. The new version, RESP3, has a push type, as you can see here https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md where you can find

Push types are not related to replies, since they are information that the server may push at any time in the connection, so the client should keep reading if it is reading the reply of a command. Push data may be interleaved with any protocol data, but always at the top level, so the client will never find push data in the middle of a Map reply for instance.

Note that in this mode it is possible to get both replies and push messages at the same time, interleaved in any way. This is how it becomes possible to implement client-side caching using only one connection, see https://redis.io/docs/manual/client-side-caching/
b. Can you provide an example of the kind of mistake that would cause this to happen?
request req; req.push("SUBSCRIBE"); // oops, forgot to set a channel.
From the docs again, it seems like you always get a first message stating that you subscribed successfully to that channel, and the number of channels you're currently subscribed to. Would it be possible to parse that as a kind of "subscribe response"? It's currently being handled as a regular push.
The reply to the SUBSCRIBE command is a push type on the wire, so I can't distinguish it from other kinds of pushes and therefore I have to treat them the same. This is what a server push looks like on the wire, btw

">4\r\n+pubsub\r\n+message\r\n+some-channel\r\n+some message\r\n"

Regards, Marcelo

Final set of questions, I hope.

1. If you write something like

aedis::request req;
std::tuple<std::optional<std::string>> res;
req.push("LPOP", "something_not_a_list"); // will cause an error
co_await conn.async_exec(req, adapt(res));

You seem to get an error code reported. The server does send an error response with diagnostics, but I haven't found a way to access that message. Is there any way?

2. I've modified cpp20_subscriber to add another task that sends commands with async_exec while reading subscription replies. If any of the commands issued with async_exec contain any errors (like the LPOP above), the program crashes with a "conn->cmds_ != 0" assertion.

3. I've seen other Redis (async) libraries implementing connection pools. Unless I'm mistaken, it seems likely that some of the user base will eventually require this feature. I see some overlap between the queuing system you have in place in connection and the responsibilities of the connection pool (I'd usually implement this kind of queueing at the connection pool level). What's your view of this?

a. Is connection pooling planned in the future for Aedis? Or is it considered the responsibility of a higher-level component?
b. In the latter case, would that component have to make use of the low-level API?

Regards, Ruben.

On Sat, 21 Jan 2023 at 16:57, Ruben Perez <rubenperez038@gmail.com> wrote:
Final set of questions, I hope.
1. If you write something like
aedis::request req;
std::tuple<std::optional<std::string>> res;
req.push("LPOP", "something_not_a_list"); // will cause an error
co_await conn.async_exec(req, adapt(res));
You seem to get an error code reported. The server does send an error response with diagnostics, but I haven't found a way to access that message. Is there any way?
One way is to change the response type to `std::vector<resp3::node<String>>` so it can store both success and error replies (without discarding the error message), namely

* On error vec.front().value will contain the diagnostic error message sent by the server, and vec.front().data_type will be resp3::type::simple_error.
* On success vec.front().value will contain the expected value and vec.front().data_type will be resp3::type::blob_string.

This is a bit cumbersome but works. There are plans to simplify this; follow this issue for more: https://github.com/mzimbres/aedis/issues/40.

I know Boost.MySQL invented error_info to deal with this problem. I did not adopt it because

* Deviates the completion signature from well established practice of having error_code as first parameter f(error_code ec, ...) or
* Requires additional parameters to the completion e.g. f(error_code ec, error_info ei, ...)
* Raises question about who allocates the string.
* The error information is also only useful for logging and the user can't react to it so why not log it right away.
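As a rough sketch of that node-based approach (same LPOP request as above; adapt() and the node members as described, error handling via exceptions omitted):

aedis::request req;
req.push("LPOP", "something_not_a_list"); // wrong type, the server replies with an error

// A flat vector of nodes can hold either outcome without discarding the
// server's diagnostic message.
std::vector<aedis::resp3::node<std::string>> resp;
co_await conn.async_exec(req, adapt(resp));

if (!resp.empty() && resp.front().data_type == aedis::resp3::type::simple_error)
   std::cout << "server diagnostic: " << resp.front().value << "\n";
else if (!resp.empty())
   std::cout << "value: " << resp.front().value << "\n";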
2. I've modified cpp20_subscriber to add another task that sends commands with async_exec while reading subscription replies. If any of the commands issued with async_exec contain any errors (like the LPOP above), the program crashes with a "conn->cmds_ != 0" assertion.
That is probably a bug in the demultiplexing of events (the most complex thing in Aedis). You shouldn't be able to cause any crash. Tracking it here: https://github.com/mzimbres/aedis/issues/50 and thanks for reporting.
3. I've seen other Redis (async) libraries implementing connection pools. Unless I'm mistaken, it seems likely that some of the user base will eventually require this feature. I see some overlap between the queuing system you have in place in connection and the responsibilities of the connection pool (I'd usually implement this kind of queueing at the connection pool level). What's your view of this?
All libraries I am aware of don't provide event demultiplexing and reconnectable connections, therefore the connection cannot be shared among e.g. individual HTTP or websocket sessions in a server. To circumvent this problem connection pools must be implemented. They add an overhead though

* Increased number of connections on the Redis server.
* Loss of the ability to implement efficient command pipelines, which will increase the number of syscalls: instead of writing all requests at once (one syscall), each connection will write its own request (n syscalls). The same problem also applies to reading.
* Increased usage of resources: 1 vs n connection objects.
* Increased overall complexity: again 1 vs n connection objects.

So I would say Aedis doesn't need connection pools.
a. Is connection pooling planned in the future for Aedis? Or is it considered the responsibility of a higher-level component?
Ditto. Aedis doesn't need them for the reasons stated above.
b. In the latter case, would that component have to make use of the low-level API?
If somebody wants to implement a connection pool it could use the connection object directly I would say. Regards, Marcelo

I know Boost.MySQL invented error_info to deal with this problem. I did not adopt it because

* Deviates the completion signature from well established practice of having error_code as first parameter f(error_code ec, ...) or

This is not true. error_info (now server_diagnostics) is never part of the completion handler signature, but populated by lvalue reference.

* Requires additional parameters to the completion e.g. f(error_code ec, error_info ei, ...)

Again, this is not true.

* Raises question about who allocates the string.

The string is allocated as any other data passed by lvalue reference.

I'm not saying error_info (aka server_diagnostics) is the solution, but none of these arguments are true.

* The error information is also only useful for logging and the user can't react to it so why not log it right away.

I'm not getting this. How can I log something if I can't access it?

On Sat, 21 Jan 2023 at 18:28, Ruben Perez <rubenperez038@gmail.com> wrote:
I know Boost.MySQL invented error_info to deal with this problem. I did not adopt it because

* Deviates the completion signature from well established practice of having error_code as first parameter f(error_code ec, ...) or

This is not true. error_info (now server_diagnostics) is never part of the completion handler signature, but populated by lvalue reference.

* Requires additional parameters to the completion e.g. f(error_code ec, error_info ei, ...)

Again, this is not true.

* Raises question about who allocates the string.

The string is allocated as any other data passed by lvalue reference.

I'm not saying error_info (aka server_diagnostics) is the solution, but none of these arguments are true.

Indeed, it is passed to the initiating function https://anarthal.github.io/mysql/mysql/ref/boost__mysql__connection/async_qu...

template<class CompletionToken>
auto async_query(
    boost::string_view query_string,
    error_info& output_info,
    CompletionToken&& token);

Which is similar to what I am proposing in the GitHub issue I pointed you at. Sorry for the confusion; I have discussed extensively with others why not to pass it in the completion handler, and thought I had seen this first in Boost.MySQL when I wrote a review.
* The error information is also only useful for logging and the user can't react to it so why not log it right away.

I'm not getting this. How can I log something if I can't access it?
A high-level library might decide not to pass this information to the user but log it right away. Users will react to error_code and not to error_info which AFAICS is only used for logging. Regards, Marcelo

making the error text available is not only logging. it can be passed on to a user in interactive sessions giving the user (human) ability to act on it right away. I wouldn't call that "only useful for logging" even though I guess it is logging in some sense.

On Sat, 21 Jan 2023 at 19:04, Jakob Lövhall <lovhall@protonmail.com> wrote:
making the error text available is not only logging. it can be passed on to a user in interactive sessions giving the user (human) ability to act on it right away. I wouldn't call that "only useful for logging" even though I guess it is logging in some sense.
Good point. I hadn't thought about interactive sessions. Regards, Marcelo

On Sun, 15 Jan 2023 at 16:17, Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
The formal review of the aedis starts today (15th) and ends next week on Tuesday (24th).
Hi all, This is my review of the proposed Boost.Aedis. First of all, thank you very much Marcelo for submitting the library, and for your quick responses to my (not few) questions. Thanks also Klemens for acting as review manager.
Does this library bring real benefit to C++ developers for real world use-case?
I strongly believe so. It integrates into the new Asio-based ecosystem we are creating, together with Beast and MySQL. Redis is popular. I buy the vision.
Do you have an application for this library?
I don't. I'm currently just maintaining Boost.MySQL, and there's little room for it in the task. But I've had it in the past - a company in the IoT world that was using Redis as a message-passing system. I would have liked having Aedis back then.
Does the API match with current best practices?
In terms of Asio primitives, yes, it does. It complies with the universal async model, which is what I'd expect. I've also liked how you implemented rebind_executor for SSL streams to make default completion tokens easier - which is something I've recently done in MySQL myself, too.

This library introduces something unusual to Asio-based components, which is the command queue. I see advantages in having it (as it provides out-of-the-box efficiency), but also some downsides (it is opinionated, not allowing for strategies like connection pooling). I don't think this is necessarily a bad thing - field experience will give us insights. I do think this makes good docs even more essential.

I like how adapt() works. Parsing into compile-time structures is always good. However, as the Redis novice I am, I've had a hard time debugging things when I got a mismatch between the type passed to adapt() and what the command expects. The error happens at parse time, which I think is pretty dangerous, as it's very easy for things to get overlooked. As an example of this, the library's own example cpp20_containers, in the hgetall function, uses the type std::map<std::string, std::string> for the hgetall response, which will be okay unless the key is not set, in which case you will get an error. I think this has an enormous potential for causing hidden bugs (and even vulnerabilities). I've also noticed that if you mismatch the sizes of the pipeline request and response, you get an assertion (so UB). I initially went in with a lot of (false) confidence, as adapt() changed my mind to "this is type safe, you're not writing C, the compiler will know if you mess it up". It's true that if you think about it twice, the compiler can't know, but this false confidence is something that may happen to more people. So I think this should somehow be addressed.

Exposing just request::push() seems terse. I think some methods for the most common commands would help a lot. Something like request::push_set(string_view key, string_view value). Or maybe request::push(set_cmd(string_view key, string_view value)). Maybe doing that could help address the problem above. I still think a general request::push() should be exposed for the less common commands.

The way of handling network errors is very elegant, and it shows the author's effort on it. The reconnect loop is clean. Congratulations.

I think error handling at the command level is flawed, though. If a command fails, the entire pipeline operation fails, and no easy access to the diagnostics is provided. I've also seen that Redis will execute all commands in a pipeline even if an error is encountered in one of them. So, in pipeline [C1, C2, C3], if C2 errors, C3 is still executed. The API should provide a way to access C2's error and C3's result. The proposed solution in https://github.com/mzimbres/aedis/issues/40 seems insufficient to me. I'd give Boost.Leaf (or a similar solution) a thought.

I don't know if making the connection hang when a push is received and there is no async_receive active is safe. I don't know enough of the edge cases to know, but it causes me to raise an eyebrow as a user, at least.
Is the documentation helpful and clear?
It is far away from the level I'd ask for a Boost library. Please remember that this is one of the criticisms Boost gets at all times, so we should pay special attention to it. What I think can be improved:

* It needs a proper discussion section, as other libraries have, explaining goals, design decisions and usage patterns. I'd structure this as:
  * A proper tutorial. You may assume familiarity with Redis to the point of "I know what HGETALL does" (although I would start with a SET/GET). But HELLO and QUIT are not obvious. I'd explain that the 3 is the protocol version, what happens if you don't send a HELLO, whether QUIT is handled specially or just like any other command, etc. Do not place an opaque function "connect" that is not part of the public API in a tutorial. The code should run "as-is". Provide also error handling code.
  * A design discussion, where you present your design choices with examples and snippets. I like how Boost.URL does it. You've implemented a pretty opinionated queueing system, so I'd draw a diagram of how that works, with the different tasks and how they relate to each other. Also state what benefits it brings. It would be great if a user could figure out most of the questions I've asked you throughout the review by reading that document. I would include some of the code you currently have in examples as snippets, like the ones in containers, transactions, subscriptions, and so on. A section on error handling would be nice, too. Some docs on request::config flags and when to use each would be great, too.
  * An example section, where you provide the full listing of all examples. I'm currently using quickbook and its import facility to make this happen, and I'm pretty happy.
  * A proper reference.
Did you try to use it? What problems or surprises did you encounter?
* Doesn't build with clang15 and libc++ under Ubuntu 22.04, as <memory_resource> appears to not be stable yet (as of libc++ 15).
* Builds and runs fine with gcc11 and libstdc++ under Ubuntu 22.04.

I was a little bit let down by finding what seems to be a bug in the queueing system by just playing with the subscription example. I see that all tests covering the connection class (and its underlying queue system) are integration tests. This leads to the impression that edge and error conditions haven't been tested as I'd have expected.
What is your evaluation of the implementation?
I think it is good. As others have already looked into it, I just read enough to figure out the pieces I was missing to understand the system under review, and won't comment on it here.
Additionally, stating your knowledge of redis and asio would be helpful for me.
I have written a similar client library using Asio (Boost.MySQL). I have a shallow knowledge of Redis. I've read the documentation and played with it these days to get a refresher, but I'm far from an expert. I have invested around 2 days in playing with the library and writing this review.
Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost. Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.
I vote to REJECT the library in its current state. I think we need to make sure proper tests, documentation and error handling strategies are in place before proceeding. I think the library has gone in the right direction overall since I last looked at it. I would encourage Marcelo to keep working on it and resubmit it soon. I don't think a full review will be required then; a mini-review should be enough.

Some minor things:

* resp3::node< String >: "String: A std::string-like type" - type requirements like this should be properly documented - what does that mean? Concept checking here would be great.
* Would be great to provide concept checking for adapt(). "The types T1, T2, etc can be any STL container, any integer type and std::string." - can it not be aedis::ignore, too?
* request::push and request::push_range: the reference contains no info on type requirements.
* I would consider renaming the library to Boost.Redis, as it would make its purpose clearer.
* async_exec reference: "Multiple concurrent calls to this function will be automatically queued by the implementation." This can lead the user to think the connection class is thread-safe, which it is not.
* As a user I wouldn't know if I can really use the low-level API or not.
* #include <boost/asio.hpp> in examples makes the compiler cry ;)
* The CMakeLists aren't compatible with the Boost superproject.
* B2 support should be added at some point.
* I think this library shows that Klemens' Asem should be part of Boost.

Thanks again Marcelo for taking your time and effort.

Regards, Ruben.

On Sat, Jan 21, 2023 at 11:01 AM Ruben Perez via Boost <boost@lists.boost.org> wrote:
* Doesn't build with clang15 and libc++ under Ubuntu 22.04, as <memory_resource> appears to not be stable yet (as of libc++ 15).
It would be easiest for Aedis to remove the offering of pmr. The library already manages buffers intelligently without allocator customization. Support for pmr could always be added back later without breaking compatibility if someone can demonstrate a meaningful use-case. Thanks

On Sat, 21 Jan 2023 at 20:00, Ruben Perez <rubenperez038@gmail.com> wrote:
This library introduces something unusual to Asio-based components, which is the command queue. I see advantages on having it (as it provides out-of-the-box efficiency), but also some downsides (it is opinionated, not allowing for strategies like connection pooling).
Why do you think it does not allow connection pooling? The user can create as many connections as he wants. I also don't see it as opinionated, as optimal use of the resp3 protocol mandates this feature. The reason why node-js performs so well in the benchmarks, even when compared to Go, Rust, C++ and other languages, is because it gets this right (automatic pipelining).
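To make the automatic-pipelining point concrete, here is a rough sketch (not taken from the docs) of two independent coroutines sharing one connection; each issues its own async_exec and the connection queues and writes the pending requests together, per the behaviour described earlier in this thread. `net` is again assumed to be an alias for boost::asio.

auto task_a(connection& conn) -> net::awaitable<void>
{
   resp3::request req;
   req.push("INCR", "hits");

   std::tuple<long long> resp;
   co_await conn.async_exec(req, adapt(resp));
}

auto task_b(connection& conn) -> net::awaitable<void>
{
   resp3::request req;
   req.push("GET", "config");

   std::tuple<std::optional<std::string>> resp;
   co_await conn.async_exec(req, adapt(resp));
}

// Both coroutines can be co_spawn'ed against the same connection; neither
// needs to know about the other for their commands to end up in the same
// pipelined write.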
I like how adapt() works. Parsing into compile-time structures is always good. However, as a Redis novice I am, I've had a hard time debugging things when I got a mismatch between the type passed to adapt() and what the command expects. The error happens at parse time, which I think is pretty dangerous, as it's very easy to get things overlooked. As an example of this, the library's own example cpp20_containers, in the hgetall function, uses the type std::map<std::string, std::string> for the hgetall response, which will be okay unless the key is not set, in which case you will get an error. I think this has an enormous potential for causing hidden bugs (and even vulnerabilities). I've also noticed that if you mismatch the sizes of the pipeline request and response, you get an assertion (so UB).
To fix this properly I would need a tool similar to what protobuf is to gRPC: generate request and response stubs from a schema. Aedis however is not trying to do that; it provides the infrastructure with which such a tool could be created. Perhaps there is no place in Boost for such a library, but it is still necessary.
The way of handling network errors is very elegant, and it shows the author's effort on it. The reconnect loop is clean. Congratulations.
Thanks
I have invested around 2 days in playing with the library and writing this review.
Wow, great. Thanks for that.
Thanks again Marcelo for taking your time and effort.
Thank you too for the effort you invested, it is highly appreciated. I don't think I can address all your points but they will certainly help me improve Aedis. Regards, Marcelo

On Sat, 21 Jan 2023 at 20:00, Ruben Perez <rubenperez038@gmail.com> wrote:
I like how adapt() works. Parsing into compile-time structures is always good. However, as a Redis novice I am, I've had a hard time debugging things when I got a mismatch between the type passed to adapt() and what the command expects. The error happens at parse time, which I think is pretty dangerous, as it's very easy to get things overlooked.
Solving this would require knowing at compile time what data structure each command returns, but this is the kind of information that is only available in the documentation of each command (IIRC not even in a form I could use in a generator). As an alternative I could lock users into resp3 types: maps, sets, etc. But then how would one represent data structures that have no corresponding C++ types? It would also be inefficient, as it would require first copying and then converting. In other words, this should be addressed by a higher-level library.
As an example of this, the library's own example cpp20_containers, in the hgetall function, uses the type std::map<std::string, std::string> for the hgetall response, which will be okay unless the key is not set, in which case you will get an error. I think this has an enormous potential for causing hidden bugs (and even vulnerabilities). I've also noticed that if you mismatch the sizes of the pipeline request and response, you get an assertion (so UB).
To avoid this you can wrap the response type in std::optional, for example

std::optional<std::map<std::string, std::string>>

I don't lock users into this behaviour because they might not want it. Perhaps it is desirable to have an error instead of having to check whether the optional has a value. The user must know what he needs. Regards, Marcelo
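For illustration, a rough sketch of that std::optional approach applied to the HGETALL case discussed above (API usage as in the other snippets in this thread, not taken from the docs):

resp3::request req;
req.push("HGETALL", "possibly-missing-key");

// Per the discussion above: when the key is not set, the optional is left
// empty instead of the operation failing with an error.
std::optional<std::map<std::string, std::string>> field_map;
co_await conn.async_exec(req, adapt(field_map));

if (field_map)
   std::cout << "hash has " << field_map->size() << " fields\n";
else
   std::cout << "key not set\n";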

On Sat, 21 Jan 2023 at 20:00, Ruben Perez <rubenperez038@gmail.com> wrote:
I like how adapt() works. Parsing into compile-time structures is always good. However, as a Redis novice I am, I've had a hard time debugging things when I got a mismatch between the type passed to adapt() and what the command expects. The error happens at parse time, which I think is pretty dangerous, as it's very easy to get things overlooked.
I am trying to understand how you avoid these risks in Boost.MySQL, but it seems it suffers from the same problems? I mean this: https://github.com/boostorg/mysql/blob/ac7285c62163c0de40a466ecaeca13f7de5af...

conn.query("SELECT salary FROM employee WHERE first_name = 'Underpaid'", result);
double salary = result.rows().at(0).at(0).as_double();

The query is not type safe [1] and the result can mismatch the query. Why would it be considered ok for Boost.MySQL and not Boost.Aedis? Disclaimer: I am not trying to make a case or open a precedent, it is purely technical. I might be wrong but I would like to clarify anyway.

Regards, Marcelo

[1] Typesafe in this sense: https://github.com/rbock/sqlpp11

On Sat, 21 Jan 2023 at 20:00, Ruben Perez <rubenperez038@gmail.com> wrote:
I think error handling at the command level is flawed, though. If a command fails, the entire pipeline operation fails, and no easy access to the diagnostics is provided. I've also seen that Redis will execute all commands in a pipeline even if an error is encountered in one of them. So, in pipeline [C1, C2, C3], if C2 errors, C3 is still executed. The API should provide a way to access C2's error and C3's result. The proposed solution in https://github.com/mzimbres/aedis/issues/40 seems insufficient to me. I'd give Boost.Leaf (or a similar solution) a thought.
This is to make it safe by default, and in part because Aedis is centered around the adapter. In the same sense that you can write an adapter to receive responses in the data structure that makes the most sense for the application, e.g. std::map, std::unordered_map, boost::flat_map etc., you can also customize the way errors are handled. For example,

* Errors like a type mismatch are usually unrecoverable and you might want to fail fast and let the async_ operation itself fail instead of proceeding in an unknown state.
* If you can tolerate errors like that, you can write an adapter that does not communicate an error to the parser.

I did not make the latter the default because it looks unsafe: people will usually expect async_exec(req, adapt(resp)) to throw when the response can't be stored in resp. I am perhaps at fault here for not documenting this feature. I thought however that not many people would want this behaviour. Regards, Marcelo

This is my review of Boost.Aedis.
Are you knowledgeable about the problem domain? I have moderate knowledge of Asio. I have used Redis in a minor capacity in the past.
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study? I've spent a few hours reading RESP3 spec and Redis documentation, then a couple more hours reading Aedis documentation, a little bit of its implementation, and comparing its API with the Python redis library.
Does this library bring real benefit to C++ developers for real world use-case? Certainly.
Do you have an application for this library? No.
Does the API match with current best practices? The Asio part seems fairly standard.
Is the documentation helpful and clear? The documentation is very sparse, even though mostly serviceable. In particular, there needs to be a more thorough explanation of the type requirements on the arguments of aedis::adapt and aedis::resp3::node. There also needs to be a discussion of what types to/from_bulk supports out of the box.

As was pointed out by Peter Dimov, to/from_bulk is way too generic for ADL customisation points. They should be more specific, like aedis_to_bulk, or use tag_invoke (like JSON does).

Unlike some previous reviewers, I am not so concerned with the "stringly-typed" request API. The fully type-safe approach would require implementing 462 commands as separate types. Several would require accompanying types for optional parameters. The commands are sometimes extended to have more optional parameters, and can result in different response types depending on the context (e.g. because of transactions). This sounds like quite a lot of work. In addition, Redis can load third-party modules which add more commands. This means that the current API is still necessary, and a type-safe one could only be added in addition to it, not instead of it.

When thinking of how I would use Aedis, I envisioned writing several (possibly a few dozen) of my own helpers that support the commands I would need and ensure that the proper arguments are used with each command. I do not envision a situation where a significant part of my code becomes strings with commands. As such, I do not think the current API creates too much of a hurdle.
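To make that concrete, such a helper could be as small as the following sketch; the names are made up and it only forwards to the generic request::push() used throughout this thread (assuming push() accepts string-like arguments, as with the string literals shown elsewhere):

// Hypothetical user-side helpers: each pins down a command name and its
// arguments, and forwards to the generic push().
inline void push_set(resp3::request& req, std::string_view key, std::string_view value)
{
   req.push("SET", key, value);
}

inline void push_expire(resp3::request& req, std::string_view key, std::chrono::seconds ttl)
{
   req.push("EXPIRE", key, std::to_string(ttl.count()));
}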
Did you try to use it? What problems or surprises did you encounter? I have not.
What is your evaluation of the implementation? I haven't spent much time looking through the implementation.
Do you think the library should be accepted as a Boost library? I think the library should be ACCEPTED, on several conditions. First of all, it has to improve its documentation. Second, it has to switch to less generic customization point function names. And finally, it appears from reading the documentation and a cursory look through the implementation that only fundamental and standard types are supported by functions like `aedis::adapt`; I think Boost libraries should be interoperable.
If these conditions are met, I believe the library would work very well with several existing Boost libraries, and make it very easy to create network applications with the help of Boost.

I'd like to thank Marcelo for his work. Here's my review:
Does this library bring real benefit to C++ developers for real world use-case?
Definitely. C++ needs good networking facilities, and Redis is very popular and useful. It also fits Boost, as we have been incorporating a set of libraries so useful that people can't ignore it. This should also make people less resistant to Boost because of the "monolith" myth. The library would help Boost have an even better collection of network libraries, and this ecosystem is part of why Boost has many users nowadays. As more and more people move to C++17, they don't need Boost as some kind of `std2::` anymore, and all we would be left with are discussions about those Reddit complaints.
Do you have an application for this library?
No.
Does the API match with current best practices?
Although it doesn't require coroutines, their unapologetic use in the documentation is quite nice. This is not for all libraries, but it's good that users get more and more references for async code that looks clean. The pipelining interface is also nice and intuitive while still doing the hard work, parsing directly into structures is great, and the internal use of `async_compose` is as clean as it can be.
If "co_await (conn.async_run() || conn.async_exec(req, adapt(resp)));" is a really common pattern, I found it weird that we don't have a single operation for that. Other comments and examples in the docs helped clarify the issue, but I have to admit I found it very confusing that early in the documentation. If they don't always have to go together, maybe the docs could start with examples that introduce them separately, perhaps with no support for server pushes in the first examples. If they really have to go together and really can't be another composed operation, a diagram in the docs explaining the relationship between these operations would help.
If the examples in the docs are representative, `adapt` seems quite repetitive. Is `adapt` really needed? Couldn't the functions simply accept `adapt`able types?
`to_bulk` looks too general a name for ADL. Maybe use `tag_invoke`, specializations, or a longer name.
`req.push_range` accepting iterators is kind of weird when the other overload accepts a range. At the same time, I understand this is only to differentiate it from `req.push`, whose second argument can also be a range, so the common pattern of a single function name constrained on the type is not a good solution here. I don't know what's best because I'm bad with names, but it does look odd.
Shouldn't `std::pmr::string& to` be `std::pmr::string& from`? And why not `string_view from`? I got confused by the requirements here even after checking the reference, and the example with `dummy` didn't help much.
Some API to create commands that are always correct seems useful, since there is little point in arbitrary command strings, or commands without their parameters, in the most common applications; I imagine real applications are limited to a small subset of commands. An enumeration to replace the command string doesn't look that good to me, but I like Peter's idea of automatically generating the glue (https://github.com/redis/redis/tree/4ffcec29af9dc1246540d94834540528576c68a4...). Otherwise, if such wrappers are maintained by hand, they could exist only for the most common commands. Depending on the API, `push` could even become `raw_push` or something similar to discourage its use; if new commands are specified but not yet wrapped, the user can always fall back to `raw_push` (a rough sketch of what I mean follows below). I'm not sure it should be discouraged for other RESP3 clients, though, because I don't know what they are. Maybe *none* of that makes sense because there's a lot of conflict between Redis and Redis-like databases; I don't see how that could be a problem at the moment, because they would all be extensions. But then we would have a different problem: the documentation might be slightly misleading about what the library is intended to achieve, beginning with its name. What is the definition of a Redis-like database, what does this ecosystem look like, what are the commonalities, conflicts, pros, cons, etc.? We had very similar discussions when Boost.MySql was proposed, the pros and cons I hear are very similar, and it feels like some kind of déjà vu.
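Coming back to the "safe commands" point above: a hand-written version of such a layer for a couple of common commands might look roughly like the sketch below. The function names, the namespace, and the header name are mine, not the library's; only `resp3::request::push` is existing API.

```cpp
// Hypothetical convenience layer on top of the existing push() API; none of
// these free functions exist in Aedis today, and the namespace is my choice.
#include <aedis.hpp>     // header name assumed
#include <string_view>

namespace aedis::cmd {

inline void set(resp3::request& req, std::string_view key, std::string_view value)
{
   req.push("SET", key, value);
}

inline void get(resp3::request& req, std::string_view key)
{
   req.push("GET", key);
}

inline void expire(resp3::request& req, std::string_view key, long long seconds)
{
   req.push("EXPIRE", key, seconds);   // push("HELLO", 3) in the docs suggests
                                       // integer arguments are serialized directly
}

} // namespace aedis::cmd
```

Whether these wrappers are generated from the Redis command metadata, as Peter suggested, or maintained by hand for the most common commands, the effect is the same: the stringly-typed `push` becomes an escape hatch rather than the primary interface.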
I've been wondering whether the `req.push("HELLO", 3);` couldn't be implicit somehow, since it's always there and RESP3 only supports, well, RESP3. Probably not, because we don't know whether that `resp3::request` is the first request, but maybe we could think of something. In any case it looks a lot like a TLS socket, where its "hello" (big quotes here) is mandatory, except that here there's the pipelining issue and the cost of maintaining and checking an "already said hello" (big quotes again) state; a sketch of the request I have in mind follows below. I also noticed we had some discussions about pipelining in Ruben's library. Given the direction Boost seems to be going, I expect that to come up more and more often. I have no idea what the best API would look like, but maybe you two could discuss it a little; it might make sense to establish some convention for this kind of thing for future reference, or at least to come up with and explore some alternatives. If I understand Ruben's comments correctly, part of the problem here is communicating errors.
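For reference, this is the shape of request I have in mind when I say the handshake is just the first command in the pipeline. The command names are real Redis commands, but the headers, namespaces, the `ignore` placeholder for unwanted replies, and the default completion token (assumed to be `use_awaitable`, as in the docs' snippets) are assumptions on my part.

```cpp
// Sketch only, not verbatim library code.
#include <aedis.hpp>     // header name assumed
#include <string>
#include <tuple>
#include <boost/asio.hpp>

namespace asio = boost::asio;

asio::awaitable<std::string> set_and_get(aedis::connection& conn)
{
   aedis::resp3::request req;
   req.push("HELLO", 3);              // handshake, pipelined like any other command
   req.push("SET", "key", "value");
   req.push("GET", "key");

   // Only the GET reply is kept; the HELLO and SET replies are dropped.
   std::tuple<aedis::ignore, aedis::ignore, std::string> resp;

   co_await conn.async_exec(req, aedis::adapt(resp));
   co_return std::get<2>(resp);
}
```

The HELLO reply still has to be adapted away by every caller, which is part of why an implicit handshake is attractive and part of why it is awkward to add.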
Is the documentation helpful and clear?
The template is nice and doesn't hurt my eyes. It would be nice to have different pages instead of anchors, to make it easier to navigate, but that might be a matter of taste.
`aedis::resp3::request::config::coalesce` is mentioned before it's explained; either it's important at that point or it isn't. "The request can contains" should be "contain". The `to_bulk` example has an unused variable; it would be easier to have a simpler `my_struct` and show an example that really does the thing.
redis-rs might be too slow for the plot, but the number can still be mentioned in the text, or represented with a truncated bar chart, which is not uncommon (they exist in LaTeX too). I don't see the point of saying libuv is in the benchmark and then saying it was removed from the benchmark for this or that reason; just don't mention it. Same for the holes in the bar chart: just don't put them there, although redis-rs could still be there as a truncated bar. The hierarchy of the sections should have the echo-server benchmark under "Comparisons".
With time, the documentation could discuss the most relevant aspects of Redis directly, without expecting the user to always refer to the Redis documentation; this also helps explain the rationale for each decision. At some point everyone will need to read the original for a real application, but initially the docs should at least let the user get started without knowing much about Redis. For instance, the introduction kind of assumes the user knows what data structures Redis supports, and the introductory example already uses an `HGETALL` command, which is kind of "advanced" compared to simple strings. At this point the user might think that this is the most common (or only) data structure and give up trying the library. If `connection` MUST be discussed before the data structures, it would be more sensible to use only strings in this introductory example. The docs then go from "Connection" to "Server pushes" before discussing these types, and server pushes are not even included in the long Redis tutorial because they are considered advanced there. Between "Connection" and "Server pushes" we could have at least a small section with a table describing the supported data types and examples of the equivalent C++ data structures that can hold these results by default (a sketch of what such a table could contain follows at the end of this answer). The official Redis tutorial is actually a good guide for the order of topics for someone who doesn't know much about Redis. One nice thing to try would be something more goal-oriented, like other Boost docs and the Redis tutorial itself; this tends to be useful to users regardless of how much they know about Redis. The installation section could (should?) come at the beginning.
I believe many of the issues raised in other reviews could have been covered in the documentation, and that would have avoided a lot of debate.
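To illustrate the kind of table I have in mind, expressed as C++ declarations; whether every one of these adapters works out of the box by default is for the author to confirm.

```cpp
// Possible Redis type -> C++ container correspondence (assumed, not verified).
#include <map>
#include <set>
#include <string>
#include <vector>

std::string                        value;    // GET key          -> bulk string
std::vector<std::string>           items;    // LRANGE key 0 -1  -> list
std::set<std::string>              members;  // SMEMBERS key     -> set
std::map<std::string, std::string> fields;   // HGETALL key      -> hash
```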
Did you try to use it? What problems or surprises did you encounter?
Yes. It worked fine with recent versions of GCC, Clang, and MSVC. I would move everything in the `detail` namespace to files under `aedis/detail` to make the public API easier to read.
What is your evaluation of the implementation?
It's well organized and reasonably documented compared to other Boost libraries. Some type requirements could be better explained in the reference.
Stating your knowledge of redis and asio would be helpful for me.
I've used Redis a few times in the past; I understand what it does and how it's useful, but I'm no expert. I understand the Asio primitives well. I'm not an expert on coroutines.
Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost.
Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.
I recommend accepting the library. I spent 1 1/2 days evaluating it. Some decisions about the API led to discussion, but the author has always had a reasonable rationale for them, even when they are controversial. The library also uses Asio properly, which, for better or worse, is not that easy to achieve. I have already mentioned why I believe this kind of library is useful to the Boost ecosystem. I'm not making the acceptance conditional, but I would like a commitment from the author to improve the most reasonable and critical points mentioned in other reviews by creating, tracking, and exploring issues. The repository also needs the Boost boilerplate (b2, boost namespaces, etc.), but I assume that's implied unless I missed something. For now, I also see no reason not to honestly consider a "safe" API for at least a small subset of important and valid commands, or something more automatic and maintainable, as Peter suggested; it seems like that could be doable with a Python script. Thank you again for the submission.
-- Alan Freitas https://alandefreitas.github.io/alandefreitas/ https://github.com/alandefreitas
participants (13)
- Alan de Freitas
- Christian Mazakas
- Jakob Lövhall
- Klemens Morgenstern
- Marcelo Zimbres Silva
- Mateusz Loskot
- Niall Douglas
- Peter Dimov
- Ruben Perez
- Sam Hartsfield
- Vinnie Falco
- Zach Laine
- Дмитрий Архипов