On Tue, 12 Apr 2022 at 19:57, Vinícius dos Santos Oliveira wrote:
On Tue, 12 Apr 2022 at 05:11, Marcelo Zimbres Silva wrote:
On Tue, 12 Apr 2022 at 06:28, Vinícius dos Santos Oliveira wrote:
This class detracts a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:
[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.
-- https://lists.boost.org/Archives/boost/2014/03/212072.php
IMO, he is confusing *Boost.Asio design* with *High vs low level design*. Let us have a look at this example from Asio itself
https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...
That's not a library. That's an application.
That is why I pointed you at the specific line in the file, where the chat_participant class is defined and not to the application as a whole.
A NodeJS application, for instance, will have a http.createServer() and a callback that gets called at each new request. How, then, do you answer questions such as "how do I defer the acceptance of new connections during high-load scenarios?".
That's a general question to a problem I am not trying to solve. Discussing this here will only make things more confusing.
Boost.Asio OTOH never suffered from such problems.
Of course. Not even Boost.Beast, which is built on top of Asio, suffers from this problem, as it provides only a low-level HTTP library.
And then we have Deno (Node.js's successor), which gave up on the callback model: https://deno.com/blog/v1#promises-all-the-way-down
Can't comment. I know nothing about Deno. The fact that they are giving up on callbacks doesn't mean anything to me.
It has nothing to do with high-level vs low-level. It's more like "policies are built-in and you can't change them".
I disagree.
The public API of the chat_session is
class chat_session {
public:
   void start();
   void deliver(const std::string& msg);
};
One could also erroneously think that the deliver() function above is "not following the Asio style" because it is not an async function and has no completion token. But in fact, it has to be this way for a couple of reasons
That's an application, not a library. It has hidden assumptions (policies) on how the application should behave. And it's not even real-world, it's just an example.
Ditto. I pointed you at a specific line, not the application as a whole.
- At the time you call deliver(msg) there may be an ongoing write, in which case the message has to be queued and sent only after the ongoing write completes.
You can do the same with async_*() functions. There are multiple approaches. As an example: https://sourceforge.net/p/axiomq/code/ci/master/tree/include/axiomq/basic_qu...
Although I don't really want to comment on code I am not familiar with, I will, as after a short glance I spotted many problems:

1. It uses two calls to async_write, whereas in Chris' example there is only one.
2. It tries to do the same thing as the deliver() function I pointed you at, but poorly. You see, it has to call async_write again in the completion handler of the previous async_write when the queue is not empty.
3. It uses callbacks, by the way, which you were arguing against.
4. It doesn't seem to handle concurrency correctly by means of Asio primitives, e.g. strands. It uses a lock-free queue instead. I don't think any IO object should do this.

It is doing the same thing that Chris is doing in his example, but poorly.
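To make the queuing point concrete, here is a minimal, self-contained sketch of the pattern used by chat_session's deliver(). All names below are hypothetical and the real net::async_write on a socket is simulated by an explicit complete_write() step, so this is an illustration of the technique, not Chris' actual code:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Hypothetical model of the chat_session write queue (no real sockets).
class session {
public:
    // deliver() is deliberately not an async_*() operation: it only
    // appends to the queue, and starts a write if none is in flight.
    void deliver(const std::string& msg) {
        bool write_in_progress = !queue_.empty();
        queue_.push_back(msg);
        if (!write_in_progress)
            do_write();
    }

    // Simulates the completion handler of the ongoing async_write:
    // it pops the sent message and keeps draining the queue.
    void complete_write() {
        sent_.push_back(queue_.front());
        queue_.pop_front();
        if (!queue_.empty())
            do_write();
    }

    const std::vector<std::string>& sent() const { return sent_; }
    bool write_in_flight() const { return !queue_.empty(); }

private:
    // In real code this would call net::async_write(socket_, ...).
    void do_write() {}

    std::deque<std::string> queue_;
    std::vector<std::string> sent_;
};
```

The key property is that a second deliver() issued while a write is in flight does not start a second async_write; it only queues, which is exactly the invariant the lock-free-queue version above fails to express cleanly.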
- Users should be able to call deliver() from inside other coroutines. That wouldn't work well if it were an async_ function. Think, for example, of two chat sessions sending messages to one another:
coroutine() // Session1
{
   for (;;) {
      std::string msg;
      co_await net::async_read(socket1, net::dynamic_buffer(msg), ...);

      // Wrong.
      co_await session2->async_deliver(msg);
   }
}
Now if session2 becomes unresponsive, so does session1, which is undesirable. The read operation should never be interrupted by other IO operations.
Actually, it *is* desirable to block.
I am chatting with my wife and my mother on a chatting app. My wife gets on a train and her connection becomes slow and unresponsive. As a result I can't chat with my mother because my session is blocked trying to deliver a message to my wife. Are you claiming this is a good thing?
I'll again borrow somebody else's explanation:
Basically, RT signals or any kind of event queue has a major fundamental queuing theory problem: if you have events happening really quickly, the events pile up, and queuing theory tells you that as you start having queueing problems, your latency increases, which in turn tends to mean that later events are even more likely to queue up, and you end up in a nasty meltdown scenario where your queues get longer and longer.
This is why RT signals suck so badly as a generic interface - clearly we cannot keep sending RT signals forever, because we'd run out of memory just keeping the signal queue information around.
-- http://web.archive.org/web/20190811221927/http://lkml.iu.edu/hypermail/linux...
I thought this was one reason why Asio executors are a good and necessary thing: you can implement fair treatment of events. Even without executors, you can impose a limit on how much queues are allowed to grow. This quotation is, however, misplaced, as the premise that it is *desirable to block* is already wrong.
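As a sketch of the "impose a limit on queue growth" point (bounded_session and max_queue_size are made-up names for illustration, not any real Asio or Aedis API):

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical session whose write queue is bounded, so a slow peer
// produces back-pressure instead of the unbounded pile-up the quoted
// text warns about.
class bounded_session {
public:
    explicit bounded_session(std::size_t max_queue_size)
        : max_(max_queue_size) {}

    // Returns false instead of letting the queue grow without bound;
    // a real server might close the slow connection at this point.
    bool deliver(const std::string& msg) {
        if (queue_.size() >= max_)
            return false; // refuse: queue is full
        queue_.push_back(msg);
        return true;
    }

    std::size_t pending() const { return queue_.size(); }

private:
    std::size_t max_;
    std::deque<std::string> queue_;
};
```

The policy on overflow (drop, close, or notify) is the application's choice; the point is only that a non-blocking deliver() does not force unbounded queues.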
However, if you wish to use this fragile policy in your application, an async_*() function following Boost.Asio style won't stop you. You don't need to pass the same completion token to every async operation. At one call (e.g. the read() call) you might use the coroutine token, and at another point you might use the detached token: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/detached...
That's a misunderstanding of the problem I am trying to solve.
std::string msg;
co_await net::async_read(socket1, net::dynamic_buffer(msg), use_awaitable);
session2->async_deliver(msg, net::detached);
You only need to check whether async_deliver() clones the buffer (if it doesn't then you can clone it yourself before the call).
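A minimal illustration of the "clone it yourself" point, using a hypothetical stand-in for async_deliver() and a one-slot simulated event loop: the caller's buffer may go out of scope before the write completes, but the handler keeps its own copy alive via shared_ptr:

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>

// One-slot "event loop": stores the completion handler of the last
// simulated async operation (for illustration only).
std::function<void()> pending_completion;

// Hypothetical stand-in for session2->async_deliver(msg, net::detached).
// It clones the buffer into a shared_ptr so the data outlives the
// caller's stack frame.
void async_deliver(const std::string& msg) {
    auto buf = std::make_shared<std::string>(msg); // the clone
    pending_completion = [buf] {
        // buf is still valid here even if the caller's msg is gone.
        assert(*buf == "hello");
    };
}
```

With this, detaching the delivery is safe even though the coroutine continues (and may destroy msg) before the write completes.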
Right now you're forcing all your users to go through the same policy. That's the mindset of an application, not a library.
No. I don't have any policy; I don't really know what you are talking about. I am only presenting

- async_connect
- async_write
- async_read
- two timer.async_wait calls

as a single composed operation to the user, so that they don't have to compose these themselves, each one of them.
Not at all. I just had to mention queueing-theory problems, which are among the problems the aforementioned blog post touches on. There are more.
It is fine to bring up new topics, but at certain points I was not sure whether it was a criticism of a specific part of my design or just a discussion about something you consider important.

Marcelo