
Hi, if I use an asio::strand and post() multiple function objects to it, are they executed in the same order, or is the order of execution unspecified? I intend to use it as a work queue, but I need the jobs to be processed in sequence... Thx Georg

If they are posted from the same thread they will execute in order
Regards,
Vinnie
Follow me on GitHub: https://github.com/vinniefalco
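A minimal sketch of that guarantee (illustrative only; it assumes a recent Boost.Asio, handlers posted from a single thread, and a multi-threaded io_context):

#include <boost/asio.hpp>
#include <iostream>
#include <thread>
#include <vector>

namespace asio = boost::asio;

int main()
{
    asio::io_context ioc;
    auto strand = asio::make_strand(ioc);

    // All ten handlers are posted from this single thread, so the strand
    // runs them in post() order even though four threads run the io_context.
    for (int i = 0; i < 10; ++i)
        asio::post(strand, [i] { std::cout << i << '\n'; });

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&ioc] { ioc.run(); });
    for (auto& t : pool)
        t.join();
}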
On Fri, Dec 27, 2024 at 8:33 PM Georg Gast via Boost
Hi, If I use a asio::strand and post() there multiple function objects, are they executed in the same order or is the order of execution not specified?
I think to use it as a work queue but I need them to be processed in sequence...
Thx Georg

On Fri, Dec 27, 2024 at 8:33 PM Georg Gast via Boost
Hi, If I use a asio::strand and post() there multiple function objects, are they executed in the same order or is the order of execution not specified?
I think to use it as a work queue but I need them to be processed in sequence...
Depending on your definition of "processed in sequence", a strand may or may not be what you're looking for.
Strands are first-in-first-out but if your function objects are async in the Asio sense, you'll run into problems here.
Each item in the strand will run in post() order until either it completes or it calls a non-blocking Asio function, which means you can have interwoven function objects executing "out of sequence".
Strands are also for the multi-threaded io_contexts, to give safe access to I/O objects.
If you need just a plain FIFO work queue, Asio has some thread pool classes you can use. If you need to sequence a bunch of async function objects, you'll need something different.
- Christian
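To make that concrete, a rough sketch (illustrative only, using C++20 coroutines; the job bodies are made up): both jobs are handed to the strand in post() order, but each suspends on an async wait, so their steps can interleave even though every individual handler still runs serially on the strand.

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

namespace asio = boost::asio;
using namespace std::chrono_literals;

// A "job" that is itself an asynchronous composed operation: it suspends on
// an async wait, and while it is suspended the strand is free to run other
// handlers, including the other job's steps.
asio::awaitable<void> job(char name)
{
    std::cout << name << ": start\n";
    asio::steady_timer timer(co_await asio::this_coro::executor, 10ms);
    co_await timer.async_wait(asio::use_awaitable);
    std::cout << name << ": finish\n";
}

int main()
{
    asio::io_context ioc;
    auto strand = asio::make_strand(ioc);

    // Started in this order, but "A: start, B: start, A: finish, B: finish"
    // is a valid (and likely) output: the strand only serializes individual
    // handlers, not whole composed operations.
    asio::co_spawn(strand, job('A'), asio::detached);
    asio::co_spawn(strand, job('B'), asio::detached);

    ioc.run();
}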

Hi,
I would use asio for two things:
1. Classic asio operations like rs232 and tcp. They can be processed in the usual asio way.
2. I receive jobs from an external device. The device can only send one job at a time. The processing must be in the order of arrival. This would post function objects to a strand. Each device would have its own strand.
As I understand you and Vinnie that should work as expected.
Thanks
Georg
29.12.2024 16:27:53 Christian Mazakas via Boost
On Fri, Dec 27, 2024 at 8:33 PM Georg Gast via Boost
wrote: Hi, If I use a asio::strand and post() there multiple function objects, are they executed in the same order or is the order of execution not specified?
I think to use it as a work queue but I need them to be processed in sequence...
Depending on your definition of "processed in sequence", a strand may or may not be what you're looking for.
Strands are first-in-first-out but if your function objects are async in the Asio sense, you'll run into problems here.
Each item in the strand will run in post() order until either it completes or it calls a non-blocking Asio function, which means you can have interwoven function objects executing "out of sequence".
Strands are also for the multi-threaded io_contexts, to give safe access to I/O objects.
If you need just a plain FIFO work queue, Asio has some thread pool classes you can use. If you need to sequence a bunch of async function objects, you'll need something different.
- Christian

On Mon, 30 Dec 2024 at 06:37, Georg Gast via Boost
Hi, I would use asio for two things:
1. Classic asio operations like rs232 and tcp. They can be processed in the usual asio way.
2. I receive jobs from an external device. The device can only send one job at a time. The processing must be in the order of arrival. This would post function objects to a strand. Each device would have its own strand.
As I understand you and Vinnie that should work as expected.
Yes, although execution order is not part of the contract, this will work in practice, unless the job itself yields execution (i.e. it is itself an asynchronous composed operation). In that case your jobs could run interleaved.
If you want to run composed operations sequentially (with respect to each other) then you must coordinate this yourself via an “asynchronous mutex or semaphore”. Asio’s timer object can be used as a semaphore:
- set to max timeout
- use cancel_one() to release the next job.
R
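A rough sketch of that timer-as-semaphore idea (illustrative only, not Richard's actual code; it assumes a recent Boost.Asio with C++20 coroutine support, and the job queue, names, and two-job stop condition are made up for the demo):

#include <boost/asio.hpp>
#include <deque>
#include <iostream>
#include <string>

namespace asio = boost::asio;

// Consumer: processes jobs strictly in arrival order, one at a time.
// The timer never expires on its own; a producer "posts" the semaphore by
// calling cancel_one(), and the resulting operation_aborted is the
// "released" signal, not an error.
asio::awaitable<void> consume(std::deque<std::string>& jobs, asio::steady_timer& sem)
{
    for (int done = 0; done != 2; ) {                 // stop after two jobs (demo only)
        if (jobs.empty()) {
            boost::system::error_code ec;
            co_await sem.async_wait(asio::redirect_error(asio::use_awaitable, ec));
            continue;                                 // woken up: re-check the queue
        }
        std::cout << "processing " << jobs.front() << '\n';
        jobs.pop_front();
        ++done;
    }
}

int main()
{
    asio::io_context ioc;
    auto strand = asio::make_strand(ioc);

    std::deque<std::string> jobs;                     // only touched from the strand
    asio::steady_timer sem(strand);
    sem.expires_at(asio::steady_timer::time_point::max());

    asio::co_spawn(strand, consume(jobs, sem), asio::detached);

    // Producer: enqueue a job and release one waiter.
    auto submit = [&](std::string job) {
        asio::post(strand, [&jobs, &sem, job = std::move(job)]() mutable {
            jobs.push_back(std::move(job));
            sem.cancel_one();
        });
    };

    submit("job-1");
    submit("job-2");

    ioc.run();                                        // returns once both jobs are done
}

Because the queue and the timer are only touched from handlers running on the one strand, no extra locking is needed in this sketch.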
Thanks
Georg
29.12.2024 16:27:53 Christian Mazakas via Boost:
On Fri, Dec 27, 2024 at 8:33 PM Georg Gast via Boost <boost@lists.boost.org> wrote:
Hi, If I use a asio::strand and post() there multiple function objects, are they executed in the same order or is the order of execution not specified?
I think to use it as a work queue but I need them to be processed in sequence...
Depending on your definition of "processed in sequence", a strand may or may not be what you're looking for.
Strands are first-in-first-out but if your function objects are async in the Asio sense, you'll run into problems here.
Each item in the strand will run in post() order until either it completes or it calls a non-blocking Asio function, which means you can have interwoven function objects executing "out of sequence".
Strands are also for the multi-threaded io_contexts, to give safe access to I/O objects.
If you need just a plain FIFO work queue, Asio has some thread pool classes you can use. If you need to sequence a bunch of async function objects, you'll need something different.
- Christian

On Mon, Dec 30, 2024 at 2:04 AM Richard Hodges via Boost < boost@lists.boost.org> wrote:
...execution order is not part of the contract
Yes, it is:
https://www.boost.org/doc/libs/1_87_0/doc/html/boost_asio/reference/io_conte...
Thanks

On Mon, Dec 30, 2024 at 11:10 AM, Vinnie Falco via Boost
On Mon, Dec 30, 2024 at 2:04 AM Richard Hodges via Boost < boost@lists.boost.org> wrote:
...execution order is not part of the contract
Yes, it is:
https://www.boost.org/doc/libs/1_87_0/doc/html/boost_asio/reference/io_conte...
I think I've found a violation to these rules (which I depend on).
However I've been failing to produce a minimal test case to send a
proper bug report. I've exhausted my ideas for the time being, so I've
come here to ask for help/new ideas that I could attempt.
Given I've failed to produce a minimal test case, I'll have to point
you guys to code which is larger.
So here I call strand.post(a):
https://gitlab.com/emilua/emilua/-/blob/v0.11.0/src/actor.ypp#L1038
And here I call strand.post(b):
https://gitlab.com/emilua/emilua/-/blob/v0.11.0/include/emilua/core.hpp#L120...
strand.post(a) happens before strand.post(b) (I even inserted printf()
statements locally just to make sure they really do). Therefore a()
should happen before b(), but that's not what I've been observing. I
observed b() happening before a() on Windows and Linux (both epoll and
io_uring). On FreeBSD a() always happens before b(). I don't know what
ASIO does differently in FreeBSD. Sometimes on Linux I get the desired
behavior as well, but almost always I get the undesired behaviour. I
think when the cache is hot I always get the undesired behavior. So
that's the minimal test case I wrote:

On Tue, Feb 11, 2025 at 10:18 AM Vinícius dos Santos Oliveira < vini.ipsmaker@gmail.com> wrote:
I think I've found a violation to these rules (which I depend on).
Keep in mind that you can always implement your own strand which offers any guarantees you need. There's nothing magical about Asio's strand implementation. This is the benefit of Asio's executor model.
One approach is to implement your own strand and see if the problem still exists.
Thanks
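For instance, a very rough sketch of that idea (a hypothetical serial_queue class, not a conforming Asio executor and not how Asio's own strand is implemented): a FIFO dispatcher that runs at most one posted job at a time on top of whatever executor you give it.

#include <boost/asio.hpp>
#include <deque>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <utility>

namespace asio = boost::asio;

// Hypothetical FIFO dispatcher: jobs run one at a time, in post() order,
// on whatever executor it wraps. Not a drop-in strand replacement.
class serial_queue : public std::enable_shared_from_this<serial_queue>
{
public:
    explicit serial_queue(asio::any_io_executor ex) : ex_(std::move(ex)) {}

    void post(std::function<void()> f)
    {
        std::unique_lock<std::mutex> lock(m_);
        q_.push_back(std::move(f));
        if (!running_) {
            running_ = true;
            lock.unlock();
            asio::post(ex_, [self = shared_from_this()] { self->run_one(); });
        }
    }

private:
    void run_one()
    {
        std::function<void()> f;
        {
            std::lock_guard<std::mutex> lock(m_);
            f = std::move(q_.front());
            q_.pop_front();
        }
        f();                                   // the next job cannot start before this returns

        std::unique_lock<std::mutex> lock(m_);
        if (q_.empty()) {
            running_ = false;
        } else {
            lock.unlock();
            asio::post(ex_, [self = shared_from_this()] { self->run_one(); });
        }
    }

    asio::any_io_executor ex_;
    std::mutex m_;
    std::deque<std::function<void()>> q_;
    bool running_ = false;
};

int main()
{
    asio::io_context ioc;
    auto q = std::make_shared<serial_queue>(ioc.get_executor());
    q->post([] { std::cout << "first\n"; });
    q->post([] { std::cout << "second\n"; });
    ioc.run();
}

Swapping something like this in place of the strand (or just adding logging to it) makes it easier to see whether the ordering problem lives in the strand or in the code that posts to it.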

I found the mistake. ASIO is *not* at fault. You guys can disregard my
previous email. There's no bug to report in ASIO's bug tracker this
time.
On Tue, Feb 11, 2025 at 3:17 PM, Vinícius dos Santos Oliveira
On Mon, Dec 30, 2024 at 11:10 AM, Vinnie Falco via Boost
wrote: On Mon, Dec 30, 2024 at 2:04 AM Richard Hodges via Boost <boost@lists.boost.org> wrote:
...execution order is not part of the contract
Yes, it is:
https://www.boost.org/doc/libs/1_87_0/doc/html/boost_asio/reference/io_conte...
I think I've found a violation to these rules (which I depend on). However I've been failing to produce a minimal test case to send a proper bug report. I've exhausted my ideas for the time being, so I've come here to ask for help/new ideas that I could attempt.
Given I've failed to produce a minimal test case, I'll have to point you guys to code which is larger.
So here I call strand.post(a): https://gitlab.com/emilua/emilua/-/blob/v0.11.0/src/actor.ypp#L1038
And here I call strand.post(b): https://gitlab.com/emilua/emilua/-/blob/v0.11.0/include/emilua/core.hpp#L120...
strand.post(a) happens before strand.post(b) (I even inserted printf() statements locally just to make sure they really do). Therefore a() should happen before b(), but that's not what I've been observing. I observed b() happening before a() on Windows and Linux (both epoll and io_uring). On FreeBSD a() always happens before b(). I don't know what ASIO does differently in FreeBSD. Sometimes on Linux I get the desired behavior as well, but almost always I get the undesired behaviour. I think when the cache is hot I always get the undesired behavior. So that's the minimal test case I wrote:
#include <boost/asio.hpp>
#include <iostream>
#include <thread>
#include <memory>

namespace asio = boost::asio;

struct actor
{
    actor(asio::io_context& ioc, int nsenders)
        : work_guard{ioc.get_executor()}
        , s{ioc}
        , nsenders{nsenders}
    {}

    const asio::io_context::strand& strand() { return s; }

    asio::executor_work_guard<asio::io_context::executor_type> work_guard;
    asio::io_context::strand s;
    int nsenders;
};

int main()
{
    std::thread t;
    std::shared_ptr<actor> a;
    {
        auto ioc = std::make_shared<asio::io_context>();
        a = std::make_shared<actor>(*ioc, 2);
        t = std::thread{[ioc]() mutable {
            ioc->run();
            ioc.reset();
        }};
    }

    std::cout << "1\n";
    a->strand().post([a]{
        std::cout << "2\n";
        if (--a->nsenders == 0) {
            a->work_guard.reset();
        }
    }, std::allocator<void>{});

    std::cout << "a\n";
    a->strand().post([a]{
        std::cout << "b\n";
        if (--a->nsenders == 0) {
            a->work_guard.reset();
        }
    }, std::allocator<void>{});

    a.reset();
    t.join();
}
That's the same algorithm I use in Emilua, but now I cannot observe the undesired result. I've tried to insert sleep_for() in a few spots in an attempt to mimic the delays/overhead from LuaJIT, but they were not enough to reproduce the behavior I observed in Emilua. So... ideas on how I can make this minimal test case stress more code branches from ASIO?
If you want to reproduce the problem locally, you can attempt the Lua code below:
if _CONTEXT ~= 'main' then
    local inbox = require 'inbox'
    print(inbox:receive())
    return
end

local actor2 = spawn_vm{
    module = '.',

    -- comment/remove inherit_context=false to make the code work
    inherit_context = false
}

actor2:send('hello')
Just run the program with:
emilua path/to/program.lua
The desired output would be the message "hello" printed in stdout (which happens very rarely on Linux, and happens every time on FreeBSD). The undesired output would be in the likes of:
Main fiber from VM 0x796707f86380 panicked: 'Broadcast the address before attempting to receive on it'
stack traceback:
    [string "?"]: in function 'receive'
    /home/vinipsmaker/t5.lua:3: in main chunk
    [C]: in function ''
    [string "?"]: in function <[string "?"]:0>
-- Vinícius dos Santos Oliveira https://vinipsmaker.github.io/
-- Vinícius dos Santos Oliveira https://vinipsmaker.github.io/

On Mon, Dec 30, 2024 at 2:04 AM Richard Hodges via Boost < boost@lists.boost.org> wrote:
If you want to run composed operations sequentially (with respect to each other) then you must coordinate this yourself via an “asynchronous mutex or semaphore”.
Asio’s timer object can be used as a semaphore:
- set to max timeout
- use cancel_one() to release the next job.
Asio seems to have thread-safe channels in its experimental namespace; these would work as well. To Richard's point, you're usually better off reaching for the channel abstraction here because it actually does what you want, just more straightforwardly. Something like this: https://godbolt.org/z/8zxTPsbG4
You can emulate this using the techniques Richard alluded to as well. You can use a timer like a condition variable and have your loop asynchronously block on it via `.async_wait()` with the maximum timeout. Wakeups happen by cancellation, which you'd ignore in your processing loop. This enables you to trivially handle the case where processing work items is an inherently asynchronous task composed of multiple I/O ops.
- Christian
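For completeness, a rough sketch of that channel approach (illustrative only, not the code behind the godbolt link; it assumes a recent Boost.Asio that ships asio::experimental::channel plus C++20 coroutines, and uses the plain channel on a single-threaded io_context rather than the thread-safe experimental::concurrent_channel):

#include <boost/asio.hpp>
#include <boost/asio/experimental/channel.hpp>
#include <iostream>
#include <string>

namespace asio = boost::asio;

// One pending job per channel message; the completion signature carries the payload.
using channel_t = asio::experimental::channel<void(boost::system::error_code, std::string)>;

// Consumer: receives jobs one at a time, in send order. Each iteration may
// itself perform further async work without breaking the ordering.
asio::awaitable<void> worker(channel_t& jobs)
{
    for (;;) {
        boost::system::error_code ec;
        auto job = co_await jobs.async_receive(asio::redirect_error(asio::use_awaitable, ec));
        if (ec)
            break;                             // channel closed: no more jobs
        std::cout << "processing " << job << '\n';
    }
}

// Producer: queues two jobs and closes the channel so the worker can stop.
asio::awaitable<void> producer(channel_t& jobs)
{
    co_await jobs.async_send(boost::system::error_code{}, "job-1", asio::use_awaitable);
    co_await jobs.async_send(boost::system::error_code{}, "job-2", asio::use_awaitable);
    jobs.close();
}

int main()
{
    asio::io_context ioc;
    channel_t jobs(ioc, 8);                    // buffered channel, capacity 8

    asio::co_spawn(ioc, worker(jobs), asio::detached);
    asio::co_spawn(ioc, producer(jobs), asio::detached);

    ioc.run();
}

The channel gives you the FIFO hand-off directly: jobs come out in async_send() order, and the worker is free to await other I/O per job without another job sneaking in.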
participants (5):
- Christian Mazakas
- Georg Gast
- Richard Hodges
- Vinnie Falco
- Vinícius dos Santos Oliveira