[MSM] Stack blown with deferred events (1.59.0)
Hi Boost-Users and Christophe,

I'm having an issue with MSM's deferred events feature. I've written a very basic example of a state machine that simulates batching: events are buffered in the deferred message queue on receipt, and then handled in another state once a watermark has been hit. This isn't working as well as I'd hoped.

I've made the batch size configurable via a constructor argument on the state machine, and I'm using guards in the transition table to do the checking. While this works functionally, when I set a reasonably large batch size (10,000) the process dumps core. Some digging in gdb suggests the stack is being blown. `ulimit -s` shows 8192 as my stack limit; raising this to 16384 makes the 10k batch size work, although the performance is pretty bad. Raising the batch size beyond the stack limit once again breaks MSM.

I've attached my test program, and am compiling and running locally with:

```
clang++ -std=c++11 -o batcher -g batch_test.cpp
./batcher <batch size> [total events=100000]
```

To summarize, I have a few questions about my scenario:

1. Am I using the deferred events feature correctly? I believe I've followed the documentation, but it's entirely possible there's something I'm missing!

2. Is this a valid use case for deferred events, or something that wouldn't be recommended? Handling of deferred events seems fine normally, but with a large number of events the performance suffers significantly. For example, `time` shows a batch size of 10 taking 0.470s, while a batch size of 5000 takes 24.415s. This gap closes significantly when built with -O2 due to inlining, but it's still significant.

3. The blown stack suggests to me that items are processed from the deferred events queue recursively, rather than by a linear iteration over the container (a deque in my case). I've tried to follow this in the MSM code, but it's heavy stuff.
Given this scenario, would I be better off implementing my own queuing mechanism?

Thanks,
Tom
participants (1)
- Tom Gibson