Re: [boost] asio queue OOM stress test -- Windows deadlock

template <typename Allocator = std::allocator<void> >
class limits_service { ... };

template <typename Service>
class basic_limits
{
public:
  basic_limits(demuxer_type& d);
  void post_queue(std::size_t value);
  std::size_t post_queue() const;
  // ... and so on ...
};

typedef basic_limits<limits_service<> > limits;
Usage:
boost::asio::demuxer d;
boost::asio::limits l(d);
l.post_queue(42);
I thought about this some more, and I think it might be better to pass the limits to the demuxer constructor.

You have to forgive me; I'm pretty anal about memory management. I tend to allocate my memory at startup on Linux servers because of the way the OOM-killer works. I'm working on a patch to reactor_op_queue.hpp to pool as many of your internal data structures as possible. If I had the limits at construction, I could allocate the necessary internal structures when the demuxer is instantiated.

I think this is the right thing to do for server applications, because if the admin configures a server to handle n connections, I want to make my best attempt to handle those connections. If the system doesn't have enough resources, the admin should know about it at startup. This allocation strategy should probably be policy based, because for clients, allocating from the heap is more appropriate.

I wrote a bit more about this a while back here: http://baus.net/memory-management

Thanks again,
Christopher
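[Editor's note: a minimal sketch of the constructor-time limits idea being proposed. None of these class or member names are real asio API; `preallocating_demuxer`, `limits::post_queue`, and the internal `op` struct are hypothetical, chosen only to illustrate "allocate the internal structures when the demuxer is instantiated, and fail at startup if the system can't supply them".]

```cpp
#include <cstddef>
#include <vector>

// Hypothetical: the limits the admin configured, known at construction.
struct limits {
  std::size_t post_queue; // maximum number of queued operations
};

// Hypothetical demuxer-like class that pre-allocates its operation
// queue up front, so resource exhaustion is reported at startup
// rather than under load.
class preallocating_demuxer {
public:
  explicit preallocating_demuxer(const limits& l)
  {
    // reserve() allocates now; if the system cannot supply the
    // memory, std::bad_alloc is thrown here, at startup, where
    // the admin sees it -- not later, under load.
    queue_.reserve(l.post_queue);
  }

  std::size_t capacity() const { return queue_.capacity(); }

private:
  struct op { void (*fn)(void*); void* arg; }; // placeholder queued operation
  std::vector<op> queue_;
};
```

A client-oriented policy could simply make the constructor a no-op and let the queue grow from the heap on demand, which is the policy split described above.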

christopher baus wrote:
template <typename Allocator = std::allocator<void> > [--snip--] If I had the limits at construction, I could allocate the necessary internal structures when the demuxer is instantiated. I think this is the right thing to do for server applications, because if the admin configures a server to handle n connections, I want to make my best attempt to handle those connections. If the system doesn't have enough resources, the admin should know about it at startup.
This allocation strategy should probably be policy based, because for clients allocating from the heap is more appropriate.
I wrote a bit more about this a while back here: http://baus.net/memory-management
You're allocating + forcing initialization of memory at startup, yes?

As I understand it, the Linux OOM killer may *still* kill your application even if your application is "well behaved" (in the sense that it isn't actually responsible for the allocation that causes the OOM condition). It certainly doesn't always pick the application that caused the allocation failure.

The only reasonable thing to do is to turn off overcommit, or to add lots of swap and *pray* that your working set isn't larger than your physical memory.

Cheers,

--
Bardur Arantsson <bardurREMOVE@THISimada.sdu.dk>
                 <bardurREMOVE@THISscientician.net>

Sticks and stones may break my bones, but hollow-points expand on impact.

As I understand it, the Linux OOM killer may *still* kill your application even if your application is "well behaved" (in the sense that it isn't actually responsible for the allocation that causes the OOM condition). It certainly doesn't always pick the application that caused the allocation failure.
That's true. Yesterday my test program killed a mysqld that was minding its own business. But it's the best you can do, and I don't want to be the application that triggers the OOM-killer. Imagine a server that does dedicated proxying and all of a sudden takes a heavy load: that load shouldn't trigger the OOM-killer. If all your critical processes are running, they are all pre-allocated, and you have some memory to spare, you are basically good.

This is getting off topic, so I'll just say that pre-allocation is good for many server apps and leave it at that.
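[Editor's note: a sketch of the "allocating + forcing initialization at startup" technique the two posters are discussing. Under Linux overcommit, `new`/`malloc` can succeed without any physical pages being committed, so the allocation alone proves nothing; writing to every byte forces the kernel to back the pages, surfacing a shortfall at startup. The function name and the idea of returning the buffer as a long-lived pool are illustrative, not from the thread.]

```cpp
#include <cstddef>
#include <cstring>
#include <memory>

// Allocate a working-set-sized buffer at startup and touch every page.
// If the memory truly isn't available, this throws std::bad_alloc (or,
// under heavy overcommit, fails now rather than during peak load).
std::unique_ptr<char[]> reserve_working_set(std::size_t bytes)
{
  // May succeed without physical backing when overcommit is enabled.
  std::unique_ptr<char[]> pool(new char[bytes]);

  // Writing to every byte forces the kernel to commit real pages now.
  std::memset(pool.get(), 0, bytes);

  return pool;
}
```

On Linux, a server that must never have this region swapped out could additionally `mlock()` it, subject to `RLIMIT_MEMLOCK`.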
participants (2)
- Bardur Arantsson
- christopher baus