On Tue, May 10, 2016 at 9:29 AM, Norman wrote:
For instance, I connected 30,000 dummy clients to my server, so the memory consumption went up to ~50% of my available memory. Then I disconnected them (leading to a lot of destructed objects), which should lead to a lot of freed memory. But my application still claims 50% of my RAM, according to RSS. If I now connect another 30,000 clients, my RSS changes only marginally. I cannot understand why my application never releases any of the claimed memory to the OS.
You're interpreting the data wrong. Your app needed the memory; your app asked the heap allocator for memory; the heap allocator asked the OS; the OS gave it the memory. Your app then told the heap allocator it didn't need the memory any more. What did the heap allocator do? We don't know, but it probably held on to some of it in a free pool, or maybe all of it. But let's suppose it surrendered everything back (which would be silly and lead to thrashing, but let's suppose). Even then, it has no control over what the OS keeps resident for your process. At this point, it's up to the OS to decide what to do with the pages in the RSS. If nobody else needs them, why should it pull them back? If your app needed them at some point, and nobody else has needed them since, then the OS is smart to just leave them in core on behalf of your process, in case it still needs them.
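To make that concrete, here is a minimal, Linux-specific sketch (not from the thread; the client count and buffer size are invented) that allocates a batch of per-"client" buffers, frees them, and prints the process's VmRSS before and after:

```cpp
// Minimal, Linux-specific sketch: allocate per-"client" buffers, free them,
// and watch the resident set size (VmRSS). Numbers are for illustration only.
#include <cstdio>
#include <vector>

// Read this process's VmRSS (in kB) from /proc/self/status.
static long rss_kb() {
    std::FILE* f = std::fopen("/proc/self/status", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (std::fgets(line, sizeof line, f)) {
        if (std::sscanf(line, "VmRSS: %ld", &kb) == 1) break;
    }
    std::fclose(f);
    return kb;
}

int main() {
    std::printf("RSS before connect:   %ld kB\n", rss_kb());

    // Simulate 30,000 connected clients, each owning a small heap buffer.
    std::vector<std::vector<char>> clients(30000);
    for (auto& c : clients)
        c.assign(4 * 1024, 'x');   // touch the pages so they become resident
    std::printf("RSS while connected:  %ld kB\n", rss_kb());

    // A small long-lived allocation made afterwards (like one more client)
    // can sit near the top of the heap and keep the allocator from trimming
    // it, much as interleaved object lifetimes do in a real server.
    std::vector<char> survivor(64, 'p');

    // "Disconnect" everyone: the objects are destroyed and their memory goes
    // back to the heap allocator -- not necessarily back to the OS.
    clients.clear();
    clients.shrink_to_fit();
    std::printf("RSS after disconnect: %ld kB (survivor: %c)\n",
                rss_kb(), survivor[0]);
    return 0;
}
```

On a typical glibc system the last figure usually stays close to the peak: the freed chunks sit in the allocator's free lists, and the kernel has no reason to evict resident pages that nobody else is asking for.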
It feels like there is some kind of capacity inside the application (a map-like container, for instance) which can only grow and never shrink. Since its capacity never decreases, the claimed memory is never released.
Your expectation is wrong. -- Chris Cleeland
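If the goal really is to hand freed heap space back to the OS rather than leave it in the allocator's free pool, one knob worth knowing about is glibc's malloc_trim(). This is a glibc-specific sketch, not something from the thread, and other allocators (jemalloc, tcmalloc) have their own equivalents:

```cpp
// glibc-specific sketch: explicitly ask the allocator to give free heap
// space back to the OS. Not standard C++.
#include <malloc.h>   // malloc_trim (glibc)
#include <cstdio>
#include <cstdlib>

int main() {
    // Allocate and free a pile of blocks; the freed space normally stays
    // with the allocator for reuse.
    void* blocks[1000];
    for (int i = 0; i < 1000; ++i) blocks[i] = std::malloc(64 * 1024);
    for (int i = 0; i < 1000; ++i) std::free(blocks[i]);

    // Ask glibc to return as much free heap memory to the system as it can.
    // Returns 1 if some memory was actually released.
    int released = malloc_trim(0);
    std::printf("malloc_trim released memory: %d\n", released);
    return 0;
}
```

Even then, whether and when the corresponding pages disappear from the process's RSS is still the kernel's call.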