On Fri, May 30, 2008 at 09:56:45AM -0400, Jason Sachs wrote:
usage, each process's "virtual KB" usage goes up by 512MB, but its "working set KB" usage doesn't increase until I actually allocate memory within the shared memory segment.
Excellent :-) Working set is the actual amount of RAM the process is using, while virtual memory is just the size of its virtual address space -- pages that may not yet have entered the working set, or may not have been "committed" at all. (I believe "commit" is NT's technical term for reserving backing store -- physical RAM or pagefile -- for a page; the page enters the working set when it is first faulted in.) Is there a separate column for "committed memory"? Virtual is, well, just reserved; committed is actually backed by RAM or pagefile; working set is what is currently in RAM (usually less than committed). Again, I'm not an NT expert -- please cross-check the above paragraph(s) with other sources.
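You can see the same reserve-vs-touch distinction on a POSIX system, which is what I have handy -- this is a Linux sketch, not NT, and it reads the Linux-specific /proc/self/status fields (VmSize is roughly NT's "virtual KB", VmRSS roughly "working set KB"):

```python
import mmap

def vm_kb(field):
    """Read a field (in kB) from /proc/self/status, e.g. 'VmSize:' or 'VmRSS:'."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field):
                return int(line.split()[1])
    return -1

SIZE = 512 * 1024 * 1024  # 512 MB, like the SHM segment in the experiment

vsz0, rss0 = vm_kb("VmSize:"), vm_kb("VmRSS:")
m = mmap.mmap(-1, SIZE)                    # map 512 MB: address space grows...
vsz1, rss1 = vm_kb("VmSize:"), vm_kb("VmRSS:")
m[0:1024 * 1024] = b"\x01" * (1024 * 1024) # ...but RAM is used only once touched
rss2 = vm_kb("VmRSS:")

print(f"VmSize grew {(vsz1 - vsz0) // 1024} MB on map; "
      f"RSS grew {rss1 - rss0} KB on map, {rss2 - rss1} KB after touching 1 MB")
```

Mapping the 512 MB makes VmSize jump by ~512 MB immediately, while VmRSS barely moves until the pages are actually written.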
Sounds good, but the 6th one of these failed and I got a warning saying my system was low on virtual memory. So it sounds like there is a 4GB total
I'd rather say that you're low on swap. Each SHM segment needs a corresponding amount of swap space to serve as backing store, should you decide to actually use all of the reserved memory: when the total working-set size of all programs exceeds the amount of physical memory (minus kernel memory), some pages have to be swapped out to backing store -- in this case, swap.

Note that the swap space, too, is merely _reserved_ -- the kernel needs to ensure it's there before it hands out the SHM segment, but it will not be used unless you run short on physical memory. I.e. a mere swap space _reservation_ will not slow down your system or program.

Try increasing the amount of swap space (so that it's [64MB * # of programs] larger[*] than [SHM segment size * # of programs]), repeat the experiment and see what happens. With 6 programs x 512MB, you should be safe at 3GB + the amount of physical RAM + an extra ~1GB for everything else on the system.

[*] Rule of thumb: every process needs additional VM for stack, data, code, etc.
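Just to make the bracketed rule of thumb concrete, here is the arithmetic for the experiment as described (6 processes, 512 MB segment each, 64 MB per-process slack):

```python
n_procs = 6       # six copies of the program, per the experiment above
shm_mb = 512      # 512 MB SHM segment per process
overhead_mb = 64  # rule-of-thumb extra VM per process (stack, data, code, ...)

swap_mb = n_procs * (shm_mb + overhead_mb)
print(swap_mb)  # 3456 MB, i.e. the ~3GB-plus-extra figure above
```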
which seems silly since each process should have its own address space and
It does.
the per-process limit is), I should be able to reserve an unlimited amount of total address space. No can do. :(
What do you mean by "total address space"? Total address space == RAM + swap (which is, I guess, what NT calls "virtual memory"), so it is not unlimited. It is quite reasonable that the kernel refuses to overcommit memory (i.e. does not allow you to reserve more than the total address space); simulating truly unlimited memory quickly leads to nasty situations (read about Linux's out-of-memory killer).
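For comparison, Linux makes this policy tunable -- a quick Linux-only check (again, not applicable on NT) of how strictly the kernel accounts reservations:

```python
# 0 = heuristic overcommit (default), 1 = always overcommit,
# 2 = strict accounting -- refuse reservations beyond RAM + swap,
#     which is the behavior NT is exhibiting here.
with open("/proc/sys/vm/overcommit_memory") as f:
    mode = int(f.read())
print(mode)
```

With mode 2, Linux would fail the sixth 512MB reservation exactly the way WinXP did.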
So strategy #1 of being profligate in choosing shared memory segment size fails on WinXP; there's a significant resource cost even if you don't actually allocate any memory. Drat.
Well, the only resource cost that I can see is disk space reserved for swap. Given today's disks, I don't see that being a problem if it buys you a simpler programming model. (And to make it clear, just in case: this is my comment on your particular application; I do *not* recommend this approach as a general programming practice!)