
On 3/31/07, Richard Day <richardvday@verizon.net> wrote:
Basically I would gather from this that a 10 GB file, for example, would take 30 GB of disk space to sort? That appears to be what you're suggesting here. And why 1/16 specifically? Wouldn't it be better to use available memory (free real, not paged) if possible, instead of specifically using 1/16th? 10 GB is more than most (non-server) machines have now, that is true, but 2-4 GB is becoming more and more common, and of course servers may have far more than 10 GB.
Seems to me that if you're going to do this much writing (2x the original size), wouldn't it be simpler to just use a memory-mapped file?

No, it is not 2x the original size on disk. This illustration should help. The original file:

o o o o o o o o o o
o o o o o o o o o o
o o o o o o o o o o

becomes (I leave out much data for simplicity) sorted runs:

o o o o o
o o o o o
o o o o o
o o o o o
o o o o o

which becomes

ooooo ooooo ooooo ooooo ooooo

all in one big file. That is what I am trying to say.
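The scheme sketched above is the classic external merge sort: read a memory-sized chunk, sort it, spill it to a temporary run file, then merge all the runs into one output file. A minimal Python sketch of that idea (the names external_sort and chunk_size are my own for illustration, not from this thread, and real code would pick chunk_size from available memory rather than a fixed fraction):

```python
import heapq
import os
import tempfile


def external_sort(infile, outfile, chunk_size=1000):
    """Sort a large line-oriented file using limited memory.

    Reads chunk_size lines at a time, sorts each chunk in memory,
    writes it out as a sorted "run" file, then merges all runs.
    """
    runs = []
    with open(infile) as f:
        while True:
            # Pull at most chunk_size lines into memory.
            chunk = [line for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()
            tmp = tempfile.NamedTemporaryFile("w+", delete=False)
            tmp.writelines(chunk)
            tmp.close()
            runs.append(tmp.name)

    # Merge the sorted runs into one big output file.
    # heapq.merge streams the inputs, so memory stays small.
    files = [open(r) for r in runs]
    try:
        with open(outfile, "w") as out:
            out.writelines(heapq.merge(*files))
    finally:
        for fh in files:
            fh.close()
        for r in runs:
            os.remove(r)
```

Disk usage peaks at roughly the original file plus the runs (or the runs plus the output), since the runs together are the same size as the input; the runs are deleted once the merge finishes.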