On Wed, Mar 29, 2017 at 5:04 PM, Niall Douglas via Boost wrote:
> As that paper Lee linked to points out, everybody writing storage
> algorithms - even the professionals - consistently gets sudden power
> loss wrong. That paper found power-loss bugs in the OS, the filing
> systems, all the major databases, the source control implementations,
> and so on. This is despite all of those being written and tested very
> carefully for power-loss correctness. They ALL made mistakes.
I have to agree. For now I am withdrawing NuDB from consideration; the
paper that Lee linked is very informative.

However, just to make sure I have the scenario that Lee pointed out
straight in my head, here is the sequence of events:

1. NuDB makes a system call to append to the file.
2. The file system updates the metadata, increasing the recorded file size.
3. The file system writes the new data into the correct location.

Lee, and I think Niall (hard to tell through the noise), are saying that
if a crash occurs after step 2 but before or during step 3, the contents
of the new portion of the file may be undefined. Is this correct? If so,
then I do need to go back and make improvements to prevent this (see the
sketch below).

While it's true that I have not seen a corrupted database despite
numerous production deployments and data files exceeding 2TB, it would
seem this case is sufficiently rare (and data-center hardware
sufficiently reliable) that it is unlikely to have come up.
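To make the hazard concrete, here is a minimal POSIX sketch of the
write-then-commit ordering that would close this window. This is not
NuDB's actual code; the function names and the commit-marker format are
hypothetical, and it assumes fsync on the target platform flushes both
the data and the file-size metadata.

    #include <unistd.h>  // write, fsync
    #include <cstddef>
    #include <cstdint>

    // Hazardous pattern: a single bare append. The file system may
    // commit the new file size (metadata) before the data blocks reach
    // stable storage, so a crash in between leaves the tail of the
    // file undefined.
    bool naive_append(int fd, void const* buf, std::size_t len)
    {
        return ::write(fd, buf, len) == static_cast<ssize_t>(len);
    }

    // Mitigation sketch: append the payload, fsync so the bytes are
    // durable, then append a small commit marker and fsync again. The
    // marker is only reachable once the payload it covers is on disk.
    bool durable_append(int fd, void const* buf, std::size_t len)
    {
        if (::write(fd, buf, len) != static_cast<ssize_t>(len))
            return false;
        if (::fsync(fd) != 0)       // payload durable first
            return false;
        // Hypothetical commit record; a real one would carry a
        // checksum over the payload as well.
        std::uint64_t const marker = 0xC0FFEEull ^ len;
        if (::write(fd, &marker, sizeof(marker)) !=
                static_cast<ssize_t>(sizeof(marker)))
            return false;
        return ::fsync(fd) == 0;    // commit record durable second
    }

On recovery, a reader following this scheme would scan for the last
valid commit marker and discard anything after it, rather than trusting
the file size the metadata reports.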