If you implemented NuDB as a simple data file plus a memory-mapped key file, and always atomically appended transactions to the data file when inserting items, then after a power loss you could check whether the key file mentions extents that are impossible given the size of the data file. You could then rebuild the key file simply by replaying the data file, taking care to ignore any truncated final append.
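Roughly, the rebuild pass might look like this (purely illustrative: the record layout, `record_header`, `rebuild_key_file` and `reinsert` are my own sketch, not NuDB's actual format):

```cpp
// Rough sketch only: rebuild the key file by replaying the data file.
#include <cstdint>
#include <cstdio>

struct record_header
{
    uint64_t key;   // item key
    uint32_t size;  // payload bytes following this header
};

// Replays records until the data runs out or the final append is found
// truncated. Returns the offset up to which the data file replayed cleanly.
uint64_t rebuild_key_file(FILE *data, uint64_t data_file_size,
                          void (*reinsert)(uint64_t key, uint64_t offset))
{
    uint64_t offset = 0;
    record_header h;
    while (offset + sizeof(h) <= data_file_size &&
           fread(&h, sizeof(h), 1, data) == 1)
    {
        if (offset + sizeof(h) + h.size > data_file_size)
            break;                          // truncated final append: ignore it
        reinsert(h.key, offset);            // repopulate the key file's buckets
        offset += sizeof(h) + h.size;
        if (fseek(data, (long)h.size, SEEK_CUR) != 0)
            break;                          // skip past the payload
    }
    return offset;
}
```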
I think this trades an atomic `truncate(0)` assumption for an atomic multi-block overwrite assumption, so it seems more likely to suffer a torn write that is hard to notice.
Implied in my proposal was that every record appended to the data file would be hashed. You need this because atomic append does not send those appends to storage in order, so an append could be torn.
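For example, each appended record could carry a hash of its own contents, verified on replay. This is only a sketch: FNV-1a stands in for whatever checksum you would actually use, and `hashed_record` and `record_intact` are illustrative names:

```cpp
// Sketch of a self-checking record: the hash covers everything after
// the hash field, so a torn append fails verification on replay.
#include <cstddef>
#include <cstdint>

// FNV-1a, standing in for a real checksum such as xxHash.
uint64_t hash_bytes(const void *p, size_t n)
{
    const unsigned char *b = static_cast<const unsigned char *>(p);
    uint64_t h = 14695981039346656037ull;              // FNV-1a offset basis
    while (n--) { h ^= *b++; h *= 1099511628211ull; }  // FNV-1a prime
    return h;
}

struct hashed_record
{
    uint64_t hash;  // hash of the rest of the header plus the payload
    uint64_t key;
    uint64_t size;  // payload bytes following the header
};

// `rec` points at a record mapped or read from the data file, with its
// payload contiguous after the header.
bool record_intact(const hashed_record *rec)
{
    const char *after_hash =
        reinterpret_cast<const char *>(rec) + sizeof(rec->hash);
    size_t n = (sizeof(*rec) - sizeof(rec->hash)) + rec->size;
    return hash_bytes(after_hash, n) == rec->hash;
}
```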
That would be a reasonable power-loss recovery algorithm: a little slow for large databases, but safe, reliable and predictable, and it would only run on a badly closed database. You can also turn off fsync entirely and let the atomic appends land on storage in an order probably close to the append order. It ought to be quicker than NuDB by a fair bit: far fewer i/o ops and a simpler design.
How would it notice that a bucket was partially overwritten though? Wouldn't it have to _always_ inspect the entire key file?
My memory-mapped key file would be something like:

```cpp
struct key_file
{
    std::atomic<uint64_t> data_file_extent;  // hypothetical member: data file size the buckets reflect
    // fixed-size hash buckets would follow
};
```