
On Thu, 10 Mar 2011 17:05:56 -0800 Scott McMurray <me22.ca+boost@gmail.com> wrote:
Can you elaborate on why someone would want to clear the memory, but not want to actually be secure?
Barring extremely sensitive information like government-level secrets, there are generally only two things that a developer needs to worry about: [...]
[...] Clearing the memory from the bigint doesn't help when iostream cached the bytes of the file from which it was read, nor does it protect the information that the NIH implementation of RSA was used to decrypt.
If you're reading and writing sensitive values to disk or across the network, you've got bigger security concerns than the library can deal with. If you're sending them through another library, obviously the other library must also have measures in place to keep them secure. As I said, airtight security is a hard problem.
Any useful attempt at security will involve more than a single number, so any number that wants to be used securely should have a way to hook into an existing system. An allocator might be a reasonable way to do this, since it could handle clearing, telling the OS not to swap the memory, or whatever the user decides is important enough, and be applied to the xint, to the vector used in a custom streambuf, etc.
Certainly. But forcing anyone who wants even low-level security to write an allocator, when the library itself can handle that much very easily, seems foolish.
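As a rough illustration of the allocator idea discussed above, here is a minimal C++ sketch of a zeroing allocator that clears memory before handing it back to the system. It is an assumption of what such a hook might look like, not XInt's actual interface; a real implementation would likely also pin pages out of swap (e.g. mlock/VirtualLock) and use a compiler-proof wipe such as explicit_bzero or SecureZeroMemory where available.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical zeroing allocator: wipes each block before freeing it.
// The volatile byte pointer discourages the compiler from optimizing
// away the "dead" stores to memory that is about to be released.
template <typename T>
struct secure_allocator {
    using value_type = T;

    secure_allocator() = default;
    template <typename U>
    secure_allocator(const secure_allocator<U>&) {}

    T* allocate(std::size_t n) {
        return std::allocator<T>{}.allocate(n);
    }

    void deallocate(T* p, std::size_t n) {
        volatile unsigned char* bytes =
            reinterpret_cast<volatile unsigned char*>(p);
        for (std::size_t i = 0; i < n * sizeof(T); ++i)
            bytes[i] = 0;  // wipe before returning to the heap
        std::allocator<T>{}.deallocate(p, n);
    }
};

// All instances are interchangeable (the allocator is stateless).
template <typename T, typename U>
bool operator==(const secure_allocator<T>&, const secure_allocator<U>&) {
    return true;
}
template <typename T, typename U>
bool operator!=(const secure_allocator<T>&, const secure_allocator<U>&) {
    return false;
}
```

The same allocator could then be plugged into a vector backing a custom streambuf, or into the bignum type itself, so that every container in the chain shares one clearing policy, e.g. `std::vector<unsigned char, secure_allocator<unsigned char>> buf;`.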
Still, I think that even implying that home-grown security is acceptable is a terrible idea. Even if someone doesn't need NSA-resistant security, why would doing custom RSA with a big-number library ever be a better idea than using a proper crypto toolkit?
XInt's predecessor was written because my company had a commercial project that needed public-key cryptography. For reasons I won't go into, it couldn't use non-system DLLs. This was around 1998; Microsoft's crypto API didn't exist at that point (so far as I know), commercial crypto libraries were well beyond our shoestring budget, and open-source software was almost universally full-GPL and unavailable for our purposes.

On top of that, we didn't need strong security in the implementation. The part of the program that would run on the end user's machine didn't have any sensitive information -- that was the whole point of using public-key encryption and signatures -- and the part that did would run on machines that we controlled. I'd also previously written such a library for research purposes. The decision would probably be different today, but at the time, writing our own was only logical.

My point is that you can't predict what needs a program might have that would lead its developers to want to implement their own crypto code. Yes, it's a really bad idea in general, because it's extremely hard to get right, but sometimes the hard parts aren't needed, and there are other legitimate reasons for doing it.

-- 
Chad Nelson
Oak Circle Software, Inc.