
I have been doing profiling of my SHA implementations, and copying the data around doesn't even show up on the radar -- the vast majority of the time, by far, is in the actual digest routine. The profile for MD5 may look different, but at least in the case of SHA I don't know how much the straight-buffer-processing gains you.
Well, earlier I was talking about resource-critical microcontrollers. For a micro with, say, a few kilobytes of RAM, saving 64 of them
by eliminating redundant internal buffering is a real gain.
Personally, I see this as more of a design choice. But I think I will
try to optimize my own hashes accordingly.
Good stuff!
Sincerely, Chris.
On Thursday, November 14, 2013 7:54 PM, foster brereton wrote:
All the code but the call to perform_algorithm() deals with buffering, which ends up being redundant with the buffering typically provided by client code. The Boost vault hash API exposes a process_block() method that avoids this redundant buffering. This allows me to do something like:
boost::crypto::md5 h;
boost::for_each(block_range, [&h](const block& b){ h.process_block(b); other(b); });
h.input(tail_data_ptr, tail_data_count);
h.digest();
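To make the double-buffering concrete, here is a minimal C++ sketch (a hypothetical class, not the actual Boost vault API) of a digest type exposing both a raw process_block() and a buffering input() built on top of it. Clients that already hold whole 64-byte blocks can call process_block() directly and skip the memcpy into the internal buffer entirely -- which is also the 64 bytes of RAM a small micro could reclaim.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical sketch: a block-oriented hasher with two input paths.
class block_hasher {
public:
    static const std::size_t block_size = 64;

    // Consume one full block with no internal copy. A real digest routine
    // would mix `block` into internal state here; this sketch just counts
    // blocks to make the data flow visible.
    void process_block(const unsigned char* block) {
        (void)block;
        ++blocks_processed_;
    }

    // Buffering path: copies arbitrary-length input into buf_, flushing
    // each time a full block accumulates -- the redundant copy the thread
    // is discussing.
    void input(const unsigned char* data, std::size_t n) {
        while (n > 0) {
            std::size_t take = block_size - fill_;
            if (take > n) take = n;
            std::memcpy(buf_ + fill_, data, take);
            fill_ += take; data += take; n -= take;
            if (fill_ == block_size) { process_block(buf_); fill_ = 0; }
        }
    }

    std::size_t blocks_processed() const { return blocks_processed_; }
    std::size_t buffered() const { return fill_; }

private:
    unsigned char buf_[block_size];   // the 64 bytes a micro could save
    std::size_t fill_ = 0;
    std::size_t blocks_processed_ = 0;
};
```

With this shape, the caller streams full blocks through process_block() and hands only the tail fragment to input(), so the internal buffer is touched at most once per message rather than once per block.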
[snip]

I see what you mean now about the double-buffering, and my hash code is guilty of the same offense. I can look into modifying it to expose the process_block routine, however...

I have been doing profiling of my SHA implementations, and copying the data around doesn't even show up on the radar -- the vast majority of the time, by far, is in the actual digest routine. The profile for MD5 may look different, but at least in the case of SHA I don't know how much the straight-buffer-processing gains you.

-foster

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost