
On 11/14/2013 1:54 PM, foster brereton wrote:
On Thu, Nov 14, 2013 at 9:24 AM, Jeff Flinn wrote:
[snip]
All of the code except the call to perform_algorithm() deals with buffering, which ends up being redundant with the buffering typically provided by client code. The Boost vault hash API exposes a process_block() method that avoids this redundant buffering. That allows me to do something like:
boost::crypto::md5 h;
boost::for_each(block_range, [&h](const block& b) { h.process_block(b); other(b); });
h.input(tail_data_ptr, tail_data_count);
h.digest();
[snip]
I see what you mean now about the double-buffering, and my hash code is guilty of the same offense. I can look into modifying it to expose the process_block routine, however...
I have been profiling my SHA implementations, and copying the data around doesn't even show up on the radar -- the vast majority of the time, by far, is spent in the actual digest routine. The profile for MD5 may look different, but at least in the case of SHA I don't know how much straight buffer processing gains you.
It may not show up in profiling in isolation. My actual case defines h.process_block() as CompositeHash