
On Tue, Aug 31, 2004 at 09:55:19AM -0600, Jonathan Turkanis wrote:
> file > bzip2 (binary) > text > (process) > bzip2 (binary) > file
>
> Line-ending conversions can be done by sticking a newline filter in between the binary and text filters. When converting_stream is up and running, code conversion will be inserted at the appropriate place in a filter chain consisting of mixed narrow- and wide-character components.
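(For concreteness, a minimal sketch of how such a chain might be assembled with the library's filtering streams. The bzip2 and newline filters, the push order, and io::copy are as documented for Boost.Iostreams; the file names, the posix line-ending target, and the absence of a real processing step are purely illustrative.)

    #include <fstream>
    #include <boost/iostreams/copy.hpp>
    #include <boost/iostreams/filtering_stream.hpp>
    #include <boost/iostreams/filter/bzip2.hpp>    // requires linking with libbz2
    #include <boost/iostreams/filter/newline.hpp>

    namespace io = boost::iostreams;

    int main()
    {
        // file > bzip2 (binary) > text: decompress, then normalize line endings.
        // Filters are pushed outermost-first; the device closes the chain.
        std::ifstream infile("input.bz2", std::ios::binary);
        io::filtering_istream in;
        in.push(io::newline_filter(io::newline::posix));
        in.push(io::bzip2_decompressor());
        in.push(infile);

        // text > bzip2 (binary) > file: recompress on the way out.
        std::ofstream outfile("output.bz2", std::ios::binary);
        io::filtering_ostream out;
        out.push(io::bzip2_compressor());
        out.push(outfile);

        // The "(process)" step would read from 'in' and write to 'out';
        // here the text is simply copied through unchanged.
        io::copy(in, out);
    }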
Hiya, me again :)

I suppose I should really just download the code and start playing with it in order to be able to review it, but I have no time for that. So please don't take my questions as formal criticism; I am simply interested as a possible (future) user, and perhaps even a contributor.

My concern, when I see those filter chains, remains performance as a result of unnecessary copying of data. Can you tell me whether the average filter copies the data from one place in memory to another? I would suppose so, because it is unlikely that an arbitrary filter can be trusted to use the streambuf as a 'work space'. And if so, does that mean that if I put ten filters in a chain, the data is copied ten times?

Please enlighten me :). Thanks in advance for your time!

-- 
Carlo Wood <carlo@alinoe.com>
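(To make the copying question concrete, here is what a minimal link in such a chain looks like: a character-at-a-time input filter that pulls each character from the component upstream of it and hands a possibly modified character downstream. The input_filter concept, io::get, and WOULD_BLOCK are from Boost.Iostreams; the filter's name and the uppercasing are invented for the sketch, and nothing here says how the library buffers data between links, which is exactly the question above.)

    #include <cctype>                            // std::toupper
    #include <boost/iostreams/char_traits.hpp>   // EOF, WOULD_BLOCK
    #include <boost/iostreams/concepts.hpp>      // io::input_filter
    #include <boost/iostreams/operations.hpp>    // io::get

    namespace io = boost::iostreams;

    // Each link in a chain is a component like this: it reads from whatever
    // sits upstream (another filter or the device) and returns data downstream.
    struct shout_filter : io::input_filter {
        template<typename Source>
        int get(Source& src) {
            int c = io::get(src);                 // fetch one character upstream
            if (c == EOF || c == io::WOULD_BLOCK)
                return c;
            return std::toupper(static_cast<unsigned char>(c));
        }
    };

    // Usage: pushed into a chain like the one quoted above, e.g.
    //   io::filtering_istream in;
    //   in.push(shout_filter());
    //   in.push(io::bzip2_decompressor());
    //   in.push(some_file);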