GIL: Buffer overflow when reading tiffs

When reading RGBA tiffs, there is a buffer overflow in tiff_io.hpp:

```cpp
void apply(const View& view) {
    ...
    std::vector<pixel<typename View::channel_t,
                      typename View::color_space_t::base> > row(view.width());
    for (int y = 0; y < view.height(); ++y) {
        io_error_if(TIFFReadScanline(_tp, &row.front(), y) != 1);
        std::copy(row.begin(), row.end(), view.row_begin(y));
    }
}
```

TIFFReadScanline() can return up to TIFFScanlineSize() bytes of data. In the case of RGBA images the scanline size is larger than the size of `row`, resulting in a buffer overflow (it may happen in other cases as well). A quick fix for that would be to do a `row.reserve(TIFFScanlineSize(_tp));` before the loop.

The RGBA images will come out corrupted anyway (since there is a mismatch in the number of channels); is there any plan for supporting them? Adding a new specialization to tiff_io.hpp like:

```cpp
template <>
struct tiff_read_support_private<bits16, rgba_t> {
    BOOST_STATIC_CONSTANT(bool, is_supported = true);
    BOOST_STATIC_CONSTANT(int,  bit_depth   = 16);
    BOOST_STATIC_CONSTANT(int,  color_type  = PHOTOMETRIC_RGB);
};
```

will work fine for static image types. Of course, that won't do for dynamic types.

-- Tilo

Tilo Nitzsche wrote:
A quick fix for that would be to do a row.reserve(TIFFScanlineSize(_tp));
Most definitely not. The correct fix is to pass TIFFScanlineSize(_tp) to the vector's constructor. Writing more bytes than the vector's size, even if the space has been reserved, is undefined behaviour and just might mess up the container's internal structure. Granted, I can't think of any implementation where it would actually do so, but it's the principle that matters: you do not write beyond a vector's size, even if the space has been allocated.

Sebastian Redl
participants (2)
-
Sebastian Redl
-
Tilo Nitzsche