
Lubomir,

Lubomir Bourdev wrote:
Stefan Seefeld wrote:
* It should be possible to create row-major, as well as column-major images, to speed up column-based algorithms. As pixels have 'channels' as well as 'semantic channels', images may have 'rows' as well as 'semantic rows'.
Can't you just use transposed_view ?
As you point out in your presentation, a y-gradient can be implemented by combining a transpose with an x-gradient. However, performance will suffer terribly due to all the cache misses. Therefore, if I know that I'm going to run a number of column-wise operations (FFTs come to mind), I may want to corner-turn my image first. I know I can do this explicitly, but sometimes it is good to push this into the type, and the transpose into the type conversion. This has better potential for optimization.
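To illustrate the cache argument (this is a plain-C++ sketch, not GIL code; the function and type names are mine), column-wise traversal of a row-major buffer strides through memory, while a one-time corner-turn makes each column contiguous for subsequent column-wise passes:

```cpp
#include <cstddef>
#include <vector>

// Summing a column of a row-major image: each step jumps 'w' elements,
// which is cache-unfriendly for large widths.
long column_sum(const std::vector<int>& img,
                std::size_t w, std::size_t h, std::size_t col) {
    long s = 0;
    for (std::size_t y = 0; y < h; ++y)
        s += img[y * w + col];
    return s;
}

// Corner-turn: copy into column-major storage so that a column of the
// original image becomes a contiguous run of 'h' elements.
std::vector<int> corner_turn(const std::vector<int>& img,
                             std::size_t w, std::size_t h) {
    std::vector<int> t(w * h);
    for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x)
            t[x * h + y] = img[y * w + x];
    return t;
}
```

The copy costs one pass over the image; pushing the transpose into the type (and the copy into the type conversion) would let repeated column-wise algorithms amortize it.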
* It may be worthwhile generalizing some of the API by parametrizing over dimensions (for example, to have a 'size(axis)' method instead of 'width()' / 'height()', or having
"get_dimensions(view)[n]" returns the size along n-th dimension
If your dimensions have different types, you could use: get_dimensions(view).template axis_value<N>();
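A minimal sketch of what such a dimension type might look like (illustrative only; the struct and member names here are mine, not GIL's actual definitions): a runtime `operator[]` works when all axes share one type, while a compile-time `axis_value<N>()` would remain usable even if the axes had different types:

```cpp
#include <cstddef>

// Hypothetical GIL-like 2D dimension type.
struct dims2 {
    std::ptrdiff_t x, y;

    // Runtime indexing: fine because both axes are ptrdiff_t here.
    std::ptrdiff_t operator[](std::size_t n) const { return n == 0 ? x : y; }

    // Compile-time indexing: the return type could differ per axis,
    // so this form generalizes to heterogeneous dimensions.
    template <std::size_t N>
    std::ptrdiff_t axis_value() const { return N == 0 ? x : y; }
};
```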
I see. Sorry I missed that!
a stride(axis) method returning the step size along rows and columns...
The step in bytes is only applicable to memory-based image views. For those, you could say:
"view.pixels().row_bytes()" for byte-step in y "view.pixels().pix_bytestep()" for byte-step in x
For a generic N-dimensional memory-based image you could use the more generic accessors:
"byte_step(view.template axis_iterator<N>())"
OK.
Implementation
==============
[...]
I think it would be very helpful to add sections to the tutorial that describe how to introduce custom pixel and image (well, locator) types. That would demonstrate the reasoning behind lots of the chosen concepts.
Doesn't the Mandelbrot set example show how to introduce new locator/image types?
Right. That's very useful! But I think there should be more such examples, notably to show how to bind existing code / types to GIL types.
As for custom pixels, perhaps even better is to make packed pixels part of GIL. Once this is done, making custom pixel types would be needed very rarely if at all...
Right, and I agree packed pixels should become part of GIL core. However, that isn't the point I'm arguing. :-) My point is that showing how to write custom pixel types is an excellent device to explain the design. I think I understand the relevant part of GIL much better now simply because I was interested in how to write a (5,6,5) pixel type to be binary-compatible with IPP. (I could have picked any other type; this was really just a way to get my feet wet.)
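For reference, the (5,6,5) case might be sketched like this (my own illustrative type, not the GIL or IPP definition): the value sits in a single 16-bit word, so an array of such pixels is layout-compatible with a raw RGB565 buffer:

```cpp
#include <cstdint>

// Hypothetical 16-bit packed RGB565 pixel: 5 bits red, 6 green, 5 blue.
struct rgb565_pixel {
    std::uint16_t bits;

    std::uint8_t red()   const { return (bits >> 11) & 0x1F; }
    std::uint8_t green() const { return (bits >> 5)  & 0x3F; }
    std::uint8_t blue()  const { return  bits        & 0x1F; }

    static rgb565_pixel pack(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
        return { static_cast<std::uint16_t>((r << 11) | (g << 5) | b) };
    }
};

// The whole pixel is exactly one 16-bit word, which is what makes
// binary compatibility with an external RGB565 buffer possible.
static_assert(sizeof(rgb565_pixel) == 2, "must match a 16-bit buffer element");
```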
The library should contain example programs to demonstrate all the individual aspects of the library.
You mean code snippets?
Code snippets would be good, but complete minimal programs would be even better. What better way to demonstrate an algorithm than to show how it transforms an input image into an output image. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...