
Lubomir Bourdev wrote:
But how a specific image (or view) is organized in memory (or is even synthetic, as in your example), that is what I call layout. But you mix this in from the very beginning (when you define the RGB class) all the way to the end (member functions of the templated image class take arguments that specify row alignment). I think that's no good at all.
There is one important point that I keep repeating but for some reason have failed to get through to you. Let me try again:

Seems I failed to do the same job too ;)
GIL defines abstract concepts and concrete models of those concepts.
Our design principles are that the concepts are as general as possible, to the degree that does not affect the performance and usability of the models. For example, take a look at the RandomAccessNDImageConcept. Does it say anything about the order of channels, or the color space, or that it even operates on pixels? Does it even say that the image is two dimensional? No. For what it's worth, that level of abstraction allows you to create a model of a video clip or a synthetic 3D volume using voxels if you'd like.
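(For concreteness, here is a rough sketch - not GIL's actual source, and written with C++20 concepts purely for brevity, since GIL itself uses concept-checking classes - of how abstract that top-level concept is. All names other than the idea of RandomAccessNDImageConcept are made up for illustration.)

#include <concepts>

// Sketch only: the most abstract image concept constrains nothing but
// element access over an N-dimensional domain - no pixels, no channels,
// no color space, not even a fixed number of dimensions.
template <typename Img>
concept RandomAccessNDImageSketch =
    requires(const Img& img, typename Img::point_t p) {
        typename Img::value_type;   // whatever lives at each grid point
        typename Img::point_t;      // an N-dimensional coordinate
        { img.dimensions() } -> std::convertible_to<typename Img::point_t>;
        { img(p) }           -> std::convertible_to<typename Img::value_type>;
    };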
Then there are refinements of RandomAccessNDImageConcept that allow for 2D images, and further refinements dealing with concrete, memory-based images, with the corresponding additional required functionality.

But there you have row alignment support - why?
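(For context, a small illustration of what row-alignment support means for a memory-based refinement; the function is hypothetical, not part of GIL.)

#include <cstddef>

// Rows of a memory-based image may be padded so that every row starts on
// an aligned boundary; the stride is then the padded size, not the raw one.
std::size_t aligned_row_bytes(std::size_t width_in_pixels,
                              std::size_t bytes_per_pixel,
                              std::size_t alignment)   // e.g. 4
{
    std::size_t raw = width_in_pixels * bytes_per_pixel;
    return (raw + alignment - 1) / alignment * alignment;   // round up
}

// Example: a 5-pixel-wide RGB8 row holds 15 bytes of pixel data, but with
// 4-byte alignment its stride becomes 16 bytes, and iterators must skip
// the padding when stepping from one row to the next.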
As for the models, you are correct to point out that the models are very much dependent on the particular representation of the image in memory (what you call the "layout"). This is not a design flaw; it is very much intentional, because the models HAVE to know what they are supposed to model. The pixel iterator concept doesn't say anything about planar vs interleaved representation. But there is a planar model of pixel iterator (planar_ptr) and an interleaved model, and they are different and very much dependent on the memory organization they are representing.
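(To make the two models concrete, here is a rough sketch of the two memory organizations; the type names are illustrative, not GIL's exact classes.)

#include <cstdint>

// Interleaved RGB: the channels of one pixel are adjacent in memory, so a
// raw pointer to a packed pixel struct already models a pixel iterator.
struct rgb8_pixel { std::uint8_t r, g, b; };
using interleaved_iterator = rgb8_pixel*;   // ++ advances by one whole pixel

// Planar RGB: each channel lives in its own plane, so the iterator bundles
// one pointer per plane and advances them in lockstep; dereferencing would
// yield a proxy referring to the three channel locations.
struct planar_rgb8_ptr {
    std::uint8_t* r;
    std::uint8_t* g;
    std::uint8_t* b;
    planar_rgb8_ptr& operator++() { ++r; ++g; ++b; return *this; }
};

// Both types can satisfy the same abstract pixel-iterator concept, even
// though the memory layouts they represent are completely different.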
The users of the library don't have to deal directly with, or know the specifics of, the underlying models.
What's the point of the BGR class then?
VIEW::x_iterator will return a raw pointer for interleaved images, a planar pointer for planar images, a step iterator for subsampled images, or whatever is the optimal model that will do the job.
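(A hedged sketch of what that buys generic code: the algorithm names the view's x_iterator and never spells out which model it gets. row_begin, row_end and value_type are assumed members of the hypothetical View.)

template <typename View>
void fill_row(const View& view, const typename View::value_type& p) {
    // x_iterator is a raw pointer for an interleaved view, a planar pointer
    // for a planar view, a step iterator for a subsampled view, and so on.
    typename View::x_iterator it  = view.row_begin(0);
    typename View::x_iterator end = view.row_end(0);
    for (; it != end; ++it)
        *it = p;   // the same source compiles against every layout
}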
BTW, I would reject the "view" concept altogether, since it's actually the same as the "image" concept: every subset of the original image (such as a subsampled one) is still an image :)
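(For context, a minimal sketch - illustrative types, not GIL's API - of the distinction the two concepts draw: both can expose the same pixel-access interface, which is the overlap being pointed at, but an image owns its storage while a view only refers into it.)

#include <cstddef>
#include <cstdint>
#include <vector>

struct rgb8_pixel_t { std::uint8_t r, g, b; };

struct image_t {                          // owns (and frees) its pixels
    std::vector<rgb8_pixel_t> pixels;
    int width = 0, height = 0;
};

struct view_t {                           // non-owning window into an image
    rgb8_pixel_t*  origin = nullptr;      // points into someone else's storage
    int            width = 0, height = 0;
    std::ptrdiff_t row_stride = 0;        // lets a sub-rectangle or a
    std::ptrdiff_t x_step     = 1;        // subsampled grid look like a 2D image
};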
I always wanted to implement most of those algorithms, but right now I just don't feel GIL is the platform where I want to do it, sorry :(

Pavel - you may have given up on us, but we haven't given up on you. Some of the points you make (decoupling the ordering of the channels and more automatic pixel creation) are worth looking into further. Since you have obviously spent a lot of time thinking about and working on a very similar area, your input is invaluable. If our submission is successful, we will be sure to give you credit for your suggestions.
Thanks for your patience and politeness in explaining all your points to such a bad listener as me; I shouldn't have acted that way. Yeah, it's still true that I like my way more, but my library is nowhere near what your GIL already has, so I urge you to ask for a formal review. I'm not planning a submission for at least a while, and if your library has been accepted by then, I actually see no reason to submit mine.

P.S. I think you should add support for sRGB too ;), of course if you haven't done that already.

-- Pavel Chikulaev