
Hi Joel,

Thanks for the time you spent reviewing GIL and for your vote! The AGG integration example is certainly very interesting, and we will add it to the list of GIL example files. Thanks for doing all this proof-of-concept work! The Fusion integration is a neat idea and I see no immediate obstacles to doing this. The less code in GIL, the better :-) Let's discuss details off-line.
- What is your evaluation of the implementation?
B+. I thought I'd give some negative marks here because I was not able to see any regression tests, but Lubomir quickly pointed out that there are some in the ASL repository. Indeed there are tests. However, I give this a B+ because the tests are not extensive and do not cover much; I see only 6 tests. A lot more work should be invested in this area.
I am not sure you found all of the regression tests. Our regression suite is fairly extensive, though the code is not very well organized and we don't test all aspects systematically. I found it _really_ hard to do comprehensive testing of generic components, especially components of this size. But we do cover images and image views: planar and interleaved, gray and RGB, images with channel permutations (BGR vs. RGB), subsampled and color_converted views, and piping them through all possible view transformations (rotation, nth_channel, etc.). We run this with both memory-based and synthetic images (the Mandelbrot set). We invoke the histogram and gradient algorithms, as well as some STL algorithms, reverse iterators, etc. Running our tests generates 98 image files that result from applying various combinations of the algorithms and image views. Our regression code compares each of these with reference images and throws an exception if there are differences. We also have compile-time tests that use concept checks, and some basic performance tests that compare GIL loops with hand-written C loops, though those are rudimentary and not very thorough.
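To give a flavor of what one of these checks looks like, here is a minimal sketch; it is not lifted from the suite, and the file names, the PNG I/O calls and the particular view combination are just illustrative. It pipes a source image through two view transformations, writes the result the way the suite writes its output files, and compares it pixel-for-pixel with a stored reference:

    // Minimal sketch of one regression check (illustrative only; file names and
    // the use of the PNG I/O extension are assumptions, not the actual suite).
    #include <boost/gil/gil_all.hpp>
    #include <boost/gil/extension/io/png_io.hpp>
    #include <stdexcept>

    using namespace boost::gil;

    void check_flipped_subsampled()
    {
        rgb8_image_t src;
        png_read_image("test_input.png", src);        // hypothetical source image

        // Pipe two lazy view transformations together and write the result out,
        // the way the suite generates its output images.
        png_write_view("out_flipped_subsampled.png",
                       subsampled_view(flipped_up_down_view(const_view(src)), 2, 2));

        // Compare the generated file against a previously blessed reference.
        rgb8_image_t out, ref;
        png_read_image("out_flipped_subsampled.png", out);
        png_read_image("ref_flipped_subsampled.png", ref);   // hypothetical reference
        if (out.dimensions() != ref.dimensions() ||
            !equal_pixels(const_view(out), const_view(ref)))
            throw std::runtime_error("regression failure: flipped+subsampled view");
    }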
I also notice that GIL does not follow Boost conventions. I expected something like this directory structure:
+ boost
  + gil
+ libs
  + gil
    + test
    + doc
    + example
I expect the authors to follow this convention if/when GIL is accepted.
Of course!
- What is your evaluation of the documentation?
B-. I enjoyed the tutorial and the Design guide a lot. The Breeze presentation is a first in Boost; it is entertaining and insightful. Like other people, I too noticed some rough edges and inconsistencies. For example, at the start of the presentation the author says "if you don't know about STL iterators, think of them like pointers" (or something to that effect). Then later, he shows off some MPL and Lambda snippets without hesitation. I think the author is not consistent about his target audience. Is this for newbies with no knowledge of STL iterators or for advanced folks with an MPL/Lambda background? Make up your mind. Be consistent.
yes, good point!
In any case, a brief backgrounder on MPL and Lambda would be in order.
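For example, such a backgrounder could open with the one MPL idiom GIL leans on most: parameterizing any_image with an MPL sequence of the image types it may hold at run time. Something along these lines, perhaps (the particular set of image types is just an illustration):

    // Sketch of the kind of MPL usage that shows up in GIL:
    // any_image is parameterized by an MPL sequence of allowed image types.
    #include <boost/gil/gil_all.hpp>
    #include <boost/gil/extension/dynamic_image/any_image.hpp>
    #include <boost/mpl/vector.hpp>

    using namespace boost::gil;

    // The set of image types this run-time image may hold (illustrative choice).
    typedef boost::mpl::vector<gray8_image_t, rgb8_image_t, rgb16_image_t> my_image_types;
    typedef any_image<my_image_types> my_any_image_t;

A couple of paragraphs like this, plus pointers to the MPL and Lambda documentation, would probably be enough for the part of the audience that is new to those libraries.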
The Doxygen documentation, I have to be blunt, was not useful to me at all. I ended up reading the header files, which were more informative. Let's take an example:
http://opensource.adobe.com/gil/html/group___channel.html
IMO, this page is totally useless. Example:
Detailed Description:
Channel and channel operations.
C'mon, you can do better than that! Your docs do not do justice to your otherwise elegant design! I'd say rework the docs from scratch with more care and attention and love :)
I feel that if we hadn't provided Doxygen documentation at all, we would have gotten higher reviews on documentation :-) The Doxygen documentation is not even a required part of the submission, so it comes as an extra... Those concepts are described better in the design guide.
- What is your evaluation of the potential usefulness of the library?
Oh man, do I have to answer this? :-)
Well, a few people like Jose and Rasmus were not quite convinced, so a straight answer would be nice, though you implied one. Thanks again for the extensive work you put into this review!

Lubomir