GIL Review - Integration with other graphics libraries, such as Cairo.

I'm a not-so-active contributor to the Cairo graphics library. Cairo has a lot of support in the GTK open-source community. It's not perfect, but it does most of the basic things people need. Off the top of my head, here is a summary of the most popular Cairo capabilities:
1) Support for multiple output devices, such as the X Window System, Win32, image buffers, PostScript, PDF, SVG, OpenGL, glitz, Quartz, and XCB.
2) Consistent output on all output media, taking advantage of display hardware acceleration when available.
3) Drawing operations such as:
   A) stroking and filling
   B) cubic Bézier splines
   C) transforming and compositing translucent images
   D) antialiased text rendering
   E) antialiased line rendering
   F) any affine transformation (scale, rotation, shear, etc.) applied to the drawing operations
From my perspective, the disadvantage of Cairo is that it is a typical 'C' library. While it is true that many language wrappers have been written for Cairo, at its core it will always be a 'C' library.
A graphics library that was designed from the start as a modern C++ library, with an STL-like interface, is very appealing to me, and that is what GIL offers. Unfortunately, it's probably fair to say that most graphics developers are already using their own favorite graphics library. Some of these libraries are forced upon the developer, for example if you're using the Qt or KDE frameworks, so it seems to me that easy integration with these other graphics libraries will be paramount to GIL's success. Any thoughts on how GIL will integrate with them? Tom Brinkman

Tom Brinkman wrote:
Any thoughts on how GIL will integrate with these other graphics libraries?
There are many imaging/graphics libraries out there. People have invested huge effort to provide efficient and comprehensive sets of algorithms for image processing, vector graphics, and vision. GIL's strategy is not to compete with all the other libraries (or we would fail miserably!) but to embrace them and allow for efficient integration. Betting on GIL doesn't mean you have to give up the speed of Intel's IPP library or the high-quality rendering of Anti-Grain.

The vast majority of image libraries out there support a fixed set of image representations. This means that all of the goods they provide are limited by the set of images they can target. If you want to use features of multiple such libraries, you will end up converting back and forth between image formats; you can forget about performance, and your image quality may suffer too. GIL, on the other hand, imposes no restrictions on the way the image is represented, and it comes with a set of models that cover the vast majority of common image representations out of the box. Because it works natively with the way your pixels are represented in memory, GIL can be used with all these other libraries with virtually no performance overhead.

There are degrees of GIL integration, varying in the amount of work you have to put in and the results you get:

1. Using GIL algorithms with your favorite library. This is the simplest integration. If you want to use your favorite library but want to invoke GIL algorithms on occasion, all you have to do is get pointers to the raw data (which any reasonable library should allow), create a GIL image view from them, and invoke the GIL algorithm on it. This is shown in the tutorial and the video presentation.

2. Providing a GIL interface to algorithms from another library. It would be nice to put a GIL interface on the algorithms the other library provides, so they can be called by other GIL algorithms.
Of course, such adapted algorithms will not be fully generic; they will only work with the set of image representations supported by your library. So rather than:

    template <typename View>
    void draw_line(const View& view, const point2<int>& from, const point2<int>& to);

your interface may look like:

    void draw_line(const rgb8_view_t& view, const point2<int>& from, const point2<int>& to);

(Or maybe it will take a generic view but require that it be interleaved and 8- or 16-bit, for example.) As long as you stay within the image formats supported by the other library, you will be fine.

3. Full GIL-ification of the algorithms. This integration requires the most work, but has the benefit that the algorithm becomes fully image-representation independent, i.e. a first-class GIL algorithm. It requires changing the internals of the other library's algorithm to use GIL's image view concepts. The original algorithm may be fully optimized for a specific image format; in that case you may have to create a default generic GIL equivalent of the algorithm and invoke the other library's version via specialization. An excellent example is Intel's IPP.

If you have a favorite library, we encourage you to provide a GIL extension for it!

Lubomir

On Sat, 07 Oct 2006 17:47:52 -0200, Lubomir Bourdev <lbourdev@adobe.com> wrote:
There are many imaging/graphics libraries out there. People have invested huge efforts to provide efficient and comprehensive set of algorithms to do image processing, vector graphics and vision. GIL's strategy is not to compete with all other libraries (or we would fail miserably!) but to embrace them and allow for efficient integration. Betting on GIL doesn't mean you have to give up the speed of Intel's IPP library or the high quality rendering of Anti-Grain.
Is there a comparison with Anti-Grain? IIRC it also was representation independent. Bruno

Bruno Martínez wrote:
Is there a comparison with Anti-Grain? IIRC it also was representation independent.
No, I only learned about Anti-Grain recently. To do a library justice requires studying it in depth to make a good comparison. There are many libraries out there, and they keep evolving, so doing comparisons can be difficult and time consuming. If Anti-Grain is truly representation-independent, then it should be easier to adapt its algorithms into fully generic GIL algorithms. Lubomir

From my brief viewing of GIL, I would see it replacing the back end behind Anti-Grain's rasterizer; this would allow Anti-Grain to render into any format directly.
Gordon. "Lubomir Bourdev" <lbourdev@adobe.com> wrote in message news:B55F4112A7B48C44AF51E442990015C04C9950@namail1.corp.adobe.com... Bruno Martínez wrote:
Is there a comparison with Anti-Grain? IIRC it also was representation independent.
No, I only learned about Anti-Grain recently. To do a library justice requires studying it in depth to make a good comparison. There are many libraries out there, and they keep evolving, so doing comparisons can be difficult and time consuming. If Anti-Grain is truly representation-independent, then it should be easier to adapt its algorithms into fully generic GIL algorithms. Lubomir

Is there a comparison with Anti-Grain? IIRC it also was representation independent.
Bruno
Question for Bruno (or anyone familiar with Anti-Grain):

1. How does Anti-Grain deal with planar images? Let's take the simplest possible example: fill an image with the color red. Here is possible code in GIL to do that:

    template <typename View, typename Pixel>
    void fill_pixels(const View& view, const Pixel& color) {
        typename View::pixel_t value;
        color_convert(color, value);
        for (typename View::iterator it=view.begin(); it!=view.end(); ++it)
            *it = value;
    }

Here is how to call it:

    rgb8_pixel_t red(255,0,0);

    // Channel ordering: RGBRGBRGBRGB...
    rgb8_image_t interleaved_image(100,100);
    fill_pixels(view(interleaved_image), red);

    // Channel ordering: RRR... GGG... BBB...
    rgb8_planar_image_t planar_image(100,100);
    fill_pixels(view(planar_image), red);

How would this example look with Anti-Grain?

2. How does AGG deal with images whose type is instantiated at run time? (For example, suppose you read the image from a file and want to keep it in native form.) Here is the extra GIL code to do this:

    template <typename Pixel>
    struct fill_pixels_op {
        typedef void result_type;
        Pixel _color;
        fill_pixels_op(const Pixel& color) : _color(color) {}

        template <typename View>
        void operator()(const View& view) const { fill_pixels(view, _color); }
    };

    template <typename ViewTypes, typename Pixel>
    void fill_pixels(const any_image_view<ViewTypes>& view, const Pixel& color) {
        apply_operation(view, fill_pixels_op<Pixel>(color));
    }

Here is how to call it:

    typedef mpl::vector<rgb8_view_t, rgb8_planar_view_t> my_views_t;
    any_image_view<my_views_t> runtime_view;

    runtime_view = view(interleaved_image);
    fill_pixels(runtime_view, red);

    runtime_view = view(planar_image);
    fill_pixels(runtime_view, red);

Again, my question is, how would this look in Anti-Grain? For the Vigra fans out there: how would the above two examples look in Vigra? For the Cairo fans, same question.

Thanks,
Lubomir

Lubomir Bourdev wrote:
template <typename View, typename Pixel>
void fill_pixels(const View& view, const Pixel& color) {
    typename View::pixel_t value;
    color_convert(color, value);
    for (typename View::iterator it=view.begin(); it!=view.end(); ++it)
        *it = value;
}
Here is how to call it:
rgb8_pixel_t red(255,0,0);
// Channel ordering: RGBRGBRGBRGB...
rgb8_image_t interleaved_image(100,100);
fill_pixels(view(interleaved_image), red);
// Channel ordering: RRR... GGG... BBB...
rgb8_planar_image_t planar_image(100,100);
fill_pixels(view(planar_image), red);
Hi Lubomir, sorry for asking this in the wrong thread, but: is it possible to write fill_pixels so that it produces the equivalent of three memset calls in the planar case, one per channel?

Peter Dimov wrote (Thursday, October 12, 2006):
Hi Lubomir, sorry for asking this in the wrong thread, but: is it possible to write fill_pixels so that it produces the equivalent of three memset calls in the planar case, one per channel?
Absolutely! GIL already has similar performance specializations for some STL algorithms. For example, here is a simple way to write copy_pixels:

    template <typename SrcView, typename DstView>
    void copy_pixels(const SrcView& src, const DstView& dst) {
        assert(get_dimensions(src)==get_dimensions(dst));
        typename DstView::iterator dstIt=dst.begin();
        for (typename SrcView::iterator srcIt=src.begin(); srcIt!=src.end(); ++srcIt)
            *dstIt++ = *srcIt;
    }

Here is a slightly faster version:

    template <typename SrcView, typename DstView>
    void copy_pixels(const SrcView& src, const DstView& dst) {
        for (int y=0; y<dst.height(); ++y) {
            typename SrcView::x_iterator srcIt=src.row_begin(y);
            typename DstView::x_iterator dstIt=dst.row_begin(y);
            for (int x=0; x<src.width(); ++x)
                dstIt[x]=srcIt[x];
        }
    }

(It is faster because operator++ for x_iterator does less work: it doesn't have to deal with skipping potential padding at the end of each row. In fact, x_iterator is often a raw C pointer.)

But GIL doesn't use either of these. It uses performance specializations as follows:

1. If both images are interleaved and have no padding at the end of rows, it invokes a single memmove.
2. If both images are planar and have no padding at the end of rows, it invokes a memmove for each channel.
3. If they are of the same type but have padding at the end of rows, it invokes a memmove for each row (or K memmoves per row in the case of K-channel planar images).

Only in the worst-case scenario, where the two views have different layouts or one or both are virtual, does it fall back to an explicit loop (the second version above). See the end of the video presentation for this. That allows for writing GIL algorithms that are both fully generic and down-to-the-metal efficient.

GIL provides metafunctions to query the properties of its constructs. For example, view_is_planar<View> is an MPL predicate that you can use to help write performance specializations.

Lubomir

Lubomir Bourdev wrote:
Is there a comparison with Anti-Grain? IIRC it also was representation independent.
Bruno
Question for Bruno (or anyone familiar with Anti-Grain):
IMO, Anti-Grain and GIL serve different purposes. Anti-Grain (http://www.antigrain.com/) is more about vector graphics, while GIL is about image manipulation and pixel data representations. While there is some overlap, of course, I think it's unwise to compare apples and oranges. IMO, these libraries complement each other rather than compete with each other. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Lubomir Bourdev wrote:
Is there a comparison with Anti-Grain? IIRC it also was representation independent.
Bruno
Question for Bruno (or anyone familiar with Anti-Grain):
IMO, Anti-Grain and GIL serve different purposes. Anti-Grain (http://www.antigrain.com/) is more about vector graphics, while GIL is about image manipulation and pixel data representations. While there is some overlap, of course, I think it's unwise to compare apples and oranges. IMO, these libraries complement each other rather than compete with each other.
Joel: We totally agree with you. We looked again at AGG. It is very, very nice, and in many ways complementary to GIL. However, I felt obliged to respond because people are asking for a comparison, and one reviewer already voted to reject GIL in part because "Anti-Grain is richer".

It would be really nice if you (or another AGG fan?), being familiar with AGG, looked into this and perhaps posted an example showing how GIL could work with AGG. For example, how do we draw a line directly inside a GIL view? It doesn't have to be a fully generic view. Let's start with something simple like:

    void draw_line(const rgb8_view_t& view, int x0, int y0, int x1, int y1) {
        // AGG magic here.
    }

You can get the dimensions, stride, and pointer to pixels of a GIL view. Can you make a shallow AGG rendering buffer from them and draw a line into it? The next question is: can we make draw_line more generic, and how much more generic (i.e. what kinds of images are possible)?

Thanks,
Lubomir

The next question is, can we make draw_line more generic, and how much more generic (i.e. what kinds of images are possible)
I don't know about AGG, but I think a generic draw-line approach can be implemented as an image iterator. At each step the iterator would move to the next pixel of the line. A line segment is defined by its start and end points; the iterator would calculate the slope and can then decide what the next pixel is. The underlying algorithm, like Bresenham, might be a policy of the line iterator class. I'm suggesting an iterator since one might want to do some extra work at each of the line's pixels. Think about antialiased lines. Christian

Christian Henning said: (by the date of Sat, 14 Oct 2006 21:39:41 -0400)
The next question is, can we make draw_line more generic, and how much more generic (i.e. what kinds of images are possible)
I'm suggesting an iterator since one might want to do some extra work at each of the line's pixels. Think about antialiased lines.
Hi, some time ago I wrote a small graphics library for simple use under X, and I needed to draw a line. But in the X libraries I only found drawing a line on the screen, while I wanted to draw it in memory, and I was too lazy to dig through the manuals for more. Instead I started searching for a nice line-drawing algorithm. I found some benchmarks, etc. So here I copy/paste the algorithm that, according to those benchmarks, is the fastest one:

    // screw this, I'm too lazy to dig the xlib manual to find a line
    // drawing function different than
    //   XDrawLine(display, d, gc, x1, y1, x2, y2)
    // which draws on the Display when I *need* to draw on an XImage.
    //
    // Xiaolin Wu's (public domain) algorithm
    void lix::line(int x0, int y0, int x1, int y1, int color)
    {
        // cout << "Xiaolin Wu's algorithm\n";
        int dy = y1 - y0;
        int dx = x1 - x0;
        int stepx, stepy;

        if (dy < 0) { dy = -dy; stepy = -1; } else { stepy = 1; }
        if (dx < 0) { dx = -dx; stepx = -1; } else { stepx = 1; }

        putpixel( x0, y0, color);
        putpixel( x1, y1, color);
        if (dx > dy) {
            int length = (dx - 1) >> 2;
            int extras = (dx - 1) & 3;
            int incr2 = (dy << 2) - (dx << 1);
            if (incr2 < 0) {
                int c = dy << 1;
                int incr1 = c << 1;
                int d = incr1 - dx;
                for (int i = 0; i < length; i++) {
                    x0 += stepx;
                    x1 -= stepx;
                    if (d < 0) {                                    // Pattern:
                        putpixel( x0, y0, color);                   //
                        putpixel( x0 += stepx, y0, color);          //  x o o
                        putpixel( x1, y1, color);                   //
                        putpixel( x1 -= stepx, y1, color);
                        d += incr1;
                    } else {
                        if (d < c) {                                   // Pattern:
                            putpixel( x0, y0, color);                  //      o
                            putpixel( x0 += stepx, y0 += stepy, color);//  x o
                            putpixel( x1, y1, color);                  //
                            putpixel( x1 -= stepx, y1 -= stepy, color);
                        } else {
                            putpixel( x0, y0 += stepy, color);         // Pattern:
                            putpixel( x0 += stepx, y0, color);         //    o o
                            putpixel( x1, y1 -= stepy, color);         //  x
                            putpixel( x1 -= stepx, y1, color);         //
                        }
                        d += incr2;
                    }
                }
                if (extras > 0) {
                    if (d < 0) {
                        putpixel( x0 += stepx, y0, color);
                        if (extras > 1) putpixel( x0 += stepx, y0, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1, color);
                    } else if (d < c) {
                        putpixel( x0 += stepx, y0, color);
                        if (extras > 1) putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1, color);
                    } else {
                        putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 1) putpixel( x0 += stepx, y0, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1 -= stepy, color);
                    }
                }
            } else {
                int c = (dy - dx) << 1;
                int incr1 = c << 1;
                int d = incr1 + dx;
                for (int i = 0; i < length; i++) {
                    x0 += stepx;
                    x1 -= stepx;
                    if (d > 0) {                                      // Pattern:
                        putpixel( x0, y0 += stepy, color);            //      o
                        putpixel( x0 += stepx, y0 += stepy, color);   //    o
                        putpixel( x1, y1 -= stepy, color);            //  x
                        putpixel( x1 -= stepx, y1 -= stepy, color);
                        d += incr1;
                    } else {
                        if (d < c) {                                   // Pattern:
                            putpixel( x0, y0, color);                  //      o
                            putpixel( x0 += stepx, y0 += stepy, color);//  x o
                            putpixel( x1, y1, color);                  //
                            putpixel( x1 -= stepx, y1 -= stepy, color);
                        } else {
                            putpixel( x0, y0 += stepy, color);         // Pattern:
                            putpixel( x0 += stepx, y0, color);         //    o o
                            putpixel( x1, y1 -= stepy, color);         //  x
                            putpixel( x1 -= stepx, y1, color);         //
                        }
                        d += incr2;
                    }
                }
                if (extras > 0) {
                    if (d > 0) {
                        putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 1) putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1 -= stepy, color);
                    } else if (d < c) {
                        putpixel( x0 += stepx, y0, color);
                        if (extras > 1) putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1, color);
                    } else {
                        putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 1) putpixel( x0 += stepx, y0, color);
                        if (extras > 2) {
                            if (d > c) putpixel( x1 -= stepx, y1 -= stepy, color);
                            else       putpixel( x1 -= stepx, y1, color);
                        }
                    }
                }
            }
        } else {
            int length = (dy - 1) >> 2;
            int extras = (dy - 1) & 3;
            int incr2 = (dx << 2) - (dy << 1);
            if (incr2 < 0) {
                int c = dx << 1;
                int incr1 = c << 1;
                int d = incr1 - dy;
                for (int i = 0; i < length; i++) {
                    y0 += stepy;
                    y1 -= stepy;
                    if (d < 0) {
                        putpixel( x0, y0, color);
                        putpixel( x0, y0 += stepy, color);
                        putpixel( x1, y1, color);
                        putpixel( x1, y1 -= stepy, color);
                        d += incr1;
                    } else {
                        if (d < c) {
                            putpixel( x0, y0, color);
                            putpixel( x0 += stepx, y0 += stepy, color);
                            putpixel( x1, y1, color);
                            putpixel( x1 -= stepx, y1 -= stepy, color);
                        } else {
                            putpixel( x0 += stepx, y0, color);
                            putpixel( x0, y0 += stepy, color);
                            putpixel( x1 -= stepx, y1, color);
                            putpixel( x1, y1 -= stepy, color);
                        }
                        d += incr2;
                    }
                }
                if (extras > 0) {
                    if (d < 0) {
                        putpixel( x0, y0 += stepy, color);
                        if (extras > 1) putpixel( x0, y0 += stepy, color);
                        if (extras > 2) putpixel( x1, y1 -= stepy, color);
                    } else if (d < c) {
                        putpixel( x0, y0 += stepy, color);
                        if (extras > 1) putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 2) putpixel( x1, y1 -= stepy, color);
                    } else {
                        putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 1) putpixel( x0, y0 += stepy, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1 -= stepy, color);
                    }
                }
            } else {
                int c = (dx - dy) << 1;
                int incr1 = c << 1;
                int d = incr1 + dy;
                for (int i = 0; i < length; i++) {
                    y0 += stepy;
                    y1 -= stepy;
                    if (d > 0) {
                        putpixel( x0 += stepx, y0, color);
                        putpixel( x0 += stepx, y0 += stepy, color);
                        putpixel( x1 -= stepx, y1, color);
                        putpixel( x1 -= stepx, y1 -= stepy, color);
                        d += incr1;
                    } else {
                        if (d < c) {
                            putpixel( x0, y0, color);
                            putpixel( x0 += stepx, y0 += stepy, color);
                            putpixel( x1, y1, color);
                            putpixel( x1 -= stepx, y1 -= stepy, color);
                        } else {
                            putpixel( x0 += stepx, y0, color);
                            putpixel( x0, y0 += stepy, color);
                            putpixel( x1 -= stepx, y1, color);
                            putpixel( x1, y1 -= stepy, color);
                        }
                        d += incr2;
                    }
                }
                if (extras > 0) {
                    if (d > 0) {
                        putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 1) putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 2) putpixel( x1 -= stepx, y1 -= stepy, color);
                    } else if (d < c) {
                        putpixel( x0, y0 += stepy, color);
                        if (extras > 1) putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 2) putpixel( x1, y1 -= stepy, color);
                    } else {
                        putpixel( x0 += stepx, y0 += stepy, color);
                        if (extras > 1) putpixel( x0, y0 += stepy, color);
                        if (extras > 2) {
                            if (d > c) putpixel( x1 -= stepx, y1 -= stepy, color);
                            else       putpixel( x1, y1 -= stepy, color);
                        }
                    }
                }
            }
        }
    }

-- Janek Kozicki |

Hi Janek, I was talking about how one might want to design line-drawing algorithms into GIL. I like the line iterator idea, since it gives you some flexibility for doing processing while you are stepping over the line; antialiasing is just one example. Stepping over a line does not necessarily mean stepping in single-pixel steps: you can also step over a run of consecutive pixels. Just take a nearly horizontal line; there you have long runs of pixels that can be drawn or processed in one step. Even more interesting, you can not only combine single pixels into a run of pixels, you can even combine runs of pixels into runs of runs of pixels, and so forth. I did that during my thesis. If you're interested I can send you the paper where this algorithm is described. Cool stuff. The paper's name is "Why Step When You Can Run? Iterative Line Digitization Algorithms Based on Hierarchies of Runs" by Peter Stephenson et al. On 10/15/06, Janek Kozicki <janek_listy@wp.pl> wrote:
Christian Henning said: (by the date of Sat, 14 Oct 2006 21:39:41 -0400)
The next question is, can we make draw_line more generic, and how much more generic (i.e. what kinds of images are possible)
I'm suggesting an iterator since one might to some extra work at each line's pixel. Think about antialiased lines.
Hi, some time ago I have written a small graphical library for simple use under X, I needed to draw a line. But in X libraries I only found drawing a line on the screen, while I wanted to draw it in the memory. I was too lazy to dig manuals for more. Instead I started searching for a nice line drawing algorithm. I have found some benchmarks, etc.. So Here I copy/paste the algorithm that according to those benchmarks is the fastest one..
[line-drawing code snipped; see above]

Christian Henning said: (by the date of Sun, 15 Oct 2006 13:29:50 -0400)
Even more interesting is that you can not only combine single pixels into a run of pixels, you can even combine runs of pixels into runs of runs of pixels, and so forth. I did that during my thesis. If you're interested I can send you the paper where this algorithm is described. Cool stuff.
Please send it offlist to me. I'm curious to see that :) Thanks in advance! -- Janek Kozicki |

Lubomir Bourdev wrote:
It would be really nice if you (or another AGG fan?), being familiar with AGG, look into and perhaps post an example showing how GIL could work with AGG. For example, how do we draw a line directly inside a GIL view. It doesn't have to be a fully-generic view. Let's start with something simple like:
void draw_line(const rgb8_view_t& view, int x0, int y0, int x1, int y1) { // AGG magic here. }
You can get the dimensions, stride and pointer to pixels of a GIL view. Can you make a shallow AGG rendering buffer from them and draw a line into it?
The next question is, can we make draw_line more generic, and how much more generic (i.e. what kinds of images are possible)
Hi Lubomir, I have something better than that :) I'll provide code and details in my review. For now, here's something to look at: http://spirit.sourceforge.net/dl_more/gil/lion.png Those were rendered using a GIL back-end renderer with the AGG front-end rasterizer (sub-pixel anti-aliased). GIL is cool! So is AGG, of course :-) Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

To demonstrate how GIL can be integrated with other imaging libraries, we decided to create two wrappers of imaging algorithms of another library and invoke them from GIL. We picked Vigra as a test case. Here is the full sample file: http://opensource.adobe.com/gil/vigra_integration.cpp

Synopsis: The first step is to provide metafunctions that map GIL view types to Vigra view types, as much as compatibility allows. Then provide functions that create Vigra views from GIL views:

    template <typename View> ... vigra_src_view(const View& view) {...}
    template <typename View> ... vigra_dst_view(const View& view) {...}

Then provide wrappers of Vigra algorithms:

    ////////////////////////////////////////////
    /// Vigra algorithm wrappers
    ////////////////////////////////////////////

    template <typename SrcView, typename DstView, typename GradValue, typename DestValue>
    void canny_edge(const SrcView& src_view, const DstView& dst_view,
                    double scale, GradValue gradient_threshold, DestValue edge_marker) {
        vigra::cannyEdgeImage(vigra_src_view(src_view), vigra_dst_view(dst_view),
                              scale, gradient_threshold, edge_marker);
    }

    template <typename SrcView, typename DstView>
    void gaussian_convolve_x(const SrcView& src_view, const DstView& dst_view, double std_dev) {
        vigra::Kernel1D<double> gauss;
        gauss.initGaussian(std_dev);
        separableConvolveX(vigra_src_view(src_view), vigra_dst_view(dst_view), kernel1d(gauss));
    }

Finally, this is how to invoke Vigra from GIL:

    ////////////////////////////////////////////
    /// main
    ////////////////////////////////////////////

    using namespace gil;
    typedef gray8_image_t image_t;

    int main() {
        image_t im_gray;
        jpeg_read_and_convert_image("test.jpg",im_gray);

        image_t result(get_dimensions(im_gray));
        image_t::pixel_t gray(channel_convert<image_t::view_t::channel_t>(0.5f));
        fill_pixels(view(result),gray);

        canny_edge(const_view(im_gray), view(result), 3.0, 5.0, 0);
        gaussian_convolve_x(const_view(result), view(result), 10);

        jpeg_write_view("test_out.jpg",color_converted_view<gray8_pixel_t>(view(result)));
        return 0;
    }

Things work well only for 8-bit grayscale values. 16-bit gray sort of works; perhaps the only difference is the need to use different settings. If I use RGB values, however, canny_edge does not compile, and the Gaussian convolution returns the result in grayscale. Other color spaces, channel depths, and planar images I am not sure how to set up.

Perhaps I am not using Vigra properly. Prof. Köthe, could you take a look at my source file (link above) and let me know if it looks OK?

Thanks,
Lubomir

Lubomir Bourdev wrote:
To demonstrate how GIL can be integrated with other imaging libraries, we decided to create two wrappers of imaging algorithms of another library and invoke them from GIL. We picked Vigra as a test case.
Perhaps I am not using Vigra properly. Prof. Köthe could you take a look at my source file (link above) and let me know if it looks OK?
It's going in the right direction. Probably some traits classes and utility functions (e.g. magnitude() of a coordinate) are missing. I'll look into the details later. Regards, Ulli
--
Ullrich Koethe                  Universitaet Hamburg / University of Hamburg
                                FB Informatik / Dept. of Informatics
                                AB Kognitive Systeme / Cognitive Systems Group
Phone: +49 (0)40 42883-2573     Vogt-Koelln-Str. 30
Fax:   +49 (0)40 42883-2572     D - 22527 Hamburg
Email: u.koethe@computer.org    Germany
       koethe@informatik.uni-hamburg.de
WWW:   http://kogs-www.informatik.uni-hamburg.de/~koethe/

Lubomir Bourdev wrote:
To demonstrate how GIL can be integrated with other imaging libraries, we decided to create two wrappers of imaging algorithms of another library and invoke them from GIL. We picked Vigra as a test case.
Things work well only for 8-bit grayscale values. 16-bit gray sort of works ok, perhaps the only difference is the need to use different settings. If I use RGB values, however, canny_edge does not compile, and the Gaussian convolution returns the result in grayscale. Other color spaces, channel depths and planar images I am not sure how to set up.
Perhaps I am not using Vigra properly. Prof. Köthe could you take a look at my source file (link above) and let me know if it looks OK?
I got your program to work. The 8-bit grayscale case does indeed give the correct result. In the 16-bit grayscale case, I found that the input gray values were scaled up: instead of the value 188, the input image contained 48316 == 188 * 257 (the input file was the same). The edges changed accordingly. Is this perhaps a scaling issue, a byte order problem, or a bad reinterpret_cast? It fails to compile with 8-bit RGB because cannyEdgelList() is only applicable to scalar images at the moment (the documentation says "SrcAccessor::value_type must be convertible to float"). But the convolution example works just fine. Compilation with 16-bit signed int also fails, because GIL doesn't define a conversion from gray8 to gray16s (it should). So VIGRA and GIL seem to be fairly interoperable. You should have contributed to VIGRA instead of starting your own library ;-) Regards, Ulli
--
Ullrich Koethe
participants (9)
- Bruno Martínez
- Christian Henning
- Gordon Smith
- Janek Kozicki
- Joel de Guzman
- Lubomir Bourdev
- Peter Dimov
- Tom Brinkman
- Ullrich Koethe