[rfc] GIL, draw operations, GUI [and maybe AGG?]
Hi,

I'm creating a GUI lib and am designing the basic drawing operations. I want the interface to be implementable as thin layers over most UIs (win32/gtk+/qt/etc). Most of all, I wanted good interoperability with GIL. I found that it was actually very inconvenient to define windows as GIL views: a win32 window, for example, can't model a view without sacrificing reliability or efficiency, and it also fixes the operations to be manipulations of pixels.

That definition of drawing operations lacks the flexibility to write generic code that works with different units (not only pixels). Supporting other units would allow a scalable GUI implementation where the units could be defined in millimeters and the drawing operations done with advanced anti-aliasing, for example, without compromising straightforward implementations.

So these drawing operations would actually define a concept. These operations could then be defined for GIL views, GUI windows and maybe more :).

Do you think this could actually be useful?

Comments are welcome, -- Felipe Magno de Almeida
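A rough sketch of what such a drawing-operations concept might look like (every name here -- point, draw_line, raster_surface -- is invented for illustration and is not part of GIL or any existing GUI API):

#include <algorithm>
#include <cstdlib>
#include <cstddef>
#include <vector>

// Hypothetical point type; a real library would parameterize the unit
// (pixels, millimeters, ...) instead of fixing int.
struct point { int x, y; };

// Generic algorithm: anything that provides set_pixel(point) can be drawn on.
// A GIL-view-backed surface, a win32 window wrapper or an anti-aliasing
// renderer would each supply its own implementation or overload instead.
template <class Surface>
void draw_line(Surface& s, point a, point b)
{
    // naive DDA line, integer arithmetic only
    int steps = std::max(std::abs(b.x - a.x), std::abs(b.y - a.y));
    int div = std::max(steps, 1);
    for (int i = 0; i <= steps; ++i)
        s.set_pixel(point{ a.x + (b.x - a.x) * i / div,
                           a.y + (b.y - a.y) * i / div });
}

// Toy in-memory surface standing in for "drawing into a GIL view".
struct raster_surface
{
    std::size_t width, height;
    std::vector<unsigned char> pixels;   // one byte per pixel, 0 = background

    raster_surface(std::size_t w, std::size_t h)
        : width(w), height(h), pixels(w * h, 0) {}

    void set_pixel(point p)
    {
        pixels[static_cast<std::size_t>(p.y) * width + p.x] = 255;
    }
};

int main()
{
    raster_surface window(64, 64);
    draw_line(window, point{0, 0}, point{10, 10});
}

The only point of the sketch is that the same draw_line call could be backed by a GIL view, a native window, or an advanced anti-aliased renderer, depending on which surface type is passed in.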
Hi Felipe, at one point I have created an extension for gil for
displaying a window using SDL ( Simple DirectMedia Layer ) . What I
like about SDL is that it's portable and also very fast. As far as I
remember one can only open one window at the time. This might have
changed with a newer version, but I don't know for sure.
In conjunction with my most recent extension ( opencv ) it's very
fairly easy to draw primitives inside a gil view and than display it
on the monitor. Although I must admit I haven't tried it out yet. ;-(
Anyway, if you're interested have a look at my subversion repository at:
http://gil-contributions.googlecode.com/svn/trunk/gil_2
Let me know what you think.
Christian
Felipe Magno de Almeida wrote: ...
So these drawing operations would actually define a concept. These operations could then be defined for GIL views, GUI windows and maybe more :).
Do you think this could actually be useful?
I think I would like a 'surface' concept; I don't think a GIL view is enough to satisfy it, though. I think you will need to be able to associate some kind of context with the surface, right? A current coordinate-transform, for instance. --John
On Wed, Jul 9, 2008 at 6:42 PM, John Femiani
Felipe Magno de Almeida wrote:
[snip]
I think I would like a 'surface' concept,
I think that's what I want too.
I don't think a GIL view is enough to satisfy it though.
Yes, it probably isn't.
I think you will need to be able to associate some kind of context with the surface right? A current coordinate-transform for instance.
I think we could do something like this:

using namespace surfaces;
using namespace gui;
using namespace gil;

no_anti_aliasing_surface image_surface(interleaved_view(w, h, pointer_to_raw));
wnd<> w = create<>( _title = "Titulo" );
surface_create_line(*w, w->origin(), w->origin() + gui::make_point(10, 10));
Regards, -- Felipe Magno de Almeida
Felipe wrote:
I think you will need to be able to associate some kind of context with the surface right? A current coordinate-transform for instance.
I think we could do something like this:
using namespace surfaces;
using namespace gui;
using namespace gil;

no_anti_aliasing_surface image_surface(interleaved_view(w, h, pointer_to_raw));
wnd<> w = create<>( _title = "Titulo" );
I would not make wnd<> part of the lib.
surface_create_line(*w, w->origin(), w->origin() + gui::make_point(10, 10));
I guess a coordinate-transform would only be used when projecting one surface onto another, right? When drawing an image, for example.

Whenever a point is drawn. I think OpenGL gives a good example; the surface should probably keep a transform operation and internally multiply each point by it. Have you researched any other APIs to see how they do it? I just know OpenGL, Java2D, and Windows Device Contexts.

struct my_coordinate_transform
{
    gui::point operator()(surface::pixel_point<std::size_t> pos) const;
};
surface_project(*w, image_surface, my_coordinate_transform());
What do you think?
That's right, but especially the linear or projective transforms formed by 2x2, 3x3, or 4x4 matrices need to be handled, since they are the most common affine transforms and they usually have some hardware support. I am thinking:

image_surface canvas(m_device, my_image);
canvas.push_transform(rotates(3));
...
ASSERT(!canvas.locked());
image_surface::buffer_image buf(canvas, 0, 0, w, h);
ASSERT(canvas.locked());

This way the canvas can provide drawing operations, and also a buffer_image type. The canvas should be 'locked' during the lifetime of the buffer_image, because most of the drawing and even the display operations can happen inside the video card, and the pixels have to be copied in and out before you can access them via GIL.

Here is another approach: you could just try a simple scene graph:

scene_graph g;
rot3 = g.push(rotate(3*degrees()));
line = rot3.push(line(0, 0, 100, 100));
box = rot3.push(rectangle(10, 10, 20, 10));

rgb8_image_t img;
g.flatten(view(img));

I actually think the second approach is more flexible.
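As a concrete (and purely illustrative) reading of the "transform as surface state" part -- mat3, transformed_surface and set_transform are invented names, not an existing API -- it could look roughly like this:

#include <array>

// Minimal row-major 3x3 matrix for 2D homogeneous coordinates.
struct mat3
{
    std::array<double, 9> m;

    static mat3 identity()
    {
        mat3 r;
        r.m = {1, 0, 0,  0, 1, 0,  0, 0, 1};
        return r;
    }
};

struct pointd { double x, y; };

// Apply a (possibly projective) transform to a point.
inline pointd apply(const mat3& t, pointd p)
{
    double w = t.m[6] * p.x + t.m[7] * p.y + t.m[8];
    return { (t.m[0] * p.x + t.m[1] * p.y + t.m[2]) / w,
             (t.m[3] * p.x + t.m[4] * p.y + t.m[5]) / w };
}

// Adaptor that keeps the current transform as part of the surface's state
// (the way OpenGL or a device context does) and applies it to every point
// before delegating to the wrapped surface.
template <class Surface>
class transformed_surface
{
    Surface& base_;
    mat3 current_ = mat3::identity();
public:
    explicit transformed_surface(Surface& base) : base_(base) {}

    void set_transform(const mat3& t) { current_ = t; }

    void set_pixel(pointd p) { base_.set_pixel(apply(current_, p)); }
};

Whether the transform belongs inside the surface like this, or only in an explicit projection step, is exactly the trade-off being discussed.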
On Thu, Jul 10, 2008 at 4:14 PM, John Femiani
Felipe wrote:
[snip]
wnd<> w = create<>( _title = "Titulo" );
I would not make wnd<> part of the lib.
wnd<> is part of the GUI library. It represents a smart pointer to a window.
surface_create_line(*w, w->origin(), w->origin() + gui::make_point(10, 10));
I guess a coordinate-transform would only be used when projecting one surface onto another, right? When drawing an image, for example.

Whenever a point is drawn. I think OpenGL gives a good example; the surface should probably keep a transform operation and internally multiply each point by it. Have you researched any other APIs to see how they do it? I just know OpenGL, Java2D, and Windows Device Contexts.

I have worked with OpenGL, Device Contexts and Direct3D. But I was thinking the user would usually use the surface's coordinates directly, so no transformation would be needed when doing basic drawing. After all, I want this to be as easy as possible for simple use cases.

struct my_coordinate_transform
{
    gui::point operator()(surface::pixel_point<std::size_t> pos) const;
};
surface_project(*w, image_surface, my_coordinate_transform());
What do you think?
That's right, but especially the linear or projective transforms formed by 2x2, 3x3, or 4x4 matrices need to be handled, since they are most common for affine transforms and they usually have some hardware support.
Ok. But this transformation only occurs when different sets of coordinates are being used, right? Why should the transformation be part of the surface concept?
I am thinking:
image_surface canvas(m_device, my_image);
I don't get why pass an image; the device should be enough.
canvas.push_transform(rotates(3));
Shouldn't the rotation be done on the coordinates? Isn't this overloading the surface abstraction?

...
ASSERT(!canvas.locked());
image_surface::buffer_image buf(canvas, 0, 0, w, h);
ASSERT(canvas.locked());

Can't this just be another projection from one surface to another, where the destination surface would be an image? It would accomplish the same thing, wouldn't it?
This way the canvas can provide drawing operations, and also a buffer_image type.
How about my projection suggestion?
The canvas should be 'locked' during the lifetime of the buffer_image, because most of the drawing and even the display operations can happen inside the video card,

And that's usually where they stay forever. I don't see why lock it when usually nobody would even read it. Unless you fear multithreading issues, but I think those should be handled by the user.

and the pixels have to be copied in and out before you can access them via GIL.
*If* they are ever accessed.
Here is another approach: You could just try a simple scene graph:
scene_graph g;
rot3 = g.push(rotate(3*degrees()));
line = rot3.push(line(0, 0, 100, 100));
box = rot3.push(rectangle(10, 10, 20, 10));
I don't think I understand this. Can you explain it more?
rgb8_image_t img;
g.flatten(view(img));
I actually think the second approach is more flexible.
I want this concept to support simple GUI drawing in normal windows as well as in-memory drawing and hardware-accelerated graphics operations. So I think there should be a compromise between straightforward drawing to a window and more complicated transformations. Do you agree? -- Felipe Magno de Almeida
Ok. But this transformation only occurs when different sets of coordinates are being used, right? Why should the transformation be part of the surface concept?

Maybe you are right; it's just that that's the way the existing APIs I have seen do it. I think that is because the transformation is done in hardware.
I am thinking:
image_surface canvas(m_device, my_image);
I don't get why pass an image; the device should be enough.
canvas.push_transform(rotates(3));
Shouldn't the rotation be done on the coordinates? Isn't this overloading the surface abstraction?
Not if the transform is done on the graphics card -- in that case the current transform is part of the card's state.
...
ASSERT(!canvas.locked());
image_surface::buffer_image buf(canvas, 0, 0, w, h);
ASSERT(canvas.locked());

Can't this just be another projection from one surface to another, where the destination surface would be an image? It would accomplish the same thing, wouldn't it?
Well, I am imagining that you want to operate on the image in memory (using gil), in order to do something to the pixels that you can't do through the surface. I was trying to show how that might work; the 'LockBits'/'UnLockBits' approach comes from a long time ago when I played with CImage or CBitmap, I think (Microsoft gdiplus IIRC, maybe .NET). Anyhow, Java also has 'BufferedImage'. I think that the surface API should provide a way to do the same. It _might_ involve copying, or it might not. For instance, if the surface just uses software (no GPU) to render to the image passed in the constructor, buffered_image might just be a proxy of the original image (or something).
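A sketch of that RAII locking idea, reduced to its bare bones (the class and member names are made up for illustration, the "device" pixels are just a vector, and there are no region arguments or GIL glue):

#include <cassert>
#include <cstdint>
#include <vector>

// Toy surface whose pixels notionally live somewhere expensive to touch
// (e.g. video memory), here simulated with a plain vector.
class image_surface
{
    std::vector<std::uint8_t> device_pixels_;
    bool locked_ = false;
public:
    explicit image_surface(std::size_t size) : device_pixels_(size, 0) {}
    bool locked() const { return locked_; }

    // RAII buffer: locks the surface, exposes the pixels as ordinary memory
    // (via a copy here), and writes them back when it goes out of scope.
    class buffer_image
    {
        image_surface& s_;
        std::vector<std::uint8_t> copy_;
    public:
        explicit buffer_image(image_surface& s) : s_(s), copy_(s.device_pixels_)
        {
            assert(!s_.locked_);
            s_.locked_ = true;           // no drawing while buffered
        }
        ~buffer_image()
        {
            s_.device_pixels_ = copy_;   // copy the edited pixels back in
            s_.locked_ = false;
        }
        std::uint8_t* data() { return copy_.data(); }
        std::size_t size() const { return copy_.size(); }
    };
};

int main()
{
    image_surface canvas(64 * 64);
    assert(!canvas.locked());
    {
        image_surface::buffer_image buf(canvas);
        assert(canvas.locked());
        buf.data()[0] = 255;             // e.g. manipulate through a GIL view here
    }
    assert(!canvas.locked());
}

If the surface renders in software anyway, buffer_image could hand out a view of the original storage instead of a copy, which is the "proxy" case mentioned above.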
This way the canvas can provide drawing operations, and also a buffer_image type.
How about my projection suggestion?
I think that 'project' is different from what I am proposing - project transfers from one surface to another. The buffer is supposed to be something along the lines of LockBits/UnlockBits. I was hoping that an RAII approach through 'buffered_image' would make sure that whatever changes you made to the buffered image were copied back in.

The canvas should be 'locked' during the lifetime of the buffer_image, because most of the drawing and even the display operations can happen inside the video card,

And that's usually where they stay forever. I don't see why lock it when usually nobody would even read it. Unless you fear multithreading issues, but I think those should be handled by the user.
The surface should not be modified while the buffered image is in memory.
and the pixels have to be copied in and out before you can access them via GIL.
*If* they are ever accessed.
If they aren't, then you don't need a buffered_image. Maybe that is the case you were thinking of for 'project'?
Here is another approach: You could just try a simple scene graph:
scene_graph g;
rot3 = g.push(rotate(3*degrees()));
line = rot3.push(line(0, 0, 100, 100));
box = rot3.push(rectangle(10, 10, 20, 10));
I don't think I understand this. Can you explain it more?
Yes, below.
rgb8_image_t img;
g.flatten(view(img));
I actually think the second approach is more flexible.
I want this concept to support simple GUI drawing in normal windows as well as in-memory drawing and hardware-accelerated graphics operations. So I think there should be a compromise between straightforward drawing to a window and more complicated transformations. Do you agree?

The second approach is to avoid using a 'surface' that keeps state about the transform, etc., and instead explicitly store the transform, as well as colors, etc., as part of the scene to be drawn. The idea is that you can provide a very simple scene graph (http://en.wikipedia.org/wiki/Scene_graph), which can then be 'rendered' or 'flattened' to either an rgb_view_t, an OpenGL rendering context, a CDC, or an SVG file. That approach is extremely flexible, and it does not require a 'surface' with 'state' to be a part of the public API. The state can be managed in a scene_graph_visitor that is responsible for the final rendering. --John
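A toy version of that scene-graph idea, shrunk to a translation-only "transform" so it stays short (all type names are invented for illustration; a real design would carry full transforms, colors, and more node types):

#include <cstdio>
#include <memory>
#include <vector>

// The back end the scene is flattened to. A GIL-view rasterizer, an OpenGL
// context, a DC or an SVG writer would each be one of these.
struct renderer
{
    virtual void line(double x0, double y0, double x1, double y1) = 0;
    virtual ~renderer() = default;
};

// Scene-graph nodes: flattening walks the tree, accumulating the (here
// translation-only) transform, and hands each primitive to the renderer.
struct node
{
    virtual void flatten(renderer& r, double dx, double dy) const = 0;
    virtual ~node() = default;
};

struct line_node : node
{
    double x0, y0, x1, y1;
    line_node(double a, double b, double c, double d) : x0(a), y0(b), x1(c), y1(d) {}
    void flatten(renderer& r, double dx, double dy) const override
    {
        r.line(x0 + dx, y0 + dy, x1 + dx, y1 + dy);
    }
};

struct group_node : node
{
    double dx = 0, dy = 0;                        // this group's own offset
    std::vector<std::unique_ptr<node>> children;

    node& push(std::unique_ptr<node> n)
    {
        children.push_back(std::move(n));
        return *children.back();
    }
    void flatten(renderer& r, double px, double py) const override
    {
        for (const auto& c : children)
            c->flatten(r, px + dx, py + dy);
    }
};

// Trivial back end that just prints what it is asked to draw.
struct printing_renderer : renderer
{
    void line(double x0, double y0, double x1, double y1) override
    {
        std::printf("line (%g,%g)-(%g,%g)\n", x0, y0, x1, y1);
    }
};

int main()
{
    group_node scene;
    auto& shifted = static_cast<group_node&>(scene.push(std::make_unique<group_node>()));
    shifted.dx = 10;
    shifted.dy = 10;
    shifted.push(std::make_unique<line_node>(0, 0, 100, 100));

    printing_renderer r;
    scene.flatten(r, 0, 0);   // prints: line (10,10)-(110,110)
}

Flattening into a GIL view would then just be one more renderer, and the per-node state replaces the "surface with state" in the public API.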
On Thu, Jul 10, 2008 at 5:34 PM, John Femiani
Ok. But this transformation only occurs when different sets of coordinates are being used, right? Why should the transformation be part of the surface concept?

Maybe you are right; it's just that that's the way the existing APIs I have seen do it. I think that is because the transformation is done in hardware.
I don't want to prohibit transformations made by hardware. But I also want a straightforward surface concept. Do you think it is possible to create one that has both?
I am thinking:
image_surface canvas(m_device, my_image);
I don't get why pass an image; the device should be enough.
canvas.push_transform(rotates(3));
Shouldn't the rotation be done on the coordinates? Isn't this overloading the surface abstraction?
Not if the transform is done on the graphics card -- in that case the current transform is part of the card's state.
...
ASSERT(!canvas.locked());
image_surface::buffer_image buf(canvas, 0, 0, w, h);
ASSERT(canvas.locked());

Can't this just be another projection from one surface to another, where the destination surface would be an image? It would accomplish the same thing, wouldn't it?

Well, I am imagining that you want to operate on the image in memory (using gil), in order to do something to the pixels that you can't do through the surface.
Got it. I thought you just wanted to read it.
I was trying to show how that might work; the 'LockBits'/'UnLockBits' approach comes from a long time ago when I played with CImage or CBitmap, I think (Microsoft gdiplus IIRC, maybe .NET). Anyhow, Java also has 'BufferedImage'. I think that the surface API should provide a way to do the same. It _might_ involve copying, or it might not.

Can't we just project it to a GIL surface, which also models the Image View concept? And if the current surface you want to use gives access to its pixels, then it can also model the Image View concept. I don't think we should overload the concept with access to optional features. What do you think?

For instance, if the surface just uses software (no GPU) to render to the image passed in the constructor, buffered_image might just be a proxy of the original image (or something).

I see. I don't have strong opinions, but that seems to make it more difficult than necessary to implement the surface concept. We can make the project operation very fast instead.
This way the canvas can provide drawing operations, and also a buffer_image type.
How about my projection suggestion?
I think that 'project' is different from what I am proposing - project transfers from one surface to another. The buffer is supposed to be something along the lines of LockBits/UnlockBits. I was hoping that an RAII approach through 'buffered_image' would make sure that whatever changes you made to the buffered image were copied back in.

It can be done simply by also modeling the Image View concept. One would need to know the type before trying it, or use SFINAE in client code to write really generic code. But it makes the surface concept much simpler and more coherent IMO. [snip]
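Roughly what that SFINAE dispatch could look like on the client side (pixel_at is a made-up stand-in for "also gives access to its pixels", not GIL's actual Image View interface):

#include <type_traits>
#include <utility>

// Detect whether a surface additionally exposes direct pixel access,
// standing in for "also models the Image View concept".
template <class S, class = void>
struct has_pixel_access : std::false_type {};

template <class S>
struct has_pixel_access<S,
    std::void_t<decltype(std::declval<S&>().pixel_at(0, 0))>>
    : std::true_type {};

// Generic client code: use direct pixel access when the surface offers it,
// otherwise fall back to projecting onto an in-memory surface.
template <class Surface>
void tweak_pixels(Surface& s)
{
    if constexpr (has_pixel_access<Surface>::value)
    {
        s.pixel_at(0, 0) = 255;   // fast path: touch the pixels directly
    }
    else
    {
        // fallback: project to an image surface, edit, project back
        // (the surface_project operation discussed above; omitted here)
        (void)s;
    }
}

// Two toy surfaces to exercise both branches.
struct pixel_surface
{
    unsigned char px[16][16] = {};
    unsigned char& pixel_at(int x, int y) { return px[y][x]; }
};

struct opaque_surface {};     // no direct pixel access

int main()
{
    pixel_surface a;
    opaque_surface b;
    tweak_pixels(a);   // takes the direct-access branch
    tweak_pixels(b);   // takes the projection fallback
}

With something like this, client code stays generic over surfaces that do and don't model the Image View concept.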
If they aren't, then you don't need a buffered_image. Maybe that is the case you were thinking of for 'project'?
Yes.
Here is another approach: You could just try a simple scene graph:
scene_graph g;
rot3 = g.push(rotate(3*degrees()));
line = rot3.push(line(0, 0, 100, 100));
box = rot3.push(rectangle(10, 10, 20, 10));
[snip]
The second approach is to avoid using a 'surface' that keeps state about the transform, etc., and instead explicitly store the transform, as well as colors, etc., as part of the scene to be drawn. The idea is that you can provide a very simple scene graph (http://en.wikipedia.org/wiki/Scene_graph), which can then be 'rendered' or 'flattened' to either an rgb_view_t, an OpenGL rendering context, a CDC, or an SVG file.

Got it now. Can't it be done with the normal surface concept? Instead of the surface rendering directly, it would just stack the operations and have something else render them.
That approach is extremely flexible, and it does not require a 'surface' with 'state' to be a part of the public API. The state can be managed in a scene_graph_visitor that is responsible for the final rendering.
I believe that can be done with the surface concept I have in mind, using the same syntax as for everything else. Any ideas?
--John
Regards, -- Felipe Magno de Almeida
participants (3):
- Christian Henning
- Felipe Magno de Almeida
- John Femiani