On Thu, Jul 10, 2008 at 4:14 PM, John Femiani wrote:
[snip]
wnd<> w = create<>( _title = "Titulo" );
I would not make wnd<> part of the lib.
wnd<> is part of the GUI library. It represents a smart pointer to a window.
surface_create_line(*w, w->origin(), w->origin() + gui::make_point(10, 10));
I guess a coordinate transform would only be used when projecting one surface onto another, right? For example, when drawing an image.
Whenever a point is drawn. I think OpenGL gives a good example: the surface should probably keep a transform operation and internally multiply each point by it. Have you researched any other APIs to see how they do it? I just know OpenGL, Java2D, and Windows device contexts.
I have worked with OpenGL, device contexts, and Direct3D. But I was thinking the user would usually use the surface's coordinates directly, so no transformation would be needed for basic drawing. After all, I want this to be as easy as possible for simple use cases.
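The OpenGL-style idea above — a surface that keeps a transform and multiplies every point through it internally — could be sketched roughly like this. All names here (point, mat3, basic_surface) are illustrative assumptions, not the proposed library's API:

```cpp
#include <array>
#include <cassert>
#include <vector>

// A 3x3 affine transform in homogeneous coordinates, much like
// OpenGL's modelview matrix.
struct point { double x, y; };

using mat3 = std::array<std::array<double, 3>, 3>;

mat3 identity()
{
    mat3 m{};
    m[0][0] = m[1][1] = m[2][2] = 1.0;
    return m;
}

mat3 translation(double tx, double ty)
{
    mat3 m = identity();
    m[0][2] = tx;
    m[1][2] = ty;
    return m;
}

point apply(const mat3& m, point p)
{
    return { m[0][0] * p.x + m[0][1] * p.y + m[0][2],
             m[1][0] * p.x + m[1][1] * p.y + m[1][2] };
}

class basic_surface {
    mat3 transform_ = identity();
    std::vector<point> plotted_;   // stands in for real rasterization
public:
    void set_transform(const mat3& m) { transform_ = m; }
    // Every drawing primitive funnels its points through the transform,
    // so the user keeps drawing in surface coordinates.
    void plot(point p) { plotted_.push_back(apply(transform_, p)); }
    const std::vector<point>& plotted() const { return plotted_; }
};
```

With an identity transform this degenerates to direct surface coordinates, which matches the simple use case above; only users who push a non-trivial transform pay for the concept.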
struct my_coordinate_transform { gui::point operator()(surface::pixel_point p, std::size_t pos) const; };
surface_project(*w, image_surface, my_coordinate_transform());
What do you think?
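A rough sketch of that surface_project suggestion, where a user-supplied functor maps each destination pixel to a source position (all types and names here are hypothetical, not the proposed API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct pixel_point { std::size_t x, y; };

// A toy in-memory surface: one int per pixel, row-major.
struct image_surface {
    std::size_t width, height;
    std::vector<int> pixels;
    image_surface(std::size_t w, std::size_t h)
        : width(w), height(h), pixels(w * h) {}
    int& at(std::size_t x, std::size_t y) { return pixels[y * width + x]; }
};

// For each destination pixel, ask the transform which source pixel to
// sample; out-of-range results are simply skipped.
template <class Transform>
void project(image_surface& dst, const image_surface& src, Transform t)
{
    for (std::size_t y = 0; y < dst.height; ++y)
        for (std::size_t x = 0; x < dst.width; ++x) {
            pixel_point p = t(pixel_point{x, y});
            if (p.x < src.width && p.y < src.height)
                dst.at(x, y) = src.pixels[p.y * src.width + p.x];
        }
}
```

A lambda works as the transform just as well as a named functor, e.g. `project(dst, src, [](pixel_point p) { return p; });` for a plain copy.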
That's right, but the linear and projective transforms formed by 2x2, 3x3, or 4x4 matrices especially need to be handled, since they are the most common affine transforms and usually have some hardware support.
Ok. But this transformation only occurs when different sets of coordinates are being used, right? Why should the transformation be part of the surface concept?
I am thinking:
image_surface canvas(m_device, my_image);
I don't get why an image is passed; the device should be enough.
canvas.push_transform(rotates(3));
Shouldn't the rotation be done on the coordinates? Isn't this overloading the surface abstraction?
...
ASSERT(!canvas.locked());
image_surface::buffer_image buf(canvas, 0, 0, w, h);
ASSERT(canvas.locked());
Can't this just be another projection from one surface to another, where the destination surface is an image? It would accomplish the same thing, wouldn't it?
This way the canvas can provide drawing operations, and also a buffer_image type.
How about my projection suggestion?
The canvas should be 'locked' during the lifetime of the buffer_image, because most of the drawing and even display operations can happen inside the video card,
And that's usually where they stay forever. I don't see why we should lock when nobody would usually even read it. Unless you fear multithreading issues, but I think those should be guarded against by the user.
and the pixels have to be copied in and out before you can access them via GIL.
*If* they are ever accessed.
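The locking scheme being debated here is essentially an RAII handle: the buffer_image locks the canvas for as long as the CPU-side view exists, copying pixels in at construction and back out at destruction. A minimal sketch, with all type names assumed rather than taken from the proposal:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class canvas {
    bool locked_ = false;
    friend class buffer_image;
public:
    bool locked() const { return locked_; }
};

class buffer_image {
    canvas& c_;
    std::vector<unsigned char> pixels_;  // CPU-side copy of the region
public:
    buffer_image(canvas& c, std::size_t w, std::size_t h)
        : c_(c), pixels_(w * h)
    {
        assert(!c_.locked_);   // no nested locks in this sketch
        c_.locked_ = true;     // a real implementation would copy the
                               // pixels out of the video card here
    }
    ~buffer_image()
    {
        c_.locked_ = false;    // ...and copy them back in here
    }
    unsigned char* data() { return pixels_.data(); }
};
```

The point of the lock is correctness of that copy-in/copy-out pair, not thread safety: while the buffer_image is alive, device-side drawing on the canvas would be silently overwritten by the copy-back.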
Here is another approach: You could just try a simple scene graph:
scene_graph g;
rot3 = g.push(rotate(3*degrees()));
line = rot3.push(line(0, 0, 100, 100));
box = rot3.push(rectangle(10, 10, 20, 10));
I don't think I understand this. Can you explain it more?
rgb8_image_t img;
g.flatten(view(img));
I actually think the second approach is more flexible.
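The scene-graph snippet above could be sketched as a tree of nodes where each node carries either a transform or a primitive, and flatten() walks the tree composing ancestor transforms before emitting geometry. Everything here (node, push_rotate, push_line, flatten) is an assumed illustration of the idea, not a real library:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <memory>
#include <vector>

struct pt { double x, y; };

struct node {
    double angle_deg = 0;                          // rotation applied to this subtree
    std::vector<pt> points;                        // primitive data (e.g. line endpoints)
    std::vector<std::unique_ptr<node>> children;

    node& push_rotate(double deg) {
        children.push_back(std::make_unique<node>());
        children.back()->angle_deg = deg;
        return *children.back();
    }
    node& push_line(pt a, pt b) {
        children.push_back(std::make_unique<node>());
        children.back()->points = {a, b};
        return *children.back();
    }
    // Emit every point with all accumulated rotations applied; a real
    // flatten() would rasterize into an image view instead.
    void flatten(const std::function<void(pt)>& emit, double acc_deg = 0) const {
        double a = (acc_deg + angle_deg) * 3.14159265358979 / 180.0;
        for (pt p : points)
            emit({p.x * std::cos(a) - p.y * std::sin(a),
                  p.x * std::sin(a) + p.y * std::cos(a)});
        for (const auto& c : children)
            c->flatten(emit, acc_deg + angle_deg);
    }
};
```

Flexibility here comes from deferral: nothing is drawn until flatten() runs, so the same graph can target a window, an in-memory image, or the hardware, which is exactly the trade-off discussed below.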
I want to support both simple GUI drawing in normal windows and in-memory drawing and hardware-accelerated graphics operations with this concept. So I think there should be a compromise between straightforward drawing to a window and more complicated transformations. Do you agree?

--
Felipe Magno de Almeida