OK. But this transformation only occurs when a different set of coordinates is being used, right? Why should the transformation be part of the surface concept?
Maybe you are right; it's just the way the existing APIs I have seen do it. I think that is because the transformation is done in hardware.
I am thinking:
image_surface canvas(m_device, my_image);
I don't get why we pass an image; the device should be enough.
canvas.push_transform(rotates(3));
Shouldn't the rotation be done on the coordinates? Isn't this overloading the surface abstraction?
Not if the transform is done on the graphics card -- in that case the current transform is part of the card's state.
...
ASSERT(!canvas.locked());
image_surface::buffer_image buf(canvas, 0, 0, w, h);
ASSERT(canvas.locked());
Can't this just be another projection from one surface to another, where the destination surface would be an image? It should accomplish the same thing, shouldn't it?
Well, I am imagining that you want to operate on the image in memory (using GIL), in order to do something with the pixels that you can't do through the surface. I was trying to show how that might work. The LockBits/UnlockBits approach comes from a long time ago when I played with CImage or CBitmap, I think (Microsoft GDI+ IIRC, maybe .NET). Anyhow, Java also has BufferedImage. I think that the surface API should provide a way to do the same. It _might_ involve copying, or it might not. For instance, if the surface just uses software (no GPU) to render to the image passed in the constructor, buffer_image might just be a proxy of the original image (or something).
This way the canvas can provide drawing operations, and also a buffer_image type.
How about my projection suggestion?
I think that 'project' is different from what I am proposing -- project transfers from one surface to another. The buffer is supposed to be something along the lines of LockBits/UnlockBits. I was hoping that an RAII approach through buffer_image would make sure that whatever changes you made to the buffered image were copied back in.
The canvas should be 'locked' during the lifetime of the buffer_image, because most of the drawing and even display operations can happen inside the video card,
And that's usually where they stay forever. I don't see why lock something that usually nobody would even read. Unless you fear multithreading issues, but I think those should be guarded against by the user.
The surface should not be modified while the buffered image is in memory.
and the pixels have to be copied in and out before you can access them via GIL.
*If* they are ever accessed.
If they aren't, then you don't need a buffered_image. Maybe that is the case you were thinking of for 'project'?
Here is another approach: You could just try a simple scene graph:
scene_graph g;
rot3 = g.push(rotate(3*degrees()));
line = rot3.push(line(0, 0, 100, 100));
box = rot3.push(rectangle(10, 10, 20, 10));
I don't think I understand this. Can you explain it more?
Yes, below.
rgb8_image_t img;
g.flatten(view(img));
I actually think the second approach is more flexible.
I want this concept to support both simple GUI drawing in normal windows as well as in-memory drawing and hardware-accelerated graphics operations. So I think there should be a compromise between straightforward drawing to a window and more complicated transformations. Do you agree?
The second approach is to avoid using a 'surface' that keeps state about the transform etc., and instead explicitly store the transform, as well as colors etc., as part of the scene to be drawn. The idea is that you can provide a very simple scene graph (http://en.wikipedia.org/wiki/Scene_graph), which can then be 'rendered' or 'flattened' to either an rgb8_view_t, or an OpenGL rendering context, or a CDC, or an SVG file. That approach is extremely flexible, and it does not require a 'surface' with 'state' to be part of the public API. The state can be managed in a scene_graph_visitor that is responsible for the final rendering. --John