
* Dave Harris <brangdon@cix.compulink.co.uk> [2004-12-28 09:21]:
> In-Reply-To: <20041224050911.GA9037@maribor.izzy.net> alan-boost@engrm.com (Alan Gutierrez) wrote (abridged):
> > I'm a bit concerned about the scope of this Boost project. Our drawing framework is pretty huge.
> This is an important discussion to have sooner rather than later.
> 1) Is a GUI project a bad idea?
> 2) Is Boost the right place for a GUI project?
Supposing "yes" to both, that still leaves the question of scope. Possibly a "GUI" could just mean a high-level windowing system, and not a graphics drawing system. There is a lot to do just managing a tree of rectangles and dispatching events to and through them. The windowing system should arguably not care about the contents of the rectangles.
Supposing "no" to 1, and "yes" for 2, you mean? In another thread, I'm trying to drive home the point that the graphics needs are different for different platforms, indeed for different client area within the same program. An application might need a form bindery, another might be able to get by with only axis-aligned boxes, and another might require full compliment of vector graphics abstractions. I'm also trying to make the point that there is more to rendering than widgets on the one hand, and vector graphics on the other. I belive the scope is large, but if decomposed correctly, far more managable than what is currently available.
> It seems to me that adding a graphics system, with pens and brushes and polygons and text and bitmaps and wotnot, makes the project much bigger. Which isn't to say we shouldn't do it, and do all of it, but perhaps it is worth dividing it up into chunks or layers that are more or less independent. If that is possible.
I agree completely! :^) Whew! In my thread on a GUI taxonomy, I came to see that it was really a rendering taxonomy. A dialog is one way to render and hit test: a tree of rectangles, each containing a "component". A grid is similar, but adds scrolling and selection. Both of these components can leave "visibility testing" to the windowing system and focus on routing events. Rendering is clipped by rectangular view ports, so the overflow strategy is always clip or scroll, never reflow. A document reflows according to publishing convention. It might have a z-axis, so it might require visibility testing. A canvas is a collage of geometric shapes. It certainly requires its own visibility testing.

If I were to create a calendar for Palm OS, I might want to use forms and grids to arrange my information. I'd need axis-aligned boxes for rendering, but no polygons and no visibility testing. If I were to create an ER diagramming tool, I'd like to be able to draw on a vector graphics library that would let me compose shapes and handle the visibility testing. I do not see how one is based on the other, and those libraries that model themselves as a hierarchy are destined to bloat.

I think a Canvas is an abstraction that can be compiled out of many GUI applications. It is, to me, a rendering strategy that draws on a Surface, which is a software abstraction of a Device. A Surface class might provide line drawing, or it might provide polylines; there could be a few classes of Surfaces, just like there are classes of Windows (modal, alert, SDI, MDI, etc.). Again, a very robust Palm OS application can bind to Palm OS form resources and wouldn't require a Surface abstraction at all.
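
To make the Canvas/Surface/Device layering above concrete, here is a minimal sketch under the assumptions stated in that paragraph: a Surface is a small drawing contract over a Device, there can be several classes of Surface, and a Canvas is just one rendering strategy that draws through whichever Surface it is handed. Every name is hypothetical.

    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    struct point { int x, y; };

    // A Surface is a software abstraction of a Device: the narrowest
    // drawing contract a rendering strategy can depend on.
    class surface
    {
    public:
        virtual ~surface() {}
        virtual void line(point from, point to) = 0;
    };

    // A richer class of surface. A forms-only build need never link it.
    class polyline_surface : public surface
    {
    public:
        virtual void polyline(const std::vector<point>& pts)
        {
            // Default implementation decomposes into line segments.
            for (std::size_t i = 1; i < pts.size(); ++i)
                line(pts[i - 1], pts[i]);
        }
    };

    // A stand-in device that just logs calls.
    class trace_surface : public polyline_surface
    {
    public:
        virtual void line(point a, point b)
        {
            std::cout << "line (" << a.x << ',' << a.y << ") -> ("
                      << b.x << ',' << b.y << ")\n";
        }
    };

    // A Canvas rendering strategy: it owns shapes, does its own visibility
    // testing (elided here), and draws through whatever surface it is given.
    class canvas
    {
    public:
        void add_segment(point a, point b)
        {
            segments_.push_back(std::make_pair(a, b));
        }
        void render(surface& s) const
        {
            for (std::size_t i = 0; i < segments_.size(); ++i)
                s.line(segments_[i].first, segments_[i].second);
        }
    private:
        std::vector<std::pair<point, point> > segments_;
    };

    int main()
    {
        trace_surface device;
        canvas c;
        point a = { 0, 0 };
        point b = { 10, 5 };
        c.add_segment(a, b);
        c.render(device);   // the canvas never knows which class of surface it got
    }

An application that binds directly to Palm OS form resources would never instantiate a canvas or any surface, so, as suggested above, the whole layer compiles out.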
> > I believe the time is ripe for a small, light-weight, XML + CSS renderer to attack the new surge of RSS content on the web.
> OK. Perhaps my issue is that when I think of "GUI" I don't immediately translate that into "XML + CSS".
I know. I don't think many people here do. I don't think XML+CSS, but I do think semi-structured content, like documents, and I'm putting forward XML+CSS as the devil we all know. Also, XUL and XHTML make for nice declarative UIs.

With C++, I feel you could create generic rendering components, Forms, Grids, Documents, and Canvases, and use generic programming to compose lean UIs. With C++ you can get the declaration and the behavior in the same language. You could compose UI renderers from generics, in the same way XUL is used to compose a UI from nested XML elements and JavaScript. The point of mentioning this application is that it doesn't fit into the binary classification of widgets versus vector graphics. It is somewhere in between.

Also, I'm well aware that the G doesn't belong in the library name. In a proper library I'd be able to model forms and documents on the console, just as I can now with curses and lynx. I think we are really talking about an event UI library.

In any case, I'm getting ready to wind down my participation in these discussions, because I don't think there is much interest here in a UI library, and I don't want to be accused again of hijacking, especially if I do make some progress on my own. I'll still be mucking around with Boost.Build, and maybe that will be of some use to you all.

-- 
Alan Gutierrez - alan@engrm.com
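
As an illustration of composing UI renderers from generics in the way XUL composes a UI from nested elements, here is a minimal sketch; the names (vbox, label, button) are hypothetical and chosen only to mirror the XUL analogy, not taken from any existing library.

    #include <iostream>
    #include <string>

    // Leaf components: each knows how to render itself.
    struct label
    {
        std::string text;
        void render(int indent) const
        {
            std::cout << std::string(indent, ' ') << "label: " << text << '\n';
        }
    };

    struct button
    {
        std::string text;
        void render(int indent) const
        {
            std::cout << std::string(indent, ' ') << "button: " << text << '\n';
        }
    };

    // Terminator for an empty child list.
    struct nil { void render(int) const {} };

    // A container whose children are fixed at compile time, in the way a
    // XUL <vbox> nests its child elements.
    template <class Head, class Tail = nil>
    struct vbox
    {
        Head head;
        Tail tail;
        void render(int indent) const
        {
            head.render(indent + 2);
            tail.render(indent + 2);
        }
    };

    int main()
    {
        // The declaration of the UI is a type; the behaviour is ordinary
        // C++, so both live in the same language.
        vbox<label, vbox<button> > form;
        form.head.text = "Name:";
        form.tail.head.text = "OK";
        form.render(0);
    }

Because the nesting is a type, a forms-and-grids build instantiates only forms and grids; a Canvas or Surface abstraction that the program never names is simply never compiled in.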