[gui] (Was GUI validation (was GUI Library Proposal for a Proposal))

"Eugene Lazutkin" <eugene@yymap.com> wrote
Just to revive the discussion about GUI, allow me to offer my 2c worth. (Sorry for the big post.)
Let's split GUI into two parts: G (2D graphics) and UI (user interface). :-) G can be used without UI: printing, generation of metafiles (e.g., PostScript, SVG) or pixmaps/bitmaps. UI can be used without G if only stock components are used: simple dialog boxes.
The idea of separating GUI into various conceptual areas seems good to me. However, I would have thought that the graphics-primitives part must be designed first. Graphically, a window should be part of a graphical system that deals with graphical transforms, which also opens the way for non-window elements to be manipulated generically... From the point of view of instantiating a graphical entity, it should be possible to know where it is required, how big, and in relation to what, IOW a coordinate system.
Now we want to make all these components as light-weight as possible for performance reasons. Ideally they should be mapped 1-to-1 to the facilities of the underlying platform, while preserving platform independence. I don't think it is productive to walk the original Java way and implement all widgets in terms of graphics primitives...
Drawing the various parts of a window, e.g. its border, is fundamentally a graphics operation too. Should it not be possible to implement a GUI on a simple operating system such as DOS, or another simple embedded OS that has no concept of high-level graphics operations? There needs to be some underlying abstraction which does not rely on an operating system for its existence.
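As a minimal sketch of such an underlying abstraction (the names draw_surface, set_pixel and draw_hline are illustrative, not from any existing library): a desktop backend could forward these calls to native drawing facilities, while a DOS or embedded backend could write straight into a framebuffer.

#include <cstdint>

// Hypothetical abstraction: a drawing surface that assumes no operating
// system.  Richer platforms may override the defaulted operations with
// native, accelerated calls.
class draw_surface
{
public:
    virtual ~draw_surface() {}

    virtual int width() const = 0;
    virtual int height() const = 0;

    // The one primitive a minimal backend must supply.
    virtual void set_pixel(int x, int y, std::uint32_t rgb) = 0;

    // Higher-level operations get default implementations in terms of
    // set_pixel, so a bare framebuffer backend still works.
    virtual void draw_hline(int x0, int x1, int y, std::uint32_t rgb)
    {
        for (int x = x0; x <= x1; ++x)
            set_pixel(x, y, rgb);
    }
};

The design choice of defaulting the higher-level operations onto one pixel primitive is what lets the abstraction exist without any operating-system support at all.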
The first order of business is 2D geometry. The following is required:
1) Point. A simple 2D entity with X and Y components. In real life at least two versions are required: integral (screen coordinates are integral) and floating point (for precise calculations).
I agree that this humble 2D entity, or family of entities, is the first graphics primitive that needs some resolution. Because they are used in so many ways, a general-purpose xy_pair type is the option I have currently been using (being the 2D equivalent of an int or a float), with operations for addition, subtraction, and multiplication and division by a numeric.

Perhaps this whole point/size issue shows a lack of clarity regarding the overall coordinate system. A 'point' describes a relationship between one entity IN another, suggesting a graph edge relation. A 'size' is a property OF an entity only. The observation that the difference between a 'Point' and a 'Size' could be used to discriminate functions is valid, but when used in a context (i.e. by finding the best model of the system) there is less need for pivoting (overload resolution) on particular types.

As another example... if screen units were one type and window units were some other type, e.g. millimetres or whatever, then conversion between these types could be achieved automatically rather than requiring an LPtoDP function etc. However, this again may not be necessary if the framework fits correctly, e.g. using a transform matrix as an edge property of a graph of the relationships between graphical entities.
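To make the xy_pair idea concrete, here is a minimal sketch assuming a plain value-type design; the typedef names screen_pair and precise_pair are invented for the example, not an agreed interface.

// A general-purpose 2D pair, parameterised on the component type so the same
// template serves integral screen coordinates and floating-point calculations.
template <typename T>
struct xy_pair
{
    T x;
    T y;

    xy_pair& operator+=(xy_pair const& rhs) { x += rhs.x; y += rhs.y; return *this; }
    xy_pair& operator-=(xy_pair const& rhs) { x -= rhs.x; y -= rhs.y; return *this; }
    xy_pair& operator*=(T s)                { x *= s;     y *= s;     return *this; }
    xy_pair& operator/=(T s)                { x /= s;     y /= s;     return *this; }
};

template <typename T>
xy_pair<T> operator+(xy_pair<T> a, xy_pair<T> const& b) { return a += b; }

template <typename T>
xy_pair<T> operator-(xy_pair<T> a, xy_pair<T> const& b) { return a -= b; }

template <typename T>
xy_pair<T> operator*(xy_pair<T> a, T s) { return a *= s; }

// Hypothetical typedefs: the same entity in the two "precisions" mentioned above.
typedef xy_pair<int>    screen_pair;   // integral screen coordinates
typedef xy_pair<double> precise_pair;  // precise calculations

A distinct 'size' could then be a second template or a tagged wrapper sharing the same representation, so the point/size distinction remains available for overload resolution where it is actually wanted.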
Additional consideration: the point should play well with the native platform-dependent coordinates.
I don't view things quite this way. This is surely a classic case where the concept (the What) should be considered before any particular implementation. My criticism of current libraries in this discussion is that they have basically started with a particular implementation and tried to abstract from that. IOW each implementation should be a 'workaround' for the ideal approach, which provides the standard from which to work.
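One hedged sketch of what "the implementation as a workaround for the ideal approach" could look like, assuming the xy_pair<T> template from the earlier sketch is visible: the platform-neutral type is primary, and each backend supplies a small adaptor for its own coordinate struct (native_point below is a placeholder, not a real platform type).

// Placeholder for whatever coordinate struct a given platform uses.
struct native_point { long x; long y; };

// Backend adaptors: the ideal type stays in charge, the native type is only
// touched at the boundary.
inline native_point to_native(xy_pair<int> const& p)
{
    native_point n;
    n.x = p.x;
    n.y = p.y;
    return n;
}

inline xy_pair<int> from_native(native_point const& n)
{
    xy_pair<int> p;
    p.x = static_cast<int>(n.x);
    p.y = static_cast<int>(n.y);
    return p;
}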
Additional consideration: the region should play well with native platform-dependent regions. Because a region is a relatively complex object, which is used mostly in a specific context, it may make sense to implement it as a wrapper around the native region.
I also think it would be necessary to work out the mechanics of regions first, else one gets into implementation problems again. If the goal is platform independence then, as well as reusing the native regions, it would need to be possible to build the mechanics from scratch.
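Both routes could sit behind one abstraction. The following is a sketch only, with illustrative names (region, rect_list_region): the platform-independent implementation builds the mechanics from scratch as a union of rectangles, and a wrapper around a native region object could implement the same interface where the platform provides one.

#include <cstddef>
#include <vector>

struct rect { int left, top, right, bottom; };

// The abstract mechanics, independent of any operating system.
class region
{
public:
    virtual ~region() {}
    virtual bool contains(int x, int y) const = 0;
    virtual rect bounding_box() const = 0;
};

// Built-from-scratch implementation: a union of axis-aligned rectangles.
class rect_list_region : public region
{
public:
    void add(rect const& r) { rects_.push_back(r); }

    bool contains(int x, int y) const
    {
        for (std::size_t i = 0; i != rects_.size(); ++i)
        {
            rect const& r = rects_[i];
            if (x >= r.left && x < r.right && y >= r.top && y < r.bottom)
                return true;
        }
        return false;
    }

    rect bounding_box() const
    {
        rect b = { 0, 0, 0, 0 };
        if (rects_.empty())
            return b;
        b = rects_[0];
        for (std::size_t i = 1; i != rects_.size(); ++i)
        {
            rect const& r = rects_[i];
            if (r.left   < b.left)   b.left   = r.left;
            if (r.top    < b.top)    b.top    = r.top;
            if (r.right  > b.right)  b.right  = r.right;
            if (r.bottom > b.bottom) b.bottom = r.bottom;
        }
        return b;
    }

private:
    std::vector<rect> rects_;
};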
5) Transformation matrix. In the 2D case it can be implemented as a 3x3 matrix or (to conserve cycles and space) as a 2x2 matrix plus an offset vector (6 values total). Usually it doesn't make any sense to use an integer matrix. Algorithms to be implemented: addition, multiplication. It is beneficial to have construction from an offset vector, a rotation angle, a mapping from rectangle to rectangle (with and without preserving aspect ratio) and so on. More complex algorithms are practical as well, like "zoom around a given point".
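A minimal sketch of the 2x2-matrix-plus-offset representation, with composition, construction from a rotation angle or an offset, and the "zoom around a given point" operation. The struct name transform2d and the composition convention (right-hand side applied first) are assumptions for the example, not a proposed interface.

#include <cmath>

struct transform2d
{
    // | a  b |          | e |
    // | c  d |  plus    | f |     : p' = M * p + offset
    double a, b, c, d, e, f;

    static transform2d identity()
    {
        transform2d t = { 1, 0, 0, 1, 0, 0 };
        return t;
    }

    static transform2d translation(double dx, double dy)
    {
        transform2d t = { 1, 0, 0, 1, dx, dy };
        return t;
    }

    static transform2d rotation(double radians)
    {
        double s = std::sin(radians), co = std::cos(radians);
        transform2d t = { co, -s, s, co, 0, 0 };
        return t;
    }

    // "Zoom around a given point": scale by k while keeping (cx, cy) fixed,
    // i.e. p' = k*p + (1 - k)*c.
    static transform2d zoom_about(double k, double cx, double cy)
    {
        transform2d t = { k, 0, 0, k, (1 - k) * cx, (1 - k) * cy };
        return t;
    }

    void apply(double x, double y, double& outx, double& outy) const
    {
        outx = a * x + b * y + e;
        outy = c * x + d * y + f;
    }
};

// Composition: (lhs * rhs) applies rhs first, then lhs.
inline transform2d operator*(transform2d const& l, transform2d const& r)
{
    transform2d t;
    t.a = l.a * r.a + l.b * r.c;
    t.b = l.a * r.b + l.b * r.d;
    t.c = l.c * r.a + l.d * r.c;
    t.d = l.c * r.b + l.d * r.d;
    t.e = l.a * r.e + l.b * r.f + l.e;
    t.f = l.c * r.e + l.d * r.f + l.f;
    return t;
}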
This relates to the overall coordinate system. The overall graph of spatial relationships between windows can also benefit from using transform-matrix 'edges'. This also provides a clearer framework. IOW, when 'moving' or 'sizing' a graphics entity (e.g. a window) there is a more obvious answer to the question "in relation to what?" Objects representing transforms seem to be a good way to go.
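To make the "transform matrix as edge property" idea concrete, a small sketch assuming the transform2d struct from the previous example: each entity stores the transform to its parent, the local-to-screen mapping falls out by composing the edges up to the root, and 'moving' or 'sizing' becomes an update of a single edge.

struct entity
{
    entity*     parent;      // null for the root (the screen/device)
    transform2d to_parent;   // edge property: local -> parent coordinates

    explicit entity(entity* p = 0, transform2d t = transform2d::identity())
        : parent(p), to_parent(t) {}

    // Compose the edge transforms along the path to the root to obtain the
    // local -> screen mapping ("in relation to what?" is always the parent).
    transform2d to_root() const
    {
        transform2d t = to_parent;
        for (entity const* e = parent; e != 0; e = e->parent)
            t = e->to_parent * t;   // apply this entity first, then ancestors
        return t;
    }
};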
Well, what are your thoughts? Discussions of UI are a recurrent event on Boost. I have seen several proposals for 2D geometry on this mailing list. Authors of several GUI toolkits are frequent visitors here. Are we ripe already?
Thanks... There is a lot more interesting stuff in your post and I like the overall tone. Would it be an idea to use the Boost wiki to further the discussion?

regards
Andy Little