
Eugene Lazutkin wrote:
"Aaron W. LaFramboise" <aaronrabiddog51@aaronwl.com> wrote in message news:419EB968.1050508@aaronwl.com...
Aleksey Chernoraenko wrote:
IMO, the best GUI "framework" would be a *set* of separate, specific libraries each of which would provide good abstractions/algorithms in some specific gui-related area:
I think that any new GUI interfaces created that fail to make these items a priority in their design will share the fate of all traditional user-interface frameworks: being inappropriate for modern design, and doomed to obsolescence.
Just to revive the discussion about GUI, allow me to offer my 2c worth. (Sorry for the big post.)
Let’s split GUI into two parts: G (2D graphics) and UI (user interface). :-) G can be used without UI: printing, generation of metafiles (e.g., Postscript, SVG) or pixmaps/bitmaps. UI can be used without G, if only stock components are used: simple dialog boxes.
That's the general idea :).
Graphics requires the following components to be implemented: canvas, graphics primitives (shapes to describe geometric parameters, attributes to describe visual parameters, clipping facilities), and 2D geometry.
The 2D Geometry (the basics of it, anyway) is common between graphics and UI (e.g. position/size of the components).
User interface requires the following components to be implemented: events, windowing, layout facilities, and widgets/controls. Layout requires 2D geometry. Widgets may require all listed components.
I am working on a GUI library in the boost-sandbox (boost/gui). At the moment it isn't very comprehensive and doesn't have any docs; I have been busy lately, so I haven't had a chance to work on it.
* events -- yes (using boost::signal and std::map).
* windowing -- yes (a basic 'frame'; a 'form' (read: dialog box) class is still needed).
* layout facilities -- simple move/resize is supported (layout managers are an extension that can be implemented on top of this framework).
* components -- what subset of available controls do we use?
Now we want to make all these components as light-weight as possible for performance reasons. Ideally they should be mapped 1-to-1 to facilities of the underlying platform, while preserving platform independence. I don’t think it is productive to go down the original Java path and implement all widgets in terms of graphics primitives, nor to implement graphics primitives in terms of a pixel rasterizer.
I entirely agree with this. This is what I am trying to achieve with the library I am working on in the sandbox.
First order of business is 2D geometry. The following is required:
1) Point. Simple 2D entity with X and Y components. In real life at least two versions are required: integral (screen coordinates are integral) and floating point (for precise calculations). Depending on the task, conversion between them may require round, ceil, or floor operations.
Also, platforms use integral (Win32) or floating point (Cocoa) values, so it is hard to use a standard representation.
Additional consideration: it may be integrated with more general geometric toolkit, which may implement 2D and 3D points as well as N-dimensional vectors.
I have separated out point (as position) and size, causing some heated discussions!
Additional consideration: point should play well with the native platform-dependent coordinate type.
This is a given. The base types derive from the platform types in my library, so boost::gui::point is a POINT in Win32, NSPoint in Cocoa, etc. I also use library-implemented properties to map the names to standard names.
2) Rectangle. Again, two versions are required. [snip]
Additional consideration: rectangle should play well with the native platform-dependent rectangle. There is a slight problem here: the two most popular platforms implement rectangles differently. MS Windows keeps 2 points, while X-Window keeps a point and a size. In my experience the 2-points representation is needed more frequently.
I have this as area. The difference in representation is problematic, since it makes writing generic code more complex.
[snip] 6) Vector as some kind of directional entity? Usually I use a coordinate for that, for practical reasons. I don’t think there is a big practical difference between a vector and a point in 2D space. 7) Size. See #6 above.
There is no practical difference, but consider:

    gui::window mainwnd( gui::point(5, 5) ); // what does this do?

Is this:
* setting mainwnd to be at (5, 5) on the screen?
* creating mainwnd with a width and height of 5?

    gui::window mainwnd( gui::position(5, 5) );  // at (5, 5)
    gui::window mainwnd2( gui::size(150, 500) ); // 150x500

Having the two as distinct types makes it easier to see what is going on.
1) Points and rectangles should be implemented platform-independently. This gives predictable performance on all platforms. Additionally, it is easier to use binary input/output to transfer them between platforms because the layout is standardized.
E.g., once I implemented rectangles like this (pseudo code):
    template<typename T>
    class Rect : public RectTrait<T>::layout
    {
        typedef typename RectTrait<T>::component_type component_type;
        // and so on
    };
On Windows I used Rect<int> and Rect<double> for calculations and Rect<RECT> for the platform-dependent interface. Rect<RECT> was based on RECT and was binary compatible with all native functions. Anyway, this stuff was usually hidden.
I chose to do:

    NSRect        --> gui::cocoa::area
    RECT          --> gui::win::area
    RectangleType --> gui::palmos::area

where gui::cocoa, gui::win, etc. are where the platform-specific details are contained, providing a common interface. This makes it easier to write the platform-independent layer. <boost/gui/platform.hpp> is responsible for detecting the operating system + API in use. It sets gui::platf to be a namespace alias for the namespace of the selected API; thus, on Linux/GTK, gui::platf = gui::gtk. This then allows:

    gui::platf::area --> gui::area

There are still some details to work out, and I only really have a working implementation for Win32.
What do we need from user interface?
Graphics can be easily abstracted without a big loss of performance. Most common graphics tasks can be automated and done once. The reason is quite simple: the foundation is pretty much universal. Unfortunately, that is not the case with user interface. All OS vendors regard UI as a major differentiator between platforms. Different look and feel is the minor part of it. The standard set of controls/widgets differs across platforms. Different conventions are used to implement similar things. E.g., hot keys are different, menu layout is different, mouse buttons and click patterns can be different, and so on. Localization rules complicate everything.
Ah, the joys of GUI programming! ;)
Introduction of a "special look and feel" is not viable. Just remember the unsuccessful Java efforts. Users of the respective platforms prefer applications with a native look and feel.
I believe the default should be to use the native L&F, but allow owner/custom draw facilities to be built on top of the library.
Given all that, it looks like the practical way is to provide a declarative way to describe the user interface. (XUL, XAML?) We should be able to combine components (widgets/controls) on a 2D surface and define event handling and layout properties. Widgets/controls can be taken from a predefined set of library components, which are mapped to native controls where possible, or they can be custom components.
I was thinking of a Java-style approach:

    gui::frame myframe( "C++GUI is easy..." );
    gui::button yes( &myframe, "Yes" );
    gui::button no( &myframe, "No" );

Note:
* The use of constructors to create the components (Java-style). This does not mean that the components have to be custom drawn -- I use this technique for creating GUIs in Windows without needing the Create/OnCreate creation flow of MFC.
* There are no demands on the management of the component objects. The lifetime of the objects is managed by the user, not the library.
This description of UI can be internal (in the program) or external (e.g., in a file using some format). An external declaration can be replaced without recompiling the program. It simplifies localization and minor (mostly visual) tweaks. The downside is a possible mismatch with the program code.
Yes -- how do you map the external file to the components in the program code? Note that I intend to support a "form" component. This is similar to what you are describing, I think. Mac, I believe, supports forms. Windows supports the form concept as dialog boxes described in a resource file and bound to the application at link time.
Each UI object should carry a list of properties, which can be used by the layout engine (a replaceable component itself) to modify the geometry of a component depending on window size. This facility should be used for the initial layout, to take care of the different DPI of the physical device, and for potential resizing. Some required properties are obvious, like the size and position of a component in some units in some reference coordinate system, and so on. Some properties can be more elaborate, like glue in TeX (elasticity, preferred size), which will govern the transformation and position of a component under different conditions.
It depends on how complex you want to make the library. I agree that size/position information should be available. In order to implement layouts, you would need to get the minimum (preferred) size of the component and the alignment (horizontal|vertical) used to adjust a rectangle when resizing. Windows allows you to resize components en masse, which makes resizing complex UIs efficient. Do we support this? How does it port to other platforms? How does this affect the resizing execution flow? I think that the layout architecture should be orthogonal to the UI elements, i.e. we should not impose layout code on the user:

    class my_frame: public gui::frame, public gui::grid_layout
    {
        ...
    };

The layout managers can then implement "stretchy" component logic as they require.
In order to implement all this we need an engine, which will interpret the UI description according to platform-specific rules, and work as an intermediary for event processing code and, possibly, custom painting. Obviously this engine should be customizable with replaceable components.
The sketch above is not sufficiently detailed. For example, it doesn’t define how to implement a custom component. But I think it gives a preview of what can be done with this approach: true multi-platform support,
available with my code.
simplified creation of UI (instantiate user interface from description),
easy to build using constructors.
simplified painting (in most cases just dump graphics primitives once),
I do not have any graphics code -- I make no assumptions on how this is done.
simplified selection (get the list of selected objects or the top object), simplified UI refresh of visuals (just modify the description), virtually no-code zooming/panning, unified printing (including metafiles and raster images), and so on.
These are not supported in my code as it stands. The zooming/panning code falls into the category of layout managers: how does this work with a drawing application? A text editor? A web browser (using an external HTML rendering component)? Etc.
Now if you look up you will see that this is a big, ambitious project. I am not sure it should be done under the Boost umbrella. I am not sure we have the available bandwidth to do it. If you think otherwise, please give me your reasons.
I would like to collaborate on a joint effort. The way I see it is to have two project elements:
* core -- basic windowing facilities, events, etc. This would be a candidate for adoption into Boost. Benefits: support (the ability to test on a large number of platforms, etc.) and standardization (C++ needs a decent GUI library as part of the standard, and Boost is the best platform on which to develop such a proposal).
* extension -- advanced components, layout managers, etc. These would be a sort of "proof of concept".
Of course, it can be downscaled. For example, 2D geometry is universally useful. Even if you don’t want to go multi-platform (the most popular choice of developers) you can still find some use for it. Graphics-heavy applications can use the G part, implementing UI separately for the 1-2 platforms they want to support. A simplified UI part can be used to generate simple dialog boxes for different platforms. Different C++ UI bindings can be created for different platforms. It is not the multi-platform way, but it would still be better than the most popular "toolkits".
Well, what are your thoughts? Discussions of UI are a recurrent event in Boost. I have seen several proposals for 2D geometry on this mailing list. Authors of several GUI toolkits are frequent visitors here. Are we ripe already?
It is about time that C++ had a standard GUI framework. Maybe we can have an offshoot of Boost (and a boost.gui mailing list) like there is for boost.build, etc. This would allow us to focus our efforts on developing such a library. I see this project as a collaborative effort because of its complexity. Regards, Reece