[GUI] some random thoughts on GUI libs

Some random thoughts on GUI libraries:

Message filtering, dispatch:
Somewhere buried in Joel de Guzman's Spirit or Phoenix docs I recall mention of functional programming techniques being useful for organizing graphical user interface libraries. This comment stuck in my head, as I generally dislike the organization of message dispatch and filtering code in libraries like MFC and WTL. I've been interested for some time in the idea of leveraging Spirit to implement event processing systems (here we parse event streams, not characters, using Spirit's grammar to encode state machine transition logic and dispatching specific state-bound actions via Spirit's semantic actions). In the context of a GUI library, it's occurred to me that Spirit might be employed to "parse" streams of mouse, keyboard, and graphics subsystem notification events via a custom Spirit scanner. This of course implies that mouse, keyboard, and graphics subsystem hooks are abstracted in some platform-independent manner, but that's doable. In essence, this is just another way to encode complex state machines. But I think it would be cool to encode the message filtering and dispatch logic in EBNF!

Message routing:
signals / slots?

Window / subwindow hierarchy:
Given that typical GUI applications have many "windows" that are created and destroyed in response to (a) user input and (b) non-graphical subsystem events, a succinct mechanism is needed to keep track of parent/child window hierarchies. This information is used to calculate and transform drawing coordinates, and to effect changes to message dispatching, routing, and filtering. I think this information should be modeled using the BGL so we can customize the overall behavior of the system using graph algorithms. One could even go a step further and use a top-level graph to model the windowing hierarchy, with another graph associated with vertex properties encoding an arbitrary closed polyline to represent non-rectangular windows.

Threads:
It's useful to be able to associate a "window" with a thread for high-performance GUI drawing. This is somewhat of a pain under Windows for legacy reasons, but it's possible (I've done it within the context of WTL). Here you simply blast the video subsystem with draw requests from multiple threads and place the onus of optimizing the call sequence on the driver and the underlying hardware. It would be useful to take this into account explicitly and build multithread support directly into the library. Placing the burden of this on the library user will invariably limit the portability of any application written against the lib, I think.

Just my two cents worth - Chris
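As a rough illustration of the BGL idea above, here is a minimal, hypothetical sketch of a window hierarchy modeled as a Boost.Graph adjacency_list. The window_info bundled property and its field names are invented for this example and are not from any existing GUI library.

    #include <boost/graph/adjacency_list.hpp>
    #include <string>

    // Hypothetical bundled vertex property: whatever per-window data is needed.
    struct window_info {
        std::string name;
        int x, y, width, height;  // placement relative to the parent window
    };

    // Parent/child window hierarchy as a directed graph: edge = parent -> child.
    typedef boost::adjacency_list<
        boost::vecS, boost::vecS, boost::directedS, window_info> window_graph;
    typedef boost::graph_traits<window_graph>::vertex_descriptor window_handle;

    int main() {
        window_graph g;
        window_handle frame  = boost::add_vertex(window_info{"frame", 0, 0, 640, 480}, g);
        window_handle button = boost::add_vertex(window_info{"button", 10, 10, 80, 24}, g);
        boost::add_edge(frame, button, g);  // the button is a child of the frame

        // From here, stock graph algorithms (DFS for repaint or z-order traversal,
        // path walks for coordinate transforms, etc.) can drive the system's behavior.
        return 0;
    }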

* Christopher D. Russell <cdr@encapsule.com> [2004-12-23 10:16]:
Some random thoughts on GUI libraries:
Message filtering, dispatch:
Somewhere buried in Joel de Guzman's Spirit or Phoenix docs I recall mention of functional programming techniques being useful for working with graphical user interface library organization. This comment stuck in my head as I generally dislike the organization of message dispatch and filtering code in libraries like the MFC and the WTL. I've been interested in the idea of leveraging Spirit to implement event processing systems for some time (here we parse event streams - not characters, using Spirit's grammar to encode state machine transition logic, and dispatching specific state-bound actions via Spirit's semantic actions). In the context of a GUI library, it's occurred to me that Spirit might be employed to "parse" streams of mouse, keyboard, and graphic subsystem notification events via a custom Spirit scanner. This of course implies that mouse, keyboard, and graphic subsystem hooks are abstracted in some platform independent manner but that's doable.
In essence, this is just another way to encode complex state machines. But I think it would be cool to encode the message filtering and dispatch logic in EBNF!
This sounds like an interesting idea, but I don't understand it. 1) Treat a stream of events as lexemes and match them using a parser. 2) Dispatch the message to a control based on the match. Do you do this in order to recognise gestures, like begin drag? Do you use it to recognize any single event? It seems like many events, such as a menu click, are mapped one-to-one to a procedure, and thus you get those strange macro tables like in MFC. You are saying that Spirit would be a more pleasing representation, and would allow for chords and gestures? Because when I think of a windowing application, I think of it as stateless, ready to handle any request the user generates, and waiting for a stream of events strikes me as a hung user interface, so "complex state machine" confuses me. -- Alan Gutierrez - alan@engrm.com

Hi Alan, I have to step out and don't have time to write a complete response just now. But I will later today or tomorrow. One thought to mull over in the interim:
Because when I think of a windowing application, I think of it as stateless, ready to handle any request the user generates, and waiting for a stream of events strikes me as a hung user interface, so "complex state machine" confuses me.
At a high level of abstraction a GUI is very definitely a state machine - a pretty complex one at that. For example, consider the process of clicking a button widget. A stream of mouse events is "parsed" against some set of rules that might encode something like "if window X has the focus AND the mouse cursor is within a 2D region circumscribed by some polyline AND the left button is clicked THEN dispatch drawing code to repaint the button in the depressed position AND relay the information that the button press occurred to handler Y". Or consider the case of the user typing on the keyboard. Is the ALT key depressed? Shift? Which window is focused? Should the keystroke change the focus or be directed to a handler associated with the currently focused window? The challenge is to come up with succinct ways to encode all these "policies" explicitly as a set of states and state transition vectors (along with associated actions - i.e. handlers).

If we can agree that GUIs are inherently state-based, then the discussion becomes how best to encode the various policies so that they are (a) readable and (b) easily extensible. The idea of perhaps adapting Spirit for this task is based on the observation that, in essence, it allows you to succinctly encapsulate extremely complex (often recursive) policy rules and conveniently bind them to semantic actions. There are many ways to encode a state machine, of course. There's an FSM library that has at least been put up for review, for example. BGL can also be usefully employed to represent state machines. However, there's a lot of work invested in Spirit and maybe it makes sense to leverage it? (A minimal sketch of encoding such a rule as data appears after the quoted message below.)

I'll write more later. I was supposed to be out of here 30 minutes ago ;) - Chris

"Alan Gutierrez" <alan-boost@engrm.com> wrote in message news:20041223161132.GA6689@maribor.izzy.net...
* Christopher D. Russell <cdr@encapsule.com> [2004-12-23 10:16]:
Some random thoughts on GUI libraries:
Message filtering, dispatch:
Somewhere buried in Joel de Guzman's Spirit or Phoenix docs I recall mention of functional programming techniques being useful for working with graphical user interface library organization. This comment stuck in my head as I generally dislike the organization of message dispatch and filtering code in libraries like the MFC and the WTL. I've been interested in the idea of leveraging Spirit to implement event processing systems for some time (here we parse event streams - not characters, using Spirit's grammar to encode state machine transition logic, and dispatching specific state-bound actions via Spirit's semantic actions). In the context of a GUI library, it's occurred to me that Spirit might be employed to "parse" streams of mouse, keyboard, and graphic subsystem notification events via a custom Spirit scanner. This of course implies that mouse, keyboard, and graphic subsystem hooks are abstracted in some platform independent manner but that's doable.
In essence, this is just another way to encode complex state machines. But I think it would be cool to encode the message filtering and dispatch logic in EBNF!
This sounds like an interesting idea, but I don't understand it.
1) Treat a stream of events as lexemes and match them using a parser. 2) Dispatch the message to a control based on the match.
Do you do this in order to recognise gestures, like begin drag?
Do you use it to recognize any single event?
It seems like many events, such as a menu click, are mapped one-to-one to a procedure, and thus you get those strange macro tables like in MFC. You are saying that Spirit would be a more pleasing representation, and would allow for chords and gestures?
Because when I think of a windowing application, I think of it as stateless, ready to handle any request the user generates, and waiting for a stream of events strikes me as a hung user interface, so "complex state machine" confuses me.
-- Alan Gutierrez - alan@engrm.com
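Below is a minimal sketch of the "rule as data" encoding described above, in plain C++ rather than Spirit. All names here (event, gui_state, rule, dispatcher) are invented for illustration; a grammar-based encoding would replace the flat rule list with composable, possibly recursive productions bound to semantic actions.

    #include <functional>
    #include <vector>

    // Hypothetical event record for this sketch; not from any existing library.
    struct event {
        enum kind { mouse_down, mouse_up, key_press } type;
        int x, y;            // cursor position, if applicable
        unsigned window_id;  // window the event was routed to
    };

    struct gui_state {
        unsigned focused_window;
    };

    // A rule pairs "when does this apply?" with "what do we do?" - the textual
    // policy ("if window X has focus AND the cursor is inside the button AND the
    // left button is clicked THEN repaint and notify handler Y") becomes data.
    struct rule {
        std::function<bool(const gui_state&, const event&)> when;
        std::function<void(gui_state&, const event&)> then;
    };

    class dispatcher {
        std::vector<rule> rules_;
    public:
        void add(rule r) { rules_.push_back(std::move(r)); }

        // Feed one event from the stream; the first matching rule fires.
        void operator()(gui_state& state, const event& ev) const {
            for (const rule& r : rules_)
                if (r.when(state, ev)) { r.then(state, ev); return; }
        }
    };

A real system would also need the active rule set to vary with state (focus changes, modal loops, drag gestures), which is where the "complex state machine" and the appeal of a recursive grammar come in.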

Threads:
It's useful to be able to associate a "window" with a thread for high-performance GUI drawing. This is somewhat of a pain under Windows for legacy reasons but it's possible (I've done it within the context of the WTL). Here you simply blast the video subsystem with draw requests from multiple threads and place the onus of optimizing the call sequence on the driver and the underlying hardware. It would be useful to take this into account explicitly and build multithread support directly into the library. Placing the burden of this on the library user will invariably limit the portability of any application written against the lib I think.
Both X and GDI are not thread-safe (i.e. a lot of the function calls are not re-entrant, and the global variables are not locked)... you shouldn't do this... You may, however, draw from multiple threads, provided that you manually serialise the GUI commands. In reality you should have a single GUI thread plus some worker threads - and this technique will be faster than multiple GUI writers, as you will only need to serialise when data needs to be accessed from the worker threads, as opposed to serialising for every draw. FWIW, the underlying driver/hardware takes no part in 'threaded-ness capabilities'; it's X/GDI that you have to worry about, as it is that subsystem that controls what gets drawn. Mathew
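A minimal sketch of the "single GUI thread plus worker threads" arrangement described above, assuming a simple command-queue design (the class and function names are invented for illustration): workers never touch the drawing API directly, they post closures that only the GUI thread executes.

    #include <condition_variable>
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    class gui_command_queue {
        std::queue<std::function<void()> > commands_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    public:
        // Callable from any worker thread.
        void post(std::function<void()> cmd) {
            { std::lock_guard<std::mutex> lock(m_); commands_.push(std::move(cmd)); }
            cv_.notify_one();
        }
        void shutdown() {
            { std::lock_guard<std::mutex> lock(m_); done_ = true; }
            cv_.notify_one();
        }
        // Run on the GUI thread only: drains commands until shutdown.
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !commands_.empty(); });
                if (commands_.empty()) return;  // done_ set and nothing left to draw
                std::function<void()> cmd = std::move(commands_.front());
                commands_.pop();
                lock.unlock();
                cmd();  // the only place where X/GDI calls would actually be made
            }
        }
    };

    int main() {
        gui_command_queue queue;
        std::thread worker([&queue] {
            // A worker computes something, then posts the draw request.
            queue.post([] { std::cout << "draw on the GUI thread\n"; });
            queue.shutdown();
        });
        queue.run();   // pretend the main thread is the GUI thread
        worker.join();
    }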

Christopher D. Russell wrote:
> Some random thoughts on GUI libraries:
>
> Message filtering, dispatch:
>
> Somewhere buried in Joel de Guzman's Spirit or Phoenix docs I recall mention of functional programming techniques being useful for working with graphical user interface library organization.

This sounds interesting. I am not sure how it would look, though.

> This comment stuck in my head as I generally dislike the organization of message dispatch and filtering code in libraries like the MFC and the WTL.

I have never liked their approach either. I like the Win32GUI approach and have adapted it into my Boost.GUI library.

Windows has 4 types of message flow: the normal processing of messages (e.g. WM_MOVE); processing WM_COMMAND notifications (e.g. menus or button presses); processing WM_NOTIFY notifications (e.g. virtual list data requests); and events sent by controls to parents (and possibly reflected back to the original sender). Win32GUI uses a std::map< UINT, handler > to process events and has one for the normal flow and one each for the notification flows; I am not sure how it handles reflected events. The problem with this is that it has 3 maps for event handling, with the notification handlers being Windows-specific.

The solution I have devised is to have a single std::map< event_id, handler > that processes all events sent to onevent( event * ). The event_id type is OS/platform-specific, so for GTK+ it could be std::string. In Windows, this is

    struct event_id { unsigned int id; unsigned int code; };

where id is the event type (WM_PAINT, etc.) and code is normally 0; for WM_COMMAND and WM_NOTIFY messages it is either 0 (for generic handlers) or the notification code (e.g. BN_PRESSED for button presses). The onevent function for Windows will then process reflected messages.

The handler is currently defined as boost::signal< bool ( event * ev ) >, such that ev is the OS-specific event structure and the return value indicates whether the message has been processed (and processing should be stopped). Since the signal has a non-member signature, you need a make_handler( Object * ob, bool ( Object::* h )( event * ev )) helper that constructs a function object that will call the member function. This is far more powerful than binding specifically to a component member function as it allows:
* an application class to respond to close events;
* the radio_group class to process a group of radio buttons to keep mutual exclusivity, handling onpressed() events;
* layout managers to respond to resize events without you having to add the handlers manually, or write specific code to interact with the layout manager (NOTE: this is currently in code on my machine, and I will post it after Christmas).

There is a defect with my code at present: returning true from a handler will stop all event processing. This affects filtering handlers such as radio_group and cascading handlers like layout managers, and will result in undefined behaviour. I am thinking of binding a handler-type to the event handler:

    handler | filter | cascading

which dictates how the events are processed.
The event processor allows you to attach a handler to a component using:

    signal_type handler_for( event_id );

for example:

    frame.handler_for( event::onclose ).connect( &terminate_gui );

Buttons define a signal_type onpressed() method that is an alias for the OS-specific handler_for( button-pressed-event ), which in Windows is:

    handler_for( event_id( WM_COMMAND, BN_PRESSED ))

I am also considering returning a proxy class to a signal object to allow for:

    new_.onpressed().attach( handler_type::handler, this, &mainfrm::onnew );
    cancel.onpressed().attach( handler_type::handler, &terminate_gui );

> In essence, this is just another way to encode complex state machines. But I think it would be cool to encode the message filtering and dispatch logic in EBNF!

This sounds cool. Could you give an example?

It should be possible to build the state machine logic on top of the event processing outlined above. For example:

    button quit;
    quit
        // handle button presses
        << state::pressed[ &terminate_gui ]
        // convert pressing the spacebar into pressing the button
        << state::keypress( ' ' ).fire( state::pressed )
        ;

Is this what you are aiming at? I am not sure on the implementability of the above, but it will most likely be built on top of the event processing/binding that is in place in my library. This would be done by constructing the appropriate function objects and the appropriate calls to handler_for.

The state machine should allow for successive slots, e.g.:

    frame
        // save the state of the application (e.g. position)
        << state::close[ state::bind( frame, &mainfrm::onsavestate )]
        // terminate the application
        << state::close[ &terminate_gui ]
        ;

> Message routing:
>
> signals / slots?

I am using signals/slots for event handling (see above for details).

Regards,
Reece
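A rough, self-contained sketch of the single-map dispatch Reece describes. Here std::function stands in for the boost::signal handler and the Windows specifics are reduced to plain integers; none of this is the actual Boost.GUI or Win32GUI code.

    #include <functional>
    #include <map>

    struct event { unsigned int id; unsigned int code; /* plus OS-specific payload */ };

    // Platform-specific key: message id plus notification code (0 for plain
    // messages; the notification code for WM_COMMAND / WM_NOTIFY style events).
    struct event_id {
        unsigned int id;
        unsigned int code;
        bool operator<(const event_id& other) const {
            return id < other.id || (id == other.id && code < other.code);
        }
    };

    // Simplification: one callable per key; returning true means "handled, stop".
    typedef std::function<bool(event*)> handler;

    class event_processor {
        std::map<event_id, handler> handlers_;
    public:
        handler& handler_for(const event_id& id) { return handlers_[id]; }

        bool onevent(event* ev) {
            auto it = handlers_.find(event_id{ev->id, ev->code});
            return it != handlers_.end() && it->second && it->second(ev);
        }
    };

With boost::signal in place of the single std::function, several slots can be attached per event_id, which is where the handler/filter/cascading distinction Reece mentions becomes relevant.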

* Reece Dunn <msclrhd@hotmail.com> [2004-12-24 06:25]:
Christopher D. Russell wrote:
Some random thoughts on GUI libraries:
Message filtering, dispatch:
Somewhere buried in Joel de Guzman's Spirit or Phoenix docs I recall mention of functional programming techniques being useful for working with graphical user interface library organization.
This sounds interesting. I am not sure how it would look, though.
This comment stuck in my head as I generally dislike the organization of message dispatch and filtering code in libraries like the MFC and the WTL.
I have never liked their approach either. I like the Win32GUI approach and have adapted it into my Boost.GUI library.
[snip]
The event processor allows you to attach a handler to a component using:
signal_type handler_for( event_id );
for example:
frame.handler_for( event::onclose ).connect( &terminate_gui );
Buttons define a signal_type onpressed() method that is an alias for the OS-specific: handler_for( button-pressed-event ) which in Windows is: handler_for( event_id( WM_COMMAND, BN_PRESSED ))
I am also considering returning a proxy class to a signal object to allow for:
new_.onpressed().attach( handler_type::handler, this, &mainfrm::onnew );
cancel.onpressed().attach( handler_type::handler, &terminate_gui );
In essence, this is just another way to encode complex state machines. But I think it would be cool to encode the message filtering and dispatch logic in EBNF!
This sounds cool. Could you give an example?
It should be possible to build the state machine logic on top of the event processing outlined above. For example:
button quit;
quit
    // handle button presses
    << state::pressed[ &terminate_gui ]
    // convert pressing the spacebar into pressing the button
    << state::keypress( ' ' ).fire( state::pressed )
    ;
Is this what you are aiming at? I am not sure on the implementability of the above, but it will most likely be built on top of the event processing/binding that is in place in my library. This would be done by constructing the appropriate function objects and the appropriate calls to handler_for.
The state machine should allow for successive slots, e.g.:
frame
    // save the state of the application (e.g. position)
    << state::close[ state::bind( frame, &mainfrm::onsavestate )]
    // terminate the application
    << state::close[ &terminate_gui ]
    ;
I like both approaches. I like Christopher Russell's idea, as best as I can imagine it, when I consider the complexity of routing a message through an MDI illustration application. IIUC, one could use EBNF to describe chords or gestures, and map those to the right function in the right object. A simple message table is not expressive enough to dispatch a message for something like Adobe Illustrator.

At the same time, I think that what Reece has developed is a bit more familiar to developers when they are dealing with a control, like a button, that generates a well-defined event, like a button press. If I were putting together a JTable/TableModel sort of program, it is what I'd expect, and I wouldn't have to learn a Spirit grammar.

Furthermore, if all I wanted was a dialog box to configure a handful of options, I don't know that I'd want to worry about event processing at all. Rather, I'd like to specify a mapping from the dialog box to a structure, perhaps attaching validation to some of the controls, and just hand it off to a function that would gather the information and not return until it was correct and accounted for. An FSM is the right way to look at a Canvas application, but if all I need is a Form submission, I don't want to turn on that part of my brain. (This is why I put forward a taxonomy.)

Finally, I've been able to create cool interfaces using the event bubbling model of the XML DOM, and I'd like to have that available. In the XML DOM the message is routed to the target (innermost) document division in an XML tree structure, and then the event bubbles up the tree until an event handler is found. If you've not used this, it might sound like six of one, but it really is an economical expression of event handling in a document. Not necessarily an XML document, of course.

This is all to suggest that either message handling is expressed using Christopher's most expressive method, and these other methods are implemented in those terms, or an ungainly yet efficient message routing library is developed, and Christopher's method is one interface to that library.

-- Alan Gutierrez - alan@engrm.com
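A tiny sketch of the bubbling idea in C++ terms, with all names (component, event, bubble) invented for illustration: the event is delivered to the innermost component and walks up the parent chain until some handler claims it.

    #include <functional>
    #include <string>
    #include <vector>

    struct event { std::string name; };

    class component {
        component* parent_;
        std::vector<component*> children_;
        std::function<bool(const event&)> handler_;  // returns true when handled
    public:
        explicit component(component* parent = nullptr) : parent_(parent) {
            if (parent) parent->children_.push_back(this);
        }
        void on(std::function<bool(const event&)> h) { handler_ = std::move(h); }

        // DOM-style bubbling: start at the target, walk up until a handler claims it.
        void bubble(const event& ev) {
            for (component* c = this; c != nullptr; c = c->parent_)
                if (c->handler_ && c->handler_(ev)) return;
        }
    };

    int main() {
        component document;             // root
        component division(&document);  // intermediate node
        component widget(&division);    // innermost target
        document.on([](const event& ev) { return ev.name == "click"; });

        widget.bubble(event{"click"});  // unhandled below, handled at the root
        return 0;
    }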

An FSM is the right way to look at a Canvas application, but if all I need is a Form submission, I don't want to turn on that part of my brain.
Note that this is why win32gui has the save_dlg class:
http://www.torjo.com/win32gui/save_dlg.html

In other words, something like this should be developed on top of the GUI lib.

Best,
John

-- John Torjo, Contributing editor, C/C++ Users Journal
-- "Win32 GUI Generics" -- generics & GUI do mix, after all
-- http://www.torjo.com/win32gui/
-- http://www.torjo.com/cb/ - Click, Build, Run!

* John Torjo <john.lists@torjo.com> [2004-12-28 06:09]:
An FSM is the right way to look at a Canvas application, but if all I need is a Form submission, I don't want to turn on that part of my brain.
Note that this is why win32gui has the save_dlg class:
In other words, something like this should be developed on top of the GUI lib.
Or parallel to the GUI lib. Because there may be no G. Your implementation could be bound to ncurses or a telephony system. Something I'm sure you're aware of, but I thought I'd make the point that this is a good place for generics. -- Alan Gutierrez - alan@engrm.com

Reece Dunn wrote:
The handler is currently defined as boost::signal< bool ( event * ev ) >, such that ev is the OS-specific event structure and the return value indicates whether the message has been processed (and processing should be stopped). Since the signal has a non-member signature, you need a make_handler( Object * ob, bool ( Object::* h )( event * ev )) helper that constructs a function object that will call the member function.
If you use boost::function<> to specify the event handler, you do not need make_handler or any other type of extraneous support. boost::function<> can hold any type of function object. The end user just has to know how to create a boost::function<>, most easily done through boost::bind<>. If you want to make it easy for your end user, macro-wise or function-wise, sure, you can provide make_handler or an appropriate macro, but why not let the end user decide what they want to do? boost::bind is powerful, so do not take it away from those who want to use it. By using boost::function<> you are providing all the end user needs for an event handler of any function type. Doing less than that just takes programmers back to the old restrictive ways of MFC, wxWidgets, etc. Since Borland (with __closures) and Microsoft (with delegates) have already moved on to their own non-standard C++ ways of doing universal function objects, and boost::function is able to do this purely using standard C++, not using it seems just another step backward in C++ programming.
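A small sketch of the point Edward is making, using boost::function and boost::bind with a hypothetical event type and component (none of these names come from Boost.GUI): the end user binds member functions, free functions, or function objects themselves, with no make_handler helper.

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <iostream>

    struct event { int id; };

    struct component {
        // The handler is just a boost::function; any callable will do.
        boost::function<bool (event*)> on_event;
    };

    struct mainfrm {
        bool onnew(event* ev) {
            std::cout << "handling event " << ev->id << "\n";
            return true;  // handled
        }
    };

    int main() {
        mainfrm frame;
        component button;

        // The end user binds a member function directly via boost::bind.
        button.on_event = boost::bind(&mainfrm::onnew, &frame, _1);

        event ev = { 42 };
        if (button.on_event) button.on_event(&ev);
        return 0;
    }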

Reece Dunn wrote:
The handler is currently defined as boost::signal< bool ( event * ev ) >, such that ev is the OS-specific event structure and the return value indicates whether the message has been processed (and processing should be stopped). Since the signal has a non-member signature, you need a make_handler( Object * ob, bool ( Object::* h )( event * ev )) helper that constructs a function object that will call the member function.
Better would be boost::signal< boost::function< bool ( event * ev ) > >. That way the programmer can pass any type of C++ function object as the handler, not just member functions of classes derived from a monolithic base class. boost::bind<> is very good at creating function objects which boost::function can hold.

Windows has 4 types of message flow: the normal processing of messages (e.g. WM_MOVE); processing WM_COMMAND notifications (e.g. menus or button presses); processing WM_NOTIFY notifications (e.g. virtual list data requests); events sent by controls to parents (and possibly reflected back to the original sender). Win32GUI uses a std::map< UINT, handler > to process events and has one for the normal flow and one each for the notification flows; I am not sure how it handles reflection/reflected events. The problem with this is that it has 3 maps for event handling with the notification handlers being Windows-specific.
Quick note: this is quite implementation specific, and I intend to change it, as I will port to other platforms.

Best,
John

-- John Torjo, Contributing editor, C/C++ Users Journal
-- "Win32 GUI Generics" -- generics & GUI do mix, after all
-- http://www.torjo.com/win32gui/
-- http://www.torjo.com/cb/ - Click, Build, Run!
participants (6)
- Alan Gutierrez
- Christopher D. Russell
- Edward Diener
- John Torjo
- Mathew Robertson
- Reece Dunn