Thoughts for a GUI (Primitives) Library

As it stands, neither Boost nor C++ has any support for GUI applications. Therefore I am suggesting a library that provides the bare minimum required to create a GUI application. My thoughts on what this library would contain are as follows:
- Types for controls (contexts bound to the extent of their parent) and windows (contexts that remain separate from their parents).
- An event handler class, passed when controls and windows are created, that contains all information needed to handle events for the new context.
- Functions for generating events (if applicable) and a function (or functions) to initiate message handling (unless the event loop is run in its own thread, as in Java).
- Functions to access and manipulate the data of the contexts (size, position, z-order, etc.).
- Miscellaneous other types used for support (such as for screen positions).
- A way to retrieve the native concept of a window/control.
As you can see, there are no controls/widgets/etc. defined, either by the library or using the native versions. Such things can be added later, if people can ever agree on how they should be implemented. It needs a lot of refining, but I believe this covers the *minimum* support that is needed for a GUI.
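The primitives above could be sketched roughly as follows. Everything here (`context`, `event_handler`, the member names, using `void*` for the native handle) is invented for illustration, not a proposed final API:

```cpp
#include <cassert>
#include <memory>

// Hypothetical sketch of the proposed primitives; all names are invented.
struct point { int x, y; };
struct extent { int width, height; };

// Event handler passed at creation time; the back end routes events
// for the new context through it.
struct event_handler {
    virtual ~event_handler() = default;
    virtual void on_create() {}
    virtual void on_destroy() {}
};

// A context owns its position/size and exposes the native handle opaquely.
class context {
public:
    explicit context(std::shared_ptr<event_handler> h) : handler_(std::move(h)) {
        if (handler_) handler_->on_create();
    }
    ~context() { if (handler_) handler_->on_destroy(); }

    point position() const { return pos_; }
    void move_to(point p) { pos_ = p; }
    extent size() const { return size_; }
    void resize(extent e) { size_ = e; }

    // Escape hatch to the native window/control (void* stands in here).
    void* native_handle() const { return native_; }

private:
    std::shared_ptr<event_handler> handler_;
    point pos_{0, 0};
    extent size_{0, 0};
    void* native_ = nullptr;
};
```

A window type and a control type would both be contexts in this sense, differing in how they relate to their parent.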

I don't know if there's enough space for this. There are plenty of GUI frameworks, like Qt or wxWidgets, which are more advanced. -- I want to be the ray of sunshine that wakes you each day, to make you breathe and live in me. "Favola" - Modà.

Expanding on my earlier summary, here is a list of some events for the library to support, to get the discussion on the design of the library started:
*Creation:* The first event handled by the context; it is sent so any setup can be performed.
*Destruction:* Sent just before a context is destroyed (destruction finishes once the message is processed); used so the event handler can clean up related resources.
*Resizing/Moving:* Sent before the context is resized or moved; includes the new position and/or size. It should be possible for the handler to change the values and thereby alter the new position/size. (Question: should this be one event or two separate events?)
*Resized/Moved:* Sent after the context is resized or moved, so that any changes that need to be made as a result can be made. (Question: should this be one event or two separate events?)
I would like to hear suggestions for additional events, and thoughts on those that have been presented.
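The before/after pair could look something like this sketch, where the "before" event carries mutable fields the handler may edit. The struct and function names are invented for illustration:

```cpp
#include <cassert>
#include <functional>

// Sketch of the proposed events as plain structs (names invented).
struct create_event {};
struct destroy_event {};

// Sent *before* the change; the handler may modify the fields to alter
// the final position/size (shown as one combined event here).
struct bounds_changing_event {
    int x, y;          // proposed new position
    int width, height; // proposed new size
};

// Sent *after* the change; fields are read-only facts.
struct bounds_changed_event {
    int x, y, width, height;
};

// The back end would call the handler and then apply the (possibly
// edited) values; demonstrated with a free function here.
inline bounds_changing_event
negotiate_bounds(bounds_changing_event proposed,
                 const std::function<void(bounds_changing_event&)>& handler) {
    if (handler) handler(proposed);
    return proposed; // values the back end actually applies
}
```

Whether position and size travel in one event or two is exactly the open question; the combined form above is just one of the two options.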

On 02/09/10 02:42, Gwenio wrote:
Expanding on my earlier summary, here is a list of some events for the library to support, to get the discussion on the design of the library started:
*Creation:* The first event handled by the context; it is sent so any setup can be performed.
*Destruction:* Sent just before a context is destroyed (destruction finishes once the message is processed); used so the event handler can clean up related resources.
*Resizing/Moving:* Sent before the context is resized or moved; includes the new position and/or size. It should be possible for the handler to change the values and thereby alter the new position/size. (Question: should this be one event or two separate events?)
*Resized/Moved:* Sent after the context is resized or moved, so that any changes that need to be made as a result can be made. (Question: should this be one event or two separate events?)
The current trend for GUIs is for them to be declarative, data-driven, and as automatic as possible. Basically, what would be nice is specifying what data you have, what data you want, and not having to deal with any other detail. The system should then automatically select the most suitable widget, with the suitable disposition and sizes, and resize everything automatically on window resize, etc. What you're suggesting is a fairly low-level widget interface that is far from covering all the things it would need to cover. For this, I'd suggest using an existing library like Qt, and writing your declarative data-driven system on top of it.

On 09/08/2010 08:53 AM, Mathias Gaunard wrote:
The current trend for GUIs is for them to be declarative, data-driven, and as automatic as possible. Basically, what would be nice is specifying what data you have, what data you want, and not having to deal with any other detail. The system should then automatically select the most suitable widget, with the suitable disposition and sizes, and resize everything automatically on window resize, etc.
You are raising a very important point here: as this entire discussion proves, the term "GUI library" is highly ambiguous. While some understand it to mean a library that implements a GUI *engine* (with a wide range of often conflicting requirements), others understand it as an API that is useful to *programmatically interface* with a GUI toolkit.

What application developers care most about is a powerful and convenient API that lets them bind their application logic to a graphical frontend. As such, I think this interface should be minimal, and focus on semantics rather than style (for example, by following the Model-View-Controller paradigm). (Styling from within the library should only be done as much as is required to convey the semantics. Everything else could be done completely outside this API, to give application users, not programmers, control over the styling.) In contrast, some people seem to suggest that the internals of the (to-be-rewritten) GUI engine itself need to be implemented using Boost components.

All this strongly reminds me of the various discussions about a Boost.XML library we had in the past: I argued that the API is the most important part of it (leaving room for third-party implementations to be plugged in), while others argued that the XML parser needs to be written using Spirit, etc. The views didn't necessarily contradict each other, but they demonstrate how different the focus was / is.

Stefan -- ...I still have a suitcase in Berlin...

On Wed, Sep 8, 2010 at 8:53 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Basically, what would be nice, is specifying what data you have, what data you want, and not have to deal with any other detail. The system should then automatically select the most suitable widget, with the suitable disposition and sizes, and resize everything automatically on window resize etc.
That's been my goal over the years. I think I've worked on all the necessary pieces of the solution, but I have never had them all in one place, so surely I have still missed a few details. But I've seen enough to believe that it is doable. Tony

Here are some more details regarding design; this time, some details about what functions are needed to interact with the system.

*Window State:*
- Hidden: The window is invisible to the user and they cannot tell it exists.
- Minimized: The window is invisible to the user, but they can tell it exists and can make it visible.
- Windowed: The window is visible.
- Maximized: A substate of "Windowed", in which the window is not resizable or movable. Rather, it constantly fills the "work area" of the screen.
- Fullscreen: A substate of "Windowed", in which the window is not resizable or movable. Rather, it constantly fills the screen and, when active, covers everything in the non-work-area part of the screen.

Function Listing for Window State:
- show: Makes a window visible without otherwise altering its state.
- hide: Makes a window invisible without otherwise altering its state.
- minimize: Makes a window invisible but leaves something that shows the window exists; this "icon" can be used to restore the window.
- restore: Returns the window to its previous windowed state.
- to_window: Makes a window visible and puts it in a windowed state. If the window is maximized or fullscreen, its normal size and position are restored.
- maximize: Makes a window visible and puts it in a maximized state.
- to_fullscreen: Makes a window visible and puts it in a fullscreen state.

Additionally there would be functions to determine the window's current state, but I will leave what those functions should be to discussion for now. Note: only show and hide are relevant to a control; the rest apply only to windows.
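The states and transitions above can be modeled as a small state machine. This is only a sketch of the semantics described in the listing; the class and the "remember the previous windowed state" rule are my own reading of it, not a settled design:

```cpp
#include <cassert>

// Sketch of the proposed window states (names taken from the listing).
enum class window_state { hidden, minimized, windowed, maximized, fullscreen };

class window {
public:
    // A hidden window reports "hidden" regardless of its stored state.
    window_state state() const { return visible_ ? state_ : window_state::hidden; }

    void show() { visible_ = true; }   // no other state change
    void hide() { visible_ = false; }  // no other state change

    void minimize() { remember(); visible_ = true; state_ = window_state::minimized; }
    void restore()  { visible_ = true; state_ = previous_; } // back to previous windowed state

    void to_window()     { remember(); visible_ = true; state_ = window_state::windowed; }
    void maximize()      { remember(); visible_ = true; state_ = window_state::maximized; }
    void to_fullscreen() { remember(); visible_ = true; state_ = window_state::fullscreen; }

private:
    // Record the state restore() should return to (assumption: minimized
    // itself is never a restore target).
    void remember() { if (state_ != window_state::minimized) previous_ = state_; }
    bool visible_ = true;
    window_state state_ = window_state::windowed;
    window_state previous_ = window_state::windowed;
};
```

In a real back end each transition would of course also talk to the native windowing system; here only the bookkeeping is shown.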

Now for a start to the discussion on event handling.

*Event Loop:*
There are two ways that come to mind: let the application call a message loop/get-message function to retrieve a message, or have the sending of messages handled internally, as in Java. I personally lean toward having a function that is called to process messages, but I would like to hear what advantages and disadvantages people can come up with for each method.

*Event Handling:*
There are several ways that event handling can be implemented. Ultimately it involves data being provided to the back end to route messages to where they need to go. As the target is OO code, data for event handling would need to be given for each window context.

Quick List of (some) Methods:
- Callback Procedure: This is the simplest form; it is a pointer to a function that is to be called. This would be redundant, because the function would need to determine the message type, which would already have been done by the back end to convert the message to the correct form for the callback.
- Abstract Handler Class: This involves having an abstract base class for all event handlers to inherit from. The derived classes then define how to handle each type of event. The only problem with this arises when dealing with binaries compiled separately from the main program, as not all aspects of virtual function calls are standardized. This would only be a problem in some cases, if ever.
- Use Boost Signals: It seems some people believe that Boost.Signals would be a good way to implement event handling, but I am not very familiar with this particular library, so I cannot really comment on it. I would like to hear the reasons for and against going this route, though.
- Descriptive Structure: The last way I know of would be to pass a descriptive object (or a pointer to one) that provides the back end with the information needed to handle events (pointers to related objects and pointers to functions to call, etc.). This is fairly ambiguous, and would need a lot of work to design.

I await comment on these ideas, and for people to present their own ideas so they can be discussed.
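For concreteness, the "abstract handler class" option from the list might look like the sketch below. All names (`abstract_event_handler`, the event structs, `deliver_key`) are invented; the back end would decode the native message once and call the matching virtual:

```cpp
#include <cassert>

// Sketch of the "abstract handler class" option (names invented).
struct mouse_event { int x, y; };
struct key_event { int key_code; };

class abstract_event_handler {
public:
    virtual ~abstract_event_handler() = default;
    // Each hook returns true if the event was consumed; the defaults
    // ignore everything, so derived classes override only what they need.
    virtual bool on_mouse(const mouse_event&) { return false; }
    virtual bool on_key(const key_event&) { return false; }
};

// The back end side: after decoding the native message, it dispatches
// through the base class only.
inline bool deliver_key(abstract_event_handler& h, int code) {
    return h.on_key(key_event{code});
}
```

The cross-binary concern mentioned above applies to the virtual calls here: the vtable layout is implementation-defined, so handler and back end would need to be built with compatible compilers.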

I'm afraid to step into this big discussion, but here goes... On Thu, Sep 2, 2010 at 5:17 PM, Gwenio <urulokiurae@gmail.com> wrote:
Now for a start for discussion on event handling.
*Event Loop:*
There are two ways that come to mind: let the application call a message loop/get-message function to retrieve a message, or have the sending of messages handled internally, as in Java. I personally lean toward having a function that is called to process messages, but would like to hear what advantages and disadvantages people can come up with for each method.
These are the same thing, seen from different points of view. Either roll your own event loop, or pass in your function to the standard supplied event loop, or derive from the standard event loop and implement the virtual onEvent() function. A good library would allow all of these.
From the OS point of view, there *is* an event queue. You can provide layers above that in various ways.
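Tony's point that pull ("get the next message yourself") and push ("the loop calls you back") are layers over the same queue can be shown in a few lines. The `event_queue` type and its members are invented for illustration:

```cpp
#include <cassert>
#include <deque>
#include <functional>

struct event { int id; };

// Sketch: both styles sit on one underlying queue (names invented).
class event_queue {
public:
    void post(event e) { q_.push_back(e); }

    // Pull style: the application drives the loop, one message at a time.
    bool poll(event& out) {
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop_front();
        return true;
    }

    // Push style: built directly on poll(); the library drives the loop
    // and calls the application back for each message.
    void run(const std::function<void(const event&)>& on_event) {
        event e;
        while (poll(e)) on_event(e);
    }

private:
    std::deque<event> q_;
};
```

A library offering both entry points lets the application choose which side drives, which is exactly the "good library would allow all of these" point above.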
Quick List of (some) Methods:
- Callback Procedure: This is the simplest form; it is a pointer to a function that is to be called. This would be redundant, because the function would need to determine the message type, which would already have been done by the back end to convert the message to the correct form for the callback.
- Abstract Handler Class: This involves having an abstract base class for all event handlers to inherit from. The derived classes then define how to handle each type of event. The only problem with this arises when dealing with binaries compiled separately from the main program, as not all aspects of virtual function calls are standardized. This would only be a problem in some cases, if ever.
- Use Boost Signals: It seems some people believe that Boost.Signals would be a good way to implement event handling, but I am not very familiar with this particular library, so I cannot really comment on it. I would like to hear the reasons for and against going this route, though.
- Descriptive Structure: The last way I know of would be to pass a descriptive object (or a pointer to one) that provides the back end with the information needed to handle events (pointers to related objects and pointers to functions to call, etc.). This is fairly ambiguous, and would need a lot of work to design.
Let the Inversion Principle be your guide. The core code needs to call a function to notify someone when something happens. It doesn't care about the rest of the framework. So it requires a boost::function. No more, no less. If that boost::function turns around and calls signals/slots or an abstract framework or whatever, that's fine, but it is not up to that piece of the framework to decide. In fact, "framework" is the problem. It is a "dirty word" in GUI design. There should be no framework. Just pieces that happen to work together. Not because they were designed together, but the opposite - because they were designed separately. So separately that they were made to work with anything, not just some pieces they expected to work with.
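Tony's point can be made concrete in a few lines. The core control stores only "a callable to invoke" (boost::function in the discussion; std::function below); whether that callable forwards to signals/slots, a framework, or a plain lambda is not this piece's concern. The `button_core` name and members are invented for illustration:

```cpp
#include <cassert>
#include <functional>

// The core widget knows only "call this when clicked"; it has no idea
// what sits behind the callable (std::function stands in for the
// boost::function of the discussion; names are invented).
class button_core {
public:
    void on_click(std::function<void()> f) { click_ = std::move(f); }

    // Called by the back end when it decodes a click on this control.
    void notify_clicked() { if (click_) click_(); }

private:
    std::function<void()> click_;
};
```

Because the dependency points only at the callable, the same `button_core` works unchanged whether the application wires it to a signal, a member function, or a test stub.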
From another email:
On Thu, Sep 2, 2010 at 5:04 PM, Simonson, Lucanus J < lucanus.j.simonson@intel.com> wrote:
Have you ever written a GUI framework before? Have you ever done cross platform applications (with a gui) that work with both Mac and Windows, for example? What application domain are you targeting? Games have pretty different requirements from office type applications. Before you attempt to improve on the state of the art be sure you have mastered the state of the art and are in a position to make the right calls on what the next steps are.
20+ years of writing GUIs. I'm on my 4th or 5th or more now, depending on how you count. Some are backed by OS controls, some are independent, some mimic and fit into the OS system, some require their own framework. Some are "give me a place to put pixels and I'll do the rest", cross-platform, desktop, mobile, etc. Even Java (sadly :-). This is a _hard_ problem. Is it doable? Yes. If I had the time/money I'd tackle it. But I think it would take at least 3 great programmers and 3 years of work. That's what, at least $1 million? And maybe that doesn't even count the edit box. Text editing is a nightmare. (It happens to be something else I worked on for years and years. I do NOT want to do it again. One of the other guys would need to do that part!) Maybe now with ICU and FreeType and whatever it would not be as bad, but I suspect it is still a nightmare. There is a chance you could do it a piece at a time. That _would_ help keep each piece independent - each piece would need to work with existing GUIs and somehow be a positive addition to the current state of GUIs. That way people would start to want more.

More random thoughts:
- A check box with an isChecked() function is *wrong*. Ask the model, not the view. In fact, I think we should spend more time defining a Model library (_start_ with Adobe's Adam stuff). Given a good description of the model, the view becomes easier. Hand me your model and I can automatically build an aesthetically pleasing GUI for you. That's the "holy grail" of GUI, for me at least, and I think it is now doable. It can't be the only option, but it would be a welcome addition to existing GUIs, I would think.
- A layout engine should take "Layoutable" objects or something more generic, not Controls. Split these concepts. I've worked on layout engines that worked with rectangles - those were reusable engines. I've worked on engines that required virtual functions in the base class of the control hierarchy. Those are *not* reusable.
Again, Inversion Principle - a layout engine should take what it needs (no more, no less), not what you happen to have.
- Does a Button class create its view/window on construction, or separately? Some systems, MFC-like, allow:

    struct MyDialog : Dialog
    {
        Button ok;
        Button cancel;
        Text message;
    };

You can create a Button without any "window" backing it. Instead you need to call Button.createWindow() separately. I'm not a big fan of this, but this is one way. The other is that when you call ok = new Button(), the HWND (or whatever) is created. If the HWND goes away, so does the Button - what does that mean for your pointer? shared_ptr and/or weak_ptr can help here.
- Even a simple class like Button should be broken into a number of pieces. I.e. button behaviour (how it interacts with mouse and keyboard - if you click down on the button, then hold down the space bar, then move the mouse outside the button (while still held down), does the button still look pressed?) is separate from button drawing. So why are they in the same class? I'd prefer more templates and CRTP and simple free functions. Mix them together to build controls.
- A good GUI would be bindable to other languages - Python, Lua, Java, JavaScript, ActionScript, ... Lots of GUI-ish logic is often easier in a scripting language. Of course, exposing templates is hard to do. As is multiple inheritance. There would be limitations depending on the language.
- Adam/Eve is a nice language for declarative GUI. We also had a DSEL version, which was nice (particularly for auto-layout, since the model often lives in the code). But having the script outside the code means the UI designers can make it look good without touching the code. A GUI builder is nice. Maybe we can convince Adobe to add one to ASL.

Don't write a framework. Write some good, reusable, useful, novel, and replaceable pieces of GUI code that can be used now. I understand that a GUI "framework" that we could all use would be a "win". But that's just too daunting. What's the biggest "win" that could be added to existing frameworks? Model + auto layout? Something else?

Tony
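The "behaviour separate from drawing" idea, including the pressed-look question Tony raises, can be sketched with CRTP. The class names and the exact press/arm rules here are my own illustration, not anything from an existing library:

```cpp
#include <cassert>

// Pressed-state logic lives in the behavior policy; the derived class
// supplies only drawing and the click action (names invented).
template <class Derived>
class button_behavior {
public:
    void mouse_down_inside() { armed_ = true; pressed_ = true; repaint(); }

    // While armed, the button looks pressed only while the pointer is inside.
    void mouse_moved(bool inside) {
        if (armed_ && pressed_ != inside) { pressed_ = inside; repaint(); }
    }

    // A click fires only if the release happens inside the button.
    void mouse_up(bool inside) {
        bool fire = armed_ && inside;
        armed_ = pressed_ = false;
        repaint();
        if (fire) static_cast<Derived&>(*this).clicked();
    }

    bool looks_pressed() const { return pressed_; }

private:
    void repaint() { static_cast<Derived&>(*this).draw(pressed_); }
    bool armed_ = false, pressed_ = false;
};

// Drawing-only half, mixed in via CRTP.
class my_button : public button_behavior<my_button> {
public:
    int draws = 0, clicks = 0;
    void draw(bool /*pressed*/) { ++draws; }
    void clicked() { ++clicks; }
};
```

A differently skinned button reuses `button_behavior` untouched; only `draw()` changes, which is the separation being argued for.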

All first-class GUI libraries have had commercial backing, with multiple full-time development and testing staff, going for several years. It's an insane amount of work, however you slice it.
If we would like to see a standardized C++ GUI some day (and I personally would), then someone has to do that design and implementation. As I understand it, the committee can't afford it. In my opinion Boost is the right place, because it's the only library that has influenced the standard, as far as I know. *Creation:*
The first event handled by the context; it is sent so any setup can be preformed.
*Destruction:* Sent just before a context is destroyed (destruction finishes once the message is processed); used so the event handler can clean up related resources.
Can you give an example of when you need these events? Why can't you do initialization where you create the context and cleanup where you destroy it? Probably the 'resources' will be stored in some object that is responsible for the context, so binding their lifetimes sounds logical. I don't want double construction/destruction as in MFC/WTL. *Resizing/Moving:*
[...] (Question: should this be one event or two separate events?)
It should be one event. Resizing and moving can be tied together when you implement docking, for example. For simplicity you may separate them if there is no handler for the unified version. *Event Loop:*
[...] I personally lean toward having a function that is called to process messages, but would like to hear what advantages and disadvantages people can come up with for each method.
That's right, because event passing should be more complicated than just calling the right callback function. Consider keyboard input. The keyboard is a shared resource, therefore keyboard events must be forwarded from the focused window up the chain of its parents until some handler marks explicitly that it handled the event. I'm not familiar with any framework/OS which handles keyboard input this way (if you know one, please tell me), but it seems that this approach would eliminate the necessity of message filtering to implement accelerators, for example. Y. Galka

On Sat, Sep 4, 2010 at 4:44 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
That's right, because event passing should be more complicated than just calling the right callback function. Consider keyboard input. The keyboard is a shared resource, therefore keyboard events must be forwarded from the focused window up the chain of its parents until some handler marks explicitly that it handled the event. I'm not familiar with any framework/OS which handles keyboard input this way (if you know one, please tell me), but it seems that this approach would eliminate the necessity of message filtering to implement accelerators, for example.
Messages should be sent down from parent to child, then back up. Some systems do parent down. Others do child up. Both end up needing hacks to handle the exceptional cases. So do both. See ActionScript 3.0. http://www.adobe.com/devnet/actionscript/articles/event_handling_as3_03.html Tony
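The "do both" dispatch Tony describes, modeled on the ActionScript 3.0 event flow he links to, can be sketched as a two-phase walk over the parent chain. The `node` type and handler shape are invented for illustration:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Sketch of two-phase dispatch: parent-down ("capture"), then child-up
// ("bubble"), stopping as soon as a handler consumes the event.
struct node {
    node* parent = nullptr;
    std::function<bool()> capture; // return true to consume in capture phase
    std::function<bool()> bubble;  // return true to consume in bubble phase
};

inline bool dispatch(node& target) {
    std::vector<node*> chain; // target up to root
    for (node* n = &target; n != nullptr; n = n->parent) chain.push_back(n);
    // Capture phase: root down to target.
    for (auto it = chain.rbegin(); it != chain.rend(); ++it)
        if ((*it)->capture && (*it)->capture()) return true;
    // Bubble phase: target back up to root.
    for (node* n : chain)
        if (n->bubble && n->bubble()) return true;
    return false;
}
```

An ancestor that must see the event first (a global shortcut, a macro recorder) registers a capture handler; everything else uses the bubble phase, so the common child-first case stays the default.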

Can you give me a real world example where you need the "capturing" phase for any reason other than a hack for some other design problem? I just can't think of any. Thank you. On Sat, Sep 4, 2010 at 13:57, Gottlob Frege <gottlobfrege@gmail.com> wrote:
On Sat, Sep 4, 2010 at 4:44 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
That's right because event passing should be more complicated than just calling the right callback function. Consider keyboard input. Keyboard is a shared resource, therefore keyboard events must be forwarded from the focused window up the chain of its parents until some handler marks explicitly that it handled the event. I'm not familiar with any framework/OS which handles keyboard input this way (if you know, tell me please), but it seems that this approach will eliminate the necessity of message filtering to implement accelerators, for example.
Messages should be sent down from parent to child, then back up. Some systems do parent down. Others do child up. Both end up needing hacks to handle the exceptional cases. So do both. See ActionScript 3.0.
http://www.adobe.com/devnet/actionscript/articles/event_handling_as3_03.html
Tony _______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Sat, Sep 4, 2010 at 8:40 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
Can you give me a real world example where you need the "capturing" phase for any reason other than a hack for some other design problem? I just can't think of any. Thank you.
http://www.adobe.com/devnet/actionscript/articles/event_handling_as3_03.html
- Global Shortcuts (like in Visual Studio)
- Macro recording, both the start/stop shortcuts and the recording of inputs
- Automation
- hacks
- unknowns
For a framework, hacks and unknowns are valid reasons. The reason I detest MFC is that it was inflexible beyond anything more than they had planned for; i.e. standard UIs were easy, anything else was near impossible. I know that any framework that did child-first ended up needing a parent-first hack, and anything that was parent-first needed child-first sometimes. Unfortunately I can't remember the particular reasons, but it seemed to always happen. I do agree that events should go to the child first almost always. It's those other cases that I'm concerned about. Tony

I'm just an observer on these lists, but talk of a cross-platform GUI library always catches my attention. I thought I should post a link to OMGUI, http://omgui.sourceforge.net/ , since it hasn't been mentioned yet. It looked to be shaping up to be a very nice cross-platform library; unfortunately, work on it seems to have stopped. It could be worth reviving. Kind regards, Eoin On Mon, Sep 6, 2010 at 1:15 AM, Gottlob Frege <gottlobfrege@gmail.com> wrote:
On Sat, Sep 4, 2010 at 8:40 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
Can you give me a real world example where you need the "capturing" phase for any reason other than a hack for some other design problem? I just can't think of any. Thank you.
http://www.adobe.com/devnet/actionscript/articles/event_handling_as3_03.html
- Global Shortcuts (like in Visual Studio) - Macro recording, both the start/stop shortcuts, and the recording of inputs - Automation - hacks - unknowns
For a framework, hacks and unknowns are valid reasons. The reason I detest MFC is that it was inflexible beyond anything more than they had planned for; i.e. standard UIs were easy, anything else was near impossible.
I know that any framework that did child first ended up needing a parent-first hack, and anything that was parent-first needed child-first sometimes. Unfortunately I can't remember the particular reasons, but it seemed to always happen.
I do agree that events should go to the child first almost always. It's those other cases that I'm concerned about.
Tony

If you really want modern C++ GUI programming, take a look at gtkmm. It uses an implementation of signals similar to the one Boost has. It uses the standard iterator paradigm everywhere it can. There is a good implementation of UTF-8 strings that mimics std::string. The design of gtkmm is really good, far more powerful than Qt's. The only drawback is that there are a lot of bugs in the Win32 and Mac OS implementations that need to be addressed. Where to find it: http://www.gtkmm.org

On Mon, Sep 6, 2010 at 6:59 AM, ecyrbe <ecyrbe@gmail.com> wrote:
The design of gtkmm is really good, far more powerful than Qt.
Could you qualify that? Which aspects of gtkmm's design are "more powerful" than Qt's analogues? David -- David Sankel Sankel Software www.sankelsoftware.com 585 617 4748 (Office)

I've taken a look at the libraries mentioned in this thread. My comments:
- main/WinMain/whatever (Notus, OMGUI): The library shall not implement main() for me. I may want to use it as a sub-library in my library. To eliminate the need for writing an #ifdef'ed main in standalone apps, it's better to use a macro that expands to the correct entry function.
- Static event handling tables (OMGUI, gtkmm): This is a "what you don't use you don't pay for" issue. In most cases I don't need to do dynamic handler connect/disconnect. So, if there are N copies of my custom-omega-super-control with M event handlers, I want the memory footprint to be O(N+M), not O(NM).
- Model/View (OMGUI, gtkmm): As Tony already noted: "Ask the model, not the view". A checkbox with an isChecked() function is wrong.
Actually, both these libraries implement a multi-platform interface to low-level primitives. They don't simplify GUI development. Really, I don't see how they are better than e.g. wxWidgets. They all (including wxWidgets and Qt) just mimic the OS interface in a portable way and add a bit of sugar for object-oriented event handling. Both Notus and Adam & Eve sound promising in the last respect. However, Adam & Eve aim to simplify dialog development only; they're not a general GUI framework, right? On Thu, Sep 2, 2010 at 11:53, Thomas Klimpel <Thomas.Klimpel@synopsys.com> wrote:
There are also lots of other C++ GUI systems around, none of which I have ever found personally flexible enough.
Still, why start yet another imperfect GUI system then? Wouldn't it make more sense to help one of the existing GUI systems to become more perfect (or at least perfect enough to be acceptable)?
Because the design of existing GUI systems is rotten. 6 out of 7 general-purpose GUI libraries mentioned in this thread actually have the same 20-year-old design (see Model/View above). You can't radically change the design of an existing project; it's either very hard or people won't agree. On Thu, Sep 2, 2010 at 22:09, Binglong Xie <binglongx@gmail.com> wrote:
2). Design tool support. Crafting GUI with lines and lines of hand written C++ code may not scale for complex GUI (unless C++ reaches that expressiveness of a much higher level). The C++ GUI framework will need to have a matching GUI design tool. [...] None of above is easy.
Assuming you have solved all the other problems, this one becomes a piece of cake.
[...] Maybe lessons from Java could be learned. AWT vs. Swing, native look&feel vs. uniform look&feel blah blah. Each one has lovers and haters. [...]
On Thu, Sep 2, 2010 at 23:04, Simonson, Lucanus J < lucanus.j.simonson@intel.com> wrote:
What application domain are you targeting? Games have pretty different requirements from office type applications.
These two are good metrics for the generality of the library. It should be possible to easily switch the back ends, e.g. changing from native look & feel to uniform look & feel. The same is true for GUIs for games. How exactly do games have different requirements? It's the same concepts in the end. Yakov

On Mon, Sep 6, 2010 at 1:26 PM, Yakov Galka <ybungalobill@gmail.com> wrote:
I've taken a look at the libraries mentioned in this thread. My comments:
- main/WinMain/whatever (Notus, OMGUI): The library shall not implement main() for me. I may want to use it as a sub-library in my library. To eliminate the need for writing an #ifdef'ed main in standalone apps, it's better to use a macro that expands to the correct entry function.
- Static event handling tables (OMGUI, gtkmm): This is a "what you don't use you don't pay for" issue. In most cases I don't need to do dynamic handler connect/disconnect. So, if there are N copies of my custom-omega-super-control with M event handlers, I want the memory footprint to be O(N+M), not O(NM).
- Model/View (OMGUI, gtkmm): As Tony already noted: "Ask the model, not the view". A checkbox with an isChecked() function is wrong. Actually, both these libraries implement a multi-platform interface to low-level primitives. They don't simplify GUI development. Really, I don't see how they are better than e.g. wxWidgets. They all (including wxWidgets and Qt) just mimic the OS interface in a portable way and add a bit of sugar for object-oriented event handling.
I agree nothing should be done regarding the entry point. Yes, it is more efficient to associate the objects with a shared event handler than to have one event handler per object. For this initial stage, I am sticking to the "multi-platform interface". Higher-level stuff will come later, if a design can be agreed on.
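The O(N+M) point can be illustrated by keeping one handler table per control *type*, shared by all instances, instead of per-instance connection lists. This is only a sketch of the memory-layout argument; `my_control` and its members are invented names:

```cpp
#include <cassert>
#include <map>
#include <vector>

// N instances share one static table of M handlers: O(N+M) storage,
// rather than each instance carrying its own copies (O(NM)).
class my_control {
public:
    using handler = void (*)(my_control&);

    // The table lives once per control type, not per instance.
    static std::map<int, std::vector<handler>>& table() {
        static std::map<int, std::vector<handler>> t;
        return t;
    }
    static void subscribe(int event_id, handler h) { table()[event_id].push_back(h); }

    // Events are raised against an instance, but routed via the shared table.
    void raise(int event_id) {
        auto it = table().find(event_id);
        if (it == table().end()) return;
        for (handler h : it->second) h(*this);
    }

    int value = 0; // per-instance state the handlers act on
};
```

The trade-off is exactly the one Yakov names: this layout gives up cheap per-instance connect/disconnect in exchange for the smaller footprint.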

On Mon, Sep 6, 2010 at 1:26 PM, Yakov Galka <ybungalobill@gmail.com> wrote:
Both Notus and Adam & Eve sound promising in the last respect. However, Adam & Eve aim to simplify dialog development only; they're not a general GUI framework, right?
Eve is a layout solution. It can be used in dialogs or in the main window UI or whatever. But no, it is not a framework, or an event handler, or ... Adam is a model language. It is fairly targeted at solving the "bring up a dialog to get a bunch of info, which turns into params to pass to a function to do something" case. I do believe that it could evolve into a more generic description of the model of the entire UI of a program. Or separate scripts for each part of the UI, more likely. Again, not a GUI framework. Tony

On Sun, Sep 5, 2010 at 12:55, Pierre Morcello <pmorcell-cppfrance@yahoo.fr>wrote:
This is offlist. [...] The STL (written mostly by Stepanov) influenced it even more.
Yes. It's just such a natural part of the standard nowadays that it's easy to forget. :) On Mon, Sep 6, 2010 at 02:15, Gottlob Frege <gottlobfrege@gmail.com> wrote:
On Sat, Sep 4, 2010 at 8:40 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
Can you give me a real world example where you need the "capturing" phase for any reason other than a hack for some other design problem? I just can't think of any. Thank you.
http://www.adobe.com/devnet/actionscript/articles/event_handling_as3_03.html
- Global Shortcuts (like in Visual Studio)
This one is wrong! Let me describe in detail how I see the process of handling keyboard shortcuts, commands (button clicks), etc.:
1) The event is sent to the focused window (e.g. a control in my XYZ document viewer).
2) Every event handler marks explicitly whether it handles the event or not.
3) If the event is handled, no more processing is done (e.g. the control is a 'find' textbox and I pressed Ctrl+Left Arrow to move the cursor one word left).
4) Otherwise the event is passed to the parent (e.g. the control is a 'case-sensitive' checkbox).
5) The parent processes the shortcut key for 'Previous chapter'.
If the user intends to execute the 'Previous chapter' command but (3) happens, she must transfer the focus to the parent manually. That's the desired behavior! The point is that the implementer of the parent can't know whether someone in the future will add a grand-grandchild that wants this event. If you implement the shortcuts with parent processing first, then you make it *impossible* for the user to use the child's shortcuts. Of course, the problem is in (1): in a given system (e.g. Windows) there is no general way to check whether the control processed the message or not. That's why you need hacks, and that's why I want a portable GUI library to do as much as possible towards implementing this behavior.
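The child-first routing in steps 1-5 fits in a few lines: the focused window sees the key first, and only unhandled keys travel up the parent chain, so a child can always override an ancestor's shortcut. The `widget` type, `route_key`, and the key code used below are all invented for illustration:

```cpp
#include <cassert>
#include <functional>

// Sketch of child-first keyboard routing (names invented).
struct widget {
    widget* parent = nullptr;
    std::function<bool(int key)> on_key; // return true if the key was handled
};

// Walk from the focused widget up the parent chain until someone
// explicitly marks the key as handled.
inline bool route_key(widget* focused, int key) {
    for (widget* w = focused; w != nullptr; w = w->parent)
        if (w->on_key && w->on_key(key)) return true;
    return false;
}
```

With this rule, a 'find' textbox that consumes Ctrl+Left simply returns true and the parent's 'Previous chapter' accelerator never fires; when a checkbox that ignores the key is focused, the same key reaches the parent, with no message filtering in between.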
- Macro recording, both the start/stop shortcuts, and the recording of inputs
- Automation
I guess these are done on a higher level than just capturing the input. I can't analyze these without a broader look at the implementation. I don't have experience implementing these in any project.
- hacks
- unknowns
For a framework, hacks and unknowns are valid reasons.
Agree. But hacks should remain hacks. I mean you shouldn't make your design less elegant because you want to make hacks easier to implement. Even if the 'unknowns' aren't hacks (as you've already mentioned + see below), the library should make the common case implicit.
[...]
I know that any framework that did child first ended up needing a parent-first hack, and anything that was parent-first needed child-first sometimes. Unfortunately I can't remember the particular reasons, but it seemed to always happen.
Unfortunately I can't remember either :). But I can think of two more cases when you need parent-first:
- "Subclassing" without really subclassing.
- Implementing some "key" indicator in the parent (e.g. handling Caps Lock).
Best regards, Yakov Galka

On Sat, Sep 4, 2010 at 4:44 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
Can you give an example of when you need these events? Why can't you do initialization where you create the context and destruction where you destroy it? Probably the 'resources' will be stored in some object that is responsible for the context, so binding their lifetimes sounds logical. I don't want double construction/destruction as in MFC/WTL.
A creation event may not be useful, but a destruction event could be used to tell whatever is managing the resources (if any) to release them. In the end the usefulness of these events comes down to how the library is designed.

Hi Gwenio, On Mon, Sep 6, 2010 at 22:28, Gwenio <urulokiurae@gmail.com> wrote:
In the end the usefulness of these events comes down to how the library is designed.
Right, so design the library first. On Mon, Sep 6, 2010 at 22:07, Gwenio <urulokiurae@gmail.com> wrote:
[...] For this initial stage, I am sticking to the "multi-platform interface". Higher level stuff will come later, if a design can be agreed to.
Yes, I think that all of us understand your intent, so you don't have to repeat it again. The problem is that there are *already* many low-level libraries with a "multi-platform interface" (wxWidgets, Qt, omgui, gtkmm, and more...). To do a better design you should analyze why their design is wrong and what problems you want to solve, and come up with a design that solves those problems. Almost everything you have said so far doesn't differ conceptually from existing libraries, except the iostream analogy, which is good and is reminiscent of the Notus ideas. I've been on the quest for "how a good GUI library should look" for about a year. However, I can only give counter-examples for a good library; I still don't have a grasp of the whole design. It's good that you opened this conversation here because: 1) As Daniel said, two people is the perfect size of a team to agree on a design; however, making a good design requires much more thought, IMO. 2) Some libraries that people mentioned here and I wasn't aware of added to my understanding of the topic (again: Notus, Adam & Eve). My claim is that you should think of the higher-level stuff and the low-level will come later. Think of event passing and model/view binding. Think of what concepts should be there to make it possible for the same logic to operate with a dialog with a "native" look, a "uniform" look, or even one drawn with OpenGL on a single canvas in some computer game menu. How do you design these so that all three can co-exist in a single application? Yakov

On Mon, Sep 6, 2010 at 5:09 PM, Yakov Galka <ybungalobill@gmail.com> wrote:
Yes, I think that all of us understand your intent, so you don't have to repeat it again. The problem is that there are *already* many low-level libraries with a "multi-platform interface" (wxWidgets, Qt, omgui, gtkmm, and more...). To do a better design you should analyze why their design is wrong and what problems you want to solve, and come up with a design that solves those problems. Almost everything you have said so far doesn't differ conceptually from existing libraries, except the iostream analogy, which is good and is reminiscent of the Notus ideas.
I've been on the quest for "how a good GUI library should look" for about a year. However, I can only give counter-examples for a good library; I still don't have a grasp of the whole design. It's good that you opened this conversation here because: 1) As Daniel said, two people is the perfect size of a team to agree on a design; however, making a good design requires much more thought, IMO. 2) Some libraries that people mentioned here and I wasn't aware of added to my understanding of the topic (again: Notus, Adam & Eve).
My claim is that you should think of the higher-level stuff and the low-level will come later. Think of event passing and model/view binding. Think of what concepts should be there to make it possible for the same logic to operate with a dialog with a "native" look, a "uniform" look, or even one drawn with OpenGL on a single canvas in some computer game menu. How do you design these so that all three can co-exist in a single application?
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of the higher-level parts to be limited to what would be required to implement them. For Boost and the Standard Library, a good GUI design would be usable in as many types of applications as possible, while a good design for the back end will support as many GUI designs as possible.

On Tue, Sep 7, 2010 at 02:16, Gwenio <urulokiurae@gmail.com> wrote:
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of higher level parts to be limited to what would be required to implement them.
Then *please*, explain how your library will differ from e.g. gtkmm? link: http://www.gtkmm.org/en/

On Tue, 7 Sep 2010 09:47:03 +0200, Yakov Galka wrote:
On Tue, Sep 7, 2010 at 02:16, Gwenio <urulokiurae@gmail.com> wrote:
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of higher level parts to be limited to what would be required to implement them.
Then *please*, explain how your library will differ from e.g. gtkmm?
link: http://www.gtkmm.org/en/
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'. I'd hope that any solution we envisage would use the platform native widgets to render itself just as SWT does for Java. Alex -- Easy SFTP for Windows Explorer (http://www.swish-sftp.org)

On Tue, Sep 7, 2010 at 8:40 AM, Alexander Lamaison <awl03@doc.ic.ac.uk>wrote:
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'. I'd hope that any solution we envisage would use the platform native widgets to render itself just as SWT does for Java.
Getting a native look would be difficult; ideally you would have a set of functions that provide the functionality to draw in the native style. The other way would be to have a wrapper for native controls and some method to adjust how they function. The latter option would result in there being separate branches of the library for native and custom controls.

On Tue, 7 Sep 2010 10:19:23 -0400, Gwenio wrote:
On Tue, Sep 7, 2010 at 8:40 AM, Alexander Lamaison <awl03@doc.ic.ac.uk>wrote:
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'. I'd hope that any solution we envisage would use the platform native widgets to render itself just as SWT does for Java.
Getting a native look would be difficult;
Why? It's been done well in the past. Just not for C++ (if you exclude the C++ 'port' of SWT)
ideally you would have a set of functions that provide the functionality to draw in the native style.
Yuk. This approach doesn't work well. Every time the native style changes you have to update your fake widgets. A maintenance nightmare! Not to mention that it's almost impossible to simulate native widgets faithfully.
The other way would to have a wrapper for native controls and some method to adjust how they function.
This is how SWT, Adam/Eve, etc. work though I'm not sure what you mean by adjust how they function.
The latter option would result in there being separate branches of the library for native and custom controls.
If there are native controls, why bother with custom controls? (I also don't see what 'branches' have to do with anything - do you mean 'option'?) Alex -- Swish - Easy SFTP for Windows (http://www.swish-sftp.org)

On Tue, Sep 7, 2010 at 11:41 AM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
On Tue, 7 Sep 2010 10:19:23 -0400, Gwenio wrote:
On Tue, Sep 7, 2010 at 8:40 AM, Alexander Lamaison <awl03@doc.ic.ac.uk>wrote:
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'. I'd hope that any solution we envisage would use the platform native widgets to render itself just as SWT does for Java.
Getting a native look would be difficult;
Why? It's been done well in the past. Just not for C++ (if you exclude the C++ 'port' of SWT)
ideally you would have a set of functions that provide the functionality to draw in the native style.
Yuk. This approach doesn't work well. Every time the native style changes you have to update your fake widgets. A maintenance nightmare! Not to mention that it's almost impossible to simulate native widgets faithfully.
And yet many people do want a native look, so if it's not using native controls already, I think building it with enough theme support to make this work is imperative.
The other way would to have a wrapper for native controls and some method to adjust how they function.
This is how SWT, Adam/Eve, etc. work though I'm not sure what you mean by adjust how they function.
I think that using native controls, while much simpler, would severely handcuff us in terms of how well we could design the library. A clean break from any native controls would give us many more options. I know that saying "WPF" will turn some people off immediately, but by ditching the native controls they were able to make everything vector-based, with great extensible functionality, almost no restrictions on where you put things (e.g. you can put a ListView inside a button if you want), and incredible support for customizing the look and feel at every level, from global themes, to customizing how the items of a collection appear, to making a single button have an obnoxious 90's web look. I would hope that any new GUI framework would have such functionality too. If it's not going to enable a radically better UI, then there isn't much point in switching from established frameworks like wx and Qt, even if they do use C++ rather poorly.
The latter option would result in there being separate branches of the library for native and custom controls.
If there are native controls why bother with custom controls. (I also don't see what 'branches' have to do with anything - do you mean 'option')?
Not all native controls exist on all platforms, and not all native controls have the same functionality. If we didn't want to limit the library to the small subset of controls and functionality that exist on all platforms, we'd eventually need to write some custom controls. -- Cory Nelson http://int64.org

On Tue, 7 Sep 2010 12:50:50 -0700, Cory Nelson wrote:
On Tue, Sep 7, 2010 at 11:41 AM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote: On Tue, 7 Sep 2010 10:19:23 -0400, Gwenio wrote:
On Tue, Sep 7, 2010 at 8:40 AM, Alexander Lamaison <awl03@doc.ic.ac.uk>wrote:
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'. I'd hope that any solution we envisage would use the platform native widgets to render itself just as SWT does for Java.
Getting a native look would be difficult;
Why? It's been done well in the past. Just not for C++ (if you exclude the C++ 'port' of SWT)
ideally you would have a set of functions that provide the functionality to draw in the native style.
Yuk. This approach doesn't work well. Every time the native style changes you have to update your fake widgets. A maintenance nightmare! Not to mention that it's almost impossible to simulate native widgets faithfully.
And yet many people do want a native look, so if it's not using native controls already, I think building it with enough theme support to make this work is imperative.
Absolutely! I simply wouldn't touch a GUI library that didn't render native widgets. I'd be too embarrassed to release software that had an amateur look&feel.
The other way would to have a wrapper for native controls and some method to adjust how they function.
This is how SWT, Adam/Eve, etc. work though I'm not sure what you mean by adjust how they function.
I think that using native controls, while much simpler, would severely handcuff us in terms of how great we could design the library. A clean break from any native controls would give us many more options.
I know that saying "WPF" will turn some people off immediately
Not me. But by all accounts the effort involved in developing the 'windowless' WPF GUI library was extreme. Possibly as great as all other Boost libraries combined. (If I had time I'd find the quotes from MS insiders to back this up. Raymond Chen said something pertinent to this at one point.)
If there are native controls why bother with custom controls. (I also don't see what 'branches' have to do with anything - do you mean 'option')?
Not all native controls exist on all platforms, and not all native controls have the same functionality. If we didn't want to limit the library to the small subset of controls and functionality that exist on all platforms, we'd eventually need to write some custom controls.
The overlap is greater than you think, but you have a valid point. SWT's approach to this is to custom-draw a control if and only if it doesn't exist on the platform in question and can't be created by aggregating two or more other controls. This is an acceptable approach as it is _not_ simulating a native control: there is no native control to simulate. I know I keep harping on about SWT, but it really is a *fantastic* GUI library that gives cross-platform, best-of-all-worlds, native GUIs rather than lowest-common-denominator support or custom-drawn nonsense. My idea of the ideal Boost.GUI library would be something in the vein of SWT but with a much nicer interface. For those of you who hate the idea of manipulating individual controls and want to program at a higher level, look at the relationship between SWT and JFace. JFace creates common UI elements in a model-based manner and is built on top of SWT. SWT in no way depends on any aspect of JFace. The same should apply to any higher-level GUI framework in Boost. It should build on top of an SWT-style layer and, in my opinion, they should be separate libraries. Alex -- Easy SFTP for Windows Explorer (http://www.swish-sftp.org)
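The SWT-style fallback rule described here (wrap the native widget when one exists, custom-draw only when it doesn't) can be sketched as a small factory. All names below are illustrative, and the `supports()` table is a stand-in for a real platform query:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical control hierarchy for the sketch.
struct Control {
    virtual ~Control() = default;
    virtual bool is_native() const = 0;
};

struct NativeControl : Control {       // wraps a platform widget
    bool is_native() const override { return true; }
};

struct CustomDrawnControl : Control {  // drawn by the library itself
    bool is_native() const override { return false; }
};

// A platform backend reports which control kinds it supports natively.
// The list here is made up for illustration.
struct Backend {
    bool supports(const std::string& kind) const {
        return kind == "button" || kind == "listview";
    }
};

// Prefer the native widget; fall back to custom drawing if and only if
// the platform has no native equivalent.
std::unique_ptr<Control> make_control(const Backend& b, const std::string& kind) {
    if (b.supports(kind))
        return std::make_unique<NativeControl>();
    return std::make_unique<CustomDrawnControl>();
}
```

The point of the factory shape is that the fallback decision is made in one place, so application code never needs to know which path was taken.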

On Tue, Sep 7, 2010 at 5:38 PM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
Absolutely! I simply wouldn't touch a GUI library that didn't render native widgets. I'd be too embarrassed to release software that had an amateur look&feel.
Lots of Adobe software is written with a non-native UI. It is not typically described as amateur. What they have is a 'look' that tends to be somewhere between Mac and Windows, with some things possibly skinned per-platform, but most of it with just an in-between Adobe look. As long as it looks good, it doesn't need to be native. Note also that people are now accustomed to web sites - each has its own look and feel.

On Tue, Sep 7, 2010 at 8:32 PM, Gottlob Frege <gottlobfrege@gmail.com> wrote:
On Tue, Sep 7, 2010 at 5:38 PM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
Absolutely! I simply wouldn't touch a GUI library that didn't render native widgets. I'd be too embarrassed to release software that had an amateur look&feel.
Lots of Adobe software is written with a non-native UI. It is not typically described as amateur. What they have is a 'look' that tends to be somewhere between Mac and Windows, with some things possibly skinned per-platform, but most of it with just an in-between Adobe look.
As long as it looks good, it doesn't need to be native.
Note also that people are now accustomed to web sites - each has its own look and feel.
That is another thing. The Wt C++ Web Toolkit can serve web sites written in C++ in a very nice way, but it can also build standalone apps (a built-in webserver rendered through the Qt HTML control). Even MFC has HTML controls. An HTML-designed GUI allows for proper flowing, wrapping, all kinds of interaction and controls, and much more, assuming the HTML container is fast enough to feel responsive (such as WebKit or Mozilla's, but not IE). We need a good HTML-based GUI engine to make standalone programs.

On Tue, 7 Sep 2010 22:32:18 -0400, Gottlob Frege wrote:
On Tue, Sep 7, 2010 at 5:38 PM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
Absolutely! I simply wouldn't touch a GUI library that didn't render native widgets. I'd be too embarrassed to release software that had an amateur look&feel.
Lots of Adobe software is written with a non-native UI. It is not typically described as amateur. What they have is a 'look' that tends to be somewhere between Mac and Windows, with some things possibly skinned per-platform, but most of it with just an in-between Adobe look.
Funny, I had exactly these apps in mind when I wrote that comment. ;-) They look like they've been written in Flash! Alex -- Easy SFTP for Windows Explorer (http://www.swish-sftp.org)

On Wed, Sep 8, 2010 at 6:47 AM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
Lots of Adobe software is written with a non-native UI. It is not typically described as amateur. What they have is a 'look' that tends to be somewhere between Mac and Windows, with some things possibly skinned per-platform, but most of it with just an in-between Adobe look.
Funny, I had exactly these apps in mind when I wrote that comment. ;-) They look like they've been written in Flash!
Which ones? I mostly worked on the Digital Video and Audio Apps (Premiere, After Effects, etc). Of course parts of those UIs *were* written in Flash (but just small parts). Tony

On Thu, 9 Sep 2010 00:07:08 -0400, Gottlob Frege wrote:
On Wed, Sep 8, 2010 at 6:47 AM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
Lots of Adobe software is written with a non-native UI. It is not typically described as amateur. What they have is a 'look' that tends to be somewhere between Mac and Windows, with some things possibly skinned per-platform, but most of it with just an in-between Adobe look.
Funny, I had exactly these apps in mind when I wrote that comment. ;-) They look like they've been written in Flash!
Which ones? I mostly worked on the Digital Video and Audio Apps (Premiere, After Effects, etc). Of course parts of those UIs *were* written in Flash (but just small parts).
Sorry, been on holiday. One that comes to mind because I just used it is the installer that Adobe seem to use now. I'm not criticising Photoshop etc. They do a good job of trying to look native and I'm sure they have their reasons for not actually using native controls. However, generally it's hard to simulate it well. Alex -- Easy SFTP for Windows Explorer (http://www.swish-sftp.org)

On Fri, Sep 17, 2010 at 10:57 AM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
On Thu, 9 Sep 2010 00:07:08 -0400, Gottlob Frege wrote:
On Wed, Sep 8, 2010 at 6:47 AM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
Lots of Adobe software is written with a non-native UI. It is not typically described as amateur. What they have is a 'look' that tends to be somewhere between Mac and Windows, with some things possibly skinned per-platform, but most of it with just an in-between Adobe look.
Funny, I had exactly these apps in mind when I wrote that comment. ;-) They look like they've been written in Flash!
Which ones? I mostly worked on the Digital Video and Audio Apps (Premiere, After Effects, etc). Of course parts of those UIs *were* written in Flash (but just small parts).
Sorry, been on holiday.
One that comes to mind because I just used it is the installer that Adobe seem to use now. I'm not criticising Photoshop etc. They do a good job of trying to look native and I'm sure they have their reasons for not actually using native controls. However, generally it's hard to simulate it well.
Especially with setups like mine: I use a dark theme on XP, so when I see some light-themed, Vista-ish looking program, it looks *really* poor in quality...

Hello all, A few points on why a GUI library would be impossible to bring into Boost (IMHO). The major problem is religious wars. Let's start:
1. Use "native" widgets or not?
- "Native": which native toolkit do you choose for Unix? Qt? GTK? Motif? Tk?
- Your own vector-based widgets: how would they integrate with the current desktop environment's look and feel? How are you going to implement complex text rendering under Unix? I mean complex text layouts like right-to-left or top-to-bottom, and what about font rendering... etc. etc. etc. Most likely it would be done wrong.
2. What strings should be used? std::string, std::wstring, or custom strings like Qt's QString or GTKmm's ustring?
- std::string/UTF-8: what about the Windows developers who believe the wide API is best?
- std::wstring: what about the Unix developers for whom UTF-8 is natural and wchar_t is wasteful (4 bytes)?
- Custom strings: why use these when C++ already has strings!
- Use both: now everything becomes bloated and the code is doubled, or everything lives in headers or templates.
3. What about the event loop?
- Let's use ASIO as the central event loop! But it is template-bloated.
- Let's use our own custom event loop. But then how do I integrate with ASIO's event loop to support asynchronous networking?
4. What about OS support? Which GUI systems are you going to work with: Windows, X Server? What about cell phones? What about frame buffers?
5. What 3D rendering would you use?
- OpenGL: but Direct3D developers don't like it, and it has bad Windows support (thanks to MS).
- Direct3D: but it is not portable.
- Both: but then you can't write cross-platform code.
- Let's write our own...
6. What about licensing? Can we use LGPL libraries? GPL libraries? Or only BSD/MIT-like ones?
- Only BSD: you will get stuck implementing 99% of the code from scratch (especially stuff like complex text layout, which by no chance would be done right).
- BSD and LGPL: but it is not free enough for 10% of the users!
But hey, those 10% of users use proprietary libraries that are far less free than the GPL... Bottom line ------------ It is not about how a specific widget would look; it is about concepts. Most GUI library developers decide one way or the other; it may be good or bad, but it works. So if you don't like some design, you choose something that works, whether that is Qt with its preprocessor, "native C++" GTK, or even macro-based wxWidgets... or even (oh hell) MFC. Ultimately it gives you a solid and useful codebase, and what is most important: a well-debugged and supported one! A GUI framework is a **VERY** complex project and it is **VERY** hard to do right (see Enlightenment as a great example of how cool stuff may never become useful). More than that, it is not clear what "right" even is... It would never happen in Boost, as Boost tries to please them all. So you can discuss look and feel and other issues for now. Artyom

On 08/09/10 06:28, Artyom wrote:
Let's start:
1. Use "native" widgets or not 2. What strings should be used? std::string, std::wstring, custom string like Qt's QString or GTKmm's ustring? 3. What about event loop: 4. What about OS support. What GUI systems are you going to work with: 5. What 3D rendering would you use?
Let's finish: there is nothing here that a proper concept-based, generic design can't solve using policies, strategies, and ad hoc customization points.
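Joel's point can be made concrete with a tiny policy-based sketch: the contested decisions from Artyom's list (string type, native handle type) become template parameters instead of hard-coded choices. The backend names and members below are invented for illustration, not a proposed design:

```cpp
#include <cassert>
#include <string>
#include <utility>

// One policy per platform backend; each fixes the contested choices.
struct win32_backend {
    using string_type = std::wstring; // wide strings for the Windows API
    using handle_type = void*;        // stands in for HWND
};

struct gtk_backend {
    using string_type = std::string;  // UTF-8 as is natural on Unix
    using handle_type = void*;        // stands in for GtkWidget*
};

// Library types are parameterized on the backend policy, so neither
// string camp has to lose the argument.
template <class Backend>
class basic_window {
public:
    using string_type = typename Backend::string_type;

    explicit basic_window(string_type title) : title_(std::move(title)) {}
    const string_type& title() const { return title_; }

private:
    string_type title_;
};
```

A user would then write `basic_window<gtk_backend>` or `basic_window<win32_backend>`, much as `std::basic_string` resolves the char-type debate through a template parameter.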

Joel Falcou wrote:
On 08/09/10 06:28, Artyom wrote:
Let's start:
0. Come to agreement on what the scope of such a library would be
1. Use "native" widgets or not 2. What strings should be used? std::string, std::wstring, custom string like Qt's QString or GTKmm's ustring? 3. What about event loop: 4. What about OS support. What GUI systems are you going to work with: 5. What 3D rendering would you use?
Let's finish: there is nothing here that a proper concept-based, generic design can't solve using policies, strategies, and ad hoc customization points.
uh-oh - step 0 is just about impossible to accomplish. But the subsequent steps can't be done without it. This current discussion is typical of others which happen from time to time. It's happened with GUIs before. To summarize, it's:
a) Let's build a GUI library
b) Let's agree on what it should be like
c) Let's agree on how it should be implemented
...
It's an approach which is doomed to failure in my opinion:
a) The scope is ever expanding to satisfy "everyone".
b) It's too much design by committee. The result can't help but lack conceptual integrity.
c) It's too much like "waterfall" development. Any effort of such a scale gets feedback as the project progresses.
d) It's too ambitious and requires too many resources.
To make progress in this area, someone has to:
a) define a doable task
b) do it
c) submit it for everyone to pick at
d) go back and re-do it
e) loop until it's accepted.
Not everyone will like it, but it's only important that a significant number of people find it useful. It's still possible or likely that such efforts will fail due to the difficulty of the task and the tenacity required. Robert Ramey

On Tue, 7 Sep 2010 23:37:22 -0800, Robert Ramey wrote:
To make progress in this area. Someone has to:
a) define a doable task b) do it c) submit it for everyone to pick at it d) go back and re-do it e) loop until it's accepted.
+1 Alex -- Easy SFTP for Windows Explorer (http://www.swish-sftp.org)

Robert Ramey wrote:
To make progress in this area. Someone has to:
a) define a doable task b) do it c) submit it for everyone to pick at it d) go back and re-do it e) loop until it's accepted.
Completely agree. Here is my shot at this. I know it's very manageable in scope and would still be useful in my projects, but those aren't very demanding. The question is whether people think it could be more broadly useful. Initially the library would simply provide a general method for binding data and broadly specifying layout. To actually render the controls, it would internally wrap an existing third-party library and provide a way of accessing the underlying controls for low-level display settings (actually, it would offer a choice of existing third-party libraries to make it easy to use in existing projects). To give a quick idea of what I'm proposing, the closest analog seems to be data binding in WPF, though I'm not terribly familiar with that, so people may know of better examples. The basic idea is that any class that models something like the property map concept or iterator concept would automatically have a reasonable default view, which could also be completely changed. More complex dialogs could easily be created by nesting these types inside each other. A view basically consists of a list of subviews (e.g. particular controls) and a layout for them (e.g. a vertical list). I think this is a fairly well-established idea already. What would be nice is if I could define default views for arbitrary classes through overloading or template specialization.
For example, suppose I had an existing employee class:

struct Employee { int id; string name; };

Then defining a default view might go something like this:

View default_view(Employee& e)
{
    View v;
    v.set_layout(vertical_list); // accepts certain predefined layouts or an arbitrary function
    v << e.id;                            // add the first subview; this calls default_view<int>
    v.add_sub_view(default_view(e.name)); // the previous line could equivalently be written like this
    return v;
}

Suppose I have an object of type Employee:

Employee employee = { 999, "Joe Smith" };

I could then call show(employee), or equivalently show(default_view(employee)), to pop up a window for display/editing. There are a few nice things about this approach:
1. I can define a view for my data without modifying the class definition.
2. It requires almost no boilerplate code.
3. It's easy to define views for complex types as long as views for the subtypes are defined.
4. Most importantly, IMO, a lot of very powerful things can be done with iterators.
5. If an object models something like the Boost property map concept, it automatically has a reasonable default view.

For the last point, consider the following alternative definition of an Employee class:

// declare the class
typedef property<employee_id_t, int,
        property<employee_name_t, string> > Employee;

// create and populate
Employee employee;
put(employee_id, employee, 999);
employee[employee_name] = "Joe Smith"; // alternate syntax

The nested template syntax makes it possible to automatically loop over the members to create a view with no work by the user; default_view(employee) would be automatically defined. An existing class could get the same behavior by specializing a few functions (put, get, etc.).

Iterators: the library would define a default view for any container that can provide an iterator, such as std::vector, std::list, etc. So I could easily define a Department class that has a list of employees:

typedef property<dept_code_t, int,
        property<dept_name_t, string,
        property<dept_employees_t, list<Employee> > > > Department;

Calling show(department) would create a dialog with a list box of employees. Double-clicking on an employee could automatically pop up a more detailed dialog with no event-handling code required from the user. If I have a Company class that has a list of Departments, I could easily select among several standard views for complex iterators:
- a tabbed dialog with a page for each department
- a department combo box that controls an employee list box
- record selectors to flip through departments, along with a search function
This is getting long, so I'll just add that I have some idea of how events could be incorporated as well. If people are interested in this approach we can discuss it more. Alec
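As a rough, compilable illustration of this proposal (the `View` type, the `kind` strings, and the overload set are invented for this sketch, not Alec's actual code), default views can be composed from leaf views, and any container of viewable elements picks up a list view for free:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A view is just a labelled tree of subviews in this sketch; the
// 'kind' string stands in for a real control/layout description.
struct View {
    std::string kind; // e.g. "spinbox", "textbox", "list"
    std::vector<View> subviews;
};

// Leaf views for primitive types.
inline View default_view(int)                { return {"spinbox", {}}; }
inline View default_view(const std::string&) { return {"textbox", {}}; }

// Any vector of viewable elements becomes a list of item views.
template <class T>
View default_view(const std::vector<T>& items) {
    View v{"list", {}};
    for (const T& item : items)
        v.subviews.push_back(default_view(item));
    return v;
}

struct Employee { int id; std::string name; };

// A user-supplied view for a custom type, composed from the leaf views.
inline View default_view(const Employee& e) {
    View v{"vertical_list", {}};
    v.subviews.push_back(default_view(e.id));
    v.subviews.push_back(default_view(e.name));
    return v;
}
```

Here a `std::vector<Employee>` gets a list view automatically, with each item rendered by the `Employee` overload, mirroring points 3-5 of the proposal; a real library would presumably render the resulting tree through whatever backend it wraps.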

Yes, I'm interested. Also, please explain how the above approach will be customizable (e.g. if I want to put a spin control or a textbox for my float property, or add data validation rules). On Wed, Sep 8, 2010 at 14:31, Alec Chapman <archapm@fas.harvard.edu> wrote:
Robert Ramey wrote:
To make progress in this area. Someone has to:
a) define a doable task b) do it c) submit it for everyone to pick at it d) go back and re-do it e) loop until it's accepted.
Completely agree. Here would be my shot at this. I know it's very manageable in scope and would still be useful in my projects, but those aren't very demanding. The question is if people think it could be more broadly useful.
Initially the library would simply provide a general method for binding data and broadly specifying layout. To actually render the controls, it would internally wrap an existing third party library and provide a way of accessing the underlying controls for low level display settings (actually, it would offer a choice of existing third party libraries to make it easy to use in existing projects).
To give a quick idea of what I'm proposing, the closest analog seems to be data binding in WPF, though I'm not terribly familiar with this so people may know of better examples. The basic idea is that any class that models something like the property map concept or iterator concept would automatically have a reasonable default view which could also be completely changed. More complex dialogs could easily be created by nesting these types inside each other.
A view basically consists of a list of subviews (eg particular controls) and a layout for them (eg vertical list). I think this is a fairly well established idea already. What would be nice is if I could define default views for arbitrary classes through template specialization. For example, suppose I had an existing employee class:
struct Employee { int id; string name; };
Then defining a default view might go something like this:
template<> View default_view(Employee& e)
{
    View v;
    v.set_layout(vertical_list);          // accepts certain predefined layouts or an arbitrary function
    v << e.id;                            // add the first subview; this calls default_view<int>
    v.add_sub_view(default_view(e.name)); // the previous line could equivalently be written like this
    return v;
}
Suppose I have an object of type Employee: Employee employee = { 999, "Joe Smith" }; I could then call show(employee) or equivalently show(default_view(employee)) to pop up a window for display/editing.
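To make the idea above concrete, here is a minimal runnable sketch. All of the names (View, default_view, the composition of subviews) are the hypothetical API from this post, not an existing library, and subviews are modelled as plain strings instead of real controls:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A View is just a flat list of labelled subviews in this sketch;
// show() would hand it to a rendering backend.
struct View {
    std::vector<std::string> subviews;  // stand-in for real controls
    void add_sub_view(const View& v) {
        subviews.insert(subviews.end(), v.subviews.begin(), v.subviews.end());
    }
};

// Built-in defaults for primitive types.
View default_view(int v)                { return View{{"int:" + std::to_string(v)}}; }
View default_view(const std::string& s) { return View{{"text:" + s}}; }

struct Employee { int id; std::string name; };

// The user-supplied overload composes the default views of the members.
View default_view(const Employee& e) {
    View v;
    v.add_sub_view(default_view(e.id));
    v.add_sub_view(default_view(e.name));
    return v;
}
```

Note how the Employee overload needs no boilerplate beyond listing the members; that is the property this thread is after.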
There are a few nice things about this approach:
1. I can define a view for my data without modifying the class definition.
2. It requires almost no boilerplate code.
3. It's easy to define views for complex types as long as views for the subtypes are defined.
4. Most importantly, IMO, a lot of very powerful things can be done with iterators.
5. If an object models something like the Boost property map concept, it automatically has a reasonable default view.
For the last point, consider the following alternative definition of an Employee class:
// declare the class
typedef property<employee_id_t, int,
        property<employee_name_t, string> > Employee;

// create and populate
Employee employee;
put(employee_id, employee, 999);
employee[employee_name] = "Joe Smith"; // alternate syntax
The nested template syntax makes it possible to automatically loop over the members to create a view with no work by the user. default_view(employee) would be automatically defined. An existing class could get the same behavior by specializing a few functions (put, get, etc.).
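The compile-time loop over members is the key trick, so here is a sketch of how it could work. The property, no_property, and tag names below are hypothetical, in the spirit of the post (this is not the real Boost.PropertyMap interface):

```cpp
#include <cassert>
#include <string>

// Terminator for the nested list.
struct no_property {};

// A node in the nested list: a tagged value plus the rest of the list.
template <class Tag, class T, class Rest = no_property>
struct property {
    T value;
    Rest rest;
};

// Recursive metafunction: count the fields, so a default view generator
// knows how many subviews to create. Real code would build controls the
// same way, one recursion step per member.
template <class P> struct field_count;
template <> struct field_count<no_property> { static const int value = 0; };
template <class Tag, class T, class Rest>
struct field_count< property<Tag, T, Rest> > {
    static const int value = 1 + field_count<Rest>::value;
};

struct employee_id_t {};
struct employee_name_t {};

typedef property<employee_id_t, int,
        property<employee_name_t, std::string> > Employee;
```

A default_view generator would do the same recursion, emitting one control per node instead of counting.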
Iterators:
The library would define a default view for any container that can provide an iterator, such as std::vector, std::list, etc. So I could easily define a Department class that has a list of employees:
typedef property<dept_code_t, int,
        property<dept_name_t, string,
        property<dept_employees, list<Employee> > > > Department;
Calling show(department) would create a dialog with a list box of employees. Double clicking on an employee could automatically pop up a more detailed dialog with no event handling code required from the user.
If I have a Company class that has a list of Departments, I could easily select among several standard views for complex iterators:
- Tabbed dialog with a page for each department
- Department combo box that controls an employee list box
- Record selectors to flip through departments, along with a search function
This is getting long, so I'll just add that I have some idea of how events could be incorporated as well. If people are interested in this approach we can discuss it more.
Alec
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Wed, Sep 8, 2010 at 2:31 PM, Alec Chapman <archapm@fas.harvard.edu> wrote:
Robert Ramey wrote:
To make progress in this area. Someone has to:
a) define a doable task
b) do it
c) submit it for everyone to pick at it
d) go back and re-do it
e) loop until it's accepted.
Completely agree. Here would be my shot at this. I know it's very manageable in scope and would still be useful in my projects, but those aren't very demanding. The question is if people think it could be more broadly useful.
Initially the library would simply provide a general method for binding data and broadly specifying layout. To actually render the controls, it would internally wrap an existing third party library and provide a way of accessing the underlying controls for low level display settings (actually, it would offer a choice of existing third party libraries to make it easy to use in existing projects).
+1
To give a quick idea of what I'm proposing, the closest analog seems to be data binding in WPF, though I'm not terribly familiar with this so people may know of better examples. The basic idea is that any class that models something like the property map concept or iterator concept would automatically have a reasonable default view which could also be completely changed. More complex dialogs could easily be created by nesting these types inside each other.
A view basically consists of a list of subviews (eg particular controls) and a layout for them (eg vertical list). I think this is a fairly well established idea already. What would be nice is if I could define default views for arbitrary classes through template specialization. For example, suppose I had an existing employee class:
struct Employee { int id; string name; };
Then defining a default view might go something like this:
template<> View default_view(Employee& e)
{
    View v;
    v.set_layout(vertical_list);          // accepts certain predefined layouts or an arbitrary function
    v << e.id;                            // add the first subview; this calls default_view<int>
    v.add_sub_view(default_view(e.name)); // the previous line could equivalently be written like this
    return v;
}
Suppose I have an object of type Employee: Employee employee = { 999, "Joe Smith" }; I could then call show(employee) or equivalently show(default_view(employee)) to pop up a window for display/editing.
The Mirror reflection library (http://bit.ly/bn7iYM) does something similar for input dialogs for constructing new instances of nested types with possibly multiple constructors. Look for the factory generator utility in the docs. Some images of GUI dialogs automatically generated by this lib can also be found here (http://bit.ly/bqNqah). Besides this factory generator I'm also working on a similar generator for viewers/manipulators of existing instances. These would create a reusable GUI object that would accept instances of a given type and allow viewing or even manipulating them. I'm planning to add this utility in the next release of the library. [snip] BR Matus

Matus Chochlik wrote:
The Mirror reflection library (http://bit.ly/bn7iYM), does something similar for input dialogs for constructing new instances of nested types with possibly multiple constructors. Look for the factory generator utility in the docs. Here (http://bit.ly/bqNqah) can also be found some images of gui dialogs automatically generated by this lib.
That looks interesting. Without having looked at the code, my main question is how closely the dialog rendering is tied to the reflection mechanism. If I'm using a class from a library that I can't modify, is there a way to make it provide the information your library needs? If so, it could be a useful starting point. Yakov Galka wrote:
Yes, I'm interested. Also please explain how the above approach will be customizable (e.g. I want to put a spin-control or a textbox for my float property and add data validation rules).
My thought was that in the property map syntax it could be passed as an additional template parameter:

typedef property<var1_t, int,            // regular textbox
        property<var2_t, int, spinner> > some_type;
In the more general method it would look like:

struct some_type { int var1, var2; };

template <> view default_view(some_type& t)
{
    view v;
    v << t.var1 << spinner(t.var2);
    return v;
}

The library would access the data through get()/put() functions. The user provides these, and put() can do validation and throw if there's a problem. I'm not sure if this is completely consistent with everything I've written so far, but I think it could be with some work. I'm a bit tied up for a couple of weeks, but after that I could try to put a demo together that would illustrate these ideas more concretely.
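The get()/put() validation idea can be sketched in a few lines. The range rule and the function names here are made up for illustration; the point is only that put() is the single choke point where validation happens:

```cpp
#include <cassert>
#include <stdexcept>

struct some_type { int var1, var2; };

// User-supplied accessors: the library reads through get() and
// writes through put(); put() validates and throws on bad input.
int get_var2(const some_type& t) { return t.var2; }

void put_var2(some_type& t, int value) {
    if (value < 0 || value > 100)  // hypothetical validation rule
        throw std::out_of_range("var2 must be in [0, 100]");
    t.var2 = value;
}
```

The spinner control would call put_var2() on each change and surface the exception as a validation error in the UI.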

On Fri, Sep 10, 2010 at 2:52 AM, Alec Chapman <archapm@fas.harvard.edu> wrote:
Matus Chochlik wrote:
The Mirror reflection library (http://bit.ly/bn7iYM), does something similar for input dialogs for constructing new instances of nested types with possibly multiple constructors. Look for the factory generator utility in the docs. Here (http://bit.ly/bqNqah) can also be found some images of gui dialogs automatically generated by this lib.
That looks interesting. Without having looked at the code, my main question is how closely is the dialog rendering tied to the reflection mechanism? If I'm using a class from a library that I can't modify, is there a way to make it provide the information your library needs? If so, it could be a useful starting point.
The factory generator is built on top of the basic compile-time reflection. It uses the meta-data describing the class constructors to create a factory for a particular type, using application-specific means of getting input data for the constructors. This means that you can create not only GUI-based factories but factories that use other data sources (a database dataset, an XML document, etc.). There are some examples showing this here: http://bit.ly/dC4fn0 (see mirror/example/factories/*.cpp).

The Lagoon run-time layer built on top of Mirror also allows creating factories dynamically by using run-time polymorphism, but this is still work-in-progress and the resulting code tends to be bloated.

The library is non-intrusive, i.e. in most cases you do not need to modify a class in order to reflect it. That is, unless you want to reflect private members that do not have public getter/setter functions; this is however (at least in my experience) usually not good practice anyway. The classes need to be manually registered with Mirror (http://bit.ly/bLjQFI explains how). Besides this I'm looking for a suitable tool for automating the registration and integrating it into the build process. [snip] BR Matus

I'm in the process of implementing a prototype to see what you all think (I already have some code, but it's not ready to be shown). It will take some weeks because I'm a little busy with other things, but I'll publish it as soon as it's ready. My prototype will be a very simple program (a currency converter). The main goal of this prototype is to see how a generic programming approach to GUI programming feels.

I can say what I've done so far. I have a gui_controller class template. This class runs the main loop and delegates some operations to a backend. The backend can be replaced. The backend I'm using now is Gtkmm under linux, just for testing purposes, because it's what I'm used to.

I have a view class. A view class shows a concrete type of data, just one type (for now). How the view shows the data is done in the following way:

template <class DataToShow>
class view {
public:
    typedef typename generic_view_type<DataToShow>::type generic_view_t;
    typedef typename concrete_view_type<generic_view_t>::type concrete_view_t;
    .....
private:
    std::unique_ptr<concrete_view_t> view_;
};

The generic_view_t is a tag type, which can be one of label, entry, button... but it's platform independent. The concrete_view_type<generic_view_t> metafunction maps this view to a real framework widget, say Qt, Gtkmm, wxWidgets, or MFC. This is a simple way to keep generic and concrete types independent. Concrete types could be anything; they needn't even be widgets, if that's required.

The controller should instantiate views when there is a need. All views can be instantiated when needed. They are held in a type-erased wrapper called any_view. That's what I have for now (more or less). My plan, for now, is to make a complete working example with the minimum requirements implemented to see how it feels. Other things to consider once this is done:
- Containers.
- Command pattern.
- Layout distribution.
- String encoding?

See you. Thanks for your time.
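The two-step mapping described above (data type to platform-independent tag, tag to backend widget) can be sketched with trait specializations. The tag types and the backend widget names below are stand-ins, not the real Gtkmm API:

```cpp
#include <cassert>
#include <string>
#include <type_traits>

// Platform-independent view tags.
struct label_tag {};
struct entry_tag {};

// Step 1: map the data type to a view tag.
template <class DataToShow> struct generic_view_type;
template <> struct generic_view_type<int>         { typedef entry_tag type; };
template <> struct generic_view_type<std::string> { typedef entry_tag type; };
template <> struct generic_view_type<const char*> { typedef label_tag type; };

// Step 2: map the tag to a concrete backend widget.
struct gtkmm_entry {};  // stand-in for a real Gtkmm entry widget
struct gtkmm_label {};  // stand-in for a real Gtkmm label widget
template <class GenericView> struct concrete_view_type;
template <> struct concrete_view_type<entry_tag> { typedef gtkmm_entry type; };
template <> struct concrete_view_type<label_tag> { typedef gtkmm_label type; };
```

Swapping the backend then means providing a different set of concrete_view_type specializations, with no change to the generic layer.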

On Fri, Sep 10, 2010 at 10:55 AM, Germán Diago <germandiago@gmail.com> wrote:
template <class DataToShow>
class view {
public:
    typedef typename generic_view_type<DataToShow>::type generic_view_t;
    typedef typename concrete_view_type<generic_view_t>::type concrete_view_t;
    .....
private:
    std::unique_ptr<concrete_view_t> view_;
};
generic_view_type<int>::type might need to be different in various cases. Would you do:

    generic_view_type<some_wrapper<int>>::type

or

    generic_view_type<int, some_tag>::type
    generic_view_type<int, ranged<0, 100, 10, 50> >::type; // 0 to 100, steps of 10, default of 50

?

Tony
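Tony's third alternative can be sketched directly: carry the constraints in a policy tag with non-type template parameters, so a view generator can read the limits back off the type. The name ranged and its members are hypothetical:

```cpp
#include <cassert>

// A policy tag carrying the constraints as compile-time constants.
// (min/max are spelled out to avoid clashing with common macros.)
template <int Min, int Max, int Step, int Default>
struct ranged {
    static const int min_value     = Min;
    static const int max_value     = Max;
    static const int step          = Step;
    static const int default_value = Default;
};

// A view generator could then configure, say, a spinner from the tag:
typedef ranged<0, 100, 10, 50> percent_spinner;  // 0 to 100, steps of 10, default 50
```

generic_view_type<int, percent_spinner> could then select a spinner control and initialize it from these constants.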

On Wed, Sep 8, 2010 at 8:31 AM, Alec Chapman <archapm@fas.harvard.edu> wrote:
struct Employee { int id; string name; };
Then defining a default view might go something like this:
template<> View default_view(Employee& e)
{
    View v;
    v.set_layout(vertical_list);          // accepts certain predefined layouts or an arbitrary function
    v << e.id;                            // add the first subview; this calls default_view<int>
    v.add_sub_view(default_view(e.name)); // the previous line could equivalently be written like this
    return v;
}
// declare the class
typedef property<employee_id_t, int,
        property<employee_name_t, string> > Employee;

// create and populate
Employee employee;
put(employee_id, employee, 999);
employee[employee_name] = "Joe Smith"; // alternate syntax
Maybe use Boost.Fusion? It can create parallel descriptions of structs. But note also that you need more Model information - both per-field info (min/max/default on some ints, etc.) and relationships between types (this is invalid when that is 0, etc.). Tony
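The "parallel description" idea Boost.Fusion provides can be sketched without the library: expose a struct's members as a tuple of references, which generic code can then walk. The as_tuple helper below is hypothetical, sketched with std::tie so the example stays self-contained rather than using the real Fusion adaptation macros:

```cpp
#include <cassert>
#include <string>
#include <tuple>

struct Employee { int id; std::string name; };

// A parallel description of the struct: the members as a tuple of
// references, so generic view code can iterate over them.
std::tuple<int&, std::string&> as_tuple(Employee& e) {
    return std::tie(e.id, e.name);
}
```

Writing through the tuple writes through to the struct, which is exactly what a data-bound control needs.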

On Wed, Sep 8, 2010 at 06:28, Artyom <artyomtnk@yahoo.com> wrote:
1. Use "native" widgets or not

Use none! That's exactly my point! Designing a low level library is wrong. Design the high level, which can be bound to "native" widgets or even be "windowless", drawn on some canvas. ****TAKE A LOOK AT http://notus.sourceforge.net/ !!!****

2. What strings should be used? std::string, std::wstring, custom string like Qt's QString or GTKmm's ustring?

As a windows programmer I say: use UTF-8 with std::string. See Pavel's answer here: http://stackoverflow.com/questions/1049947/should-utf-16-be-considered-harmf.... Why UTF-8? In non-GUI code std::string is more common. A single-byte encoding is also the default for std::exception::what(), for example. Anyway, if there is no consensus on this topic you can still support both through a configuration (typedef std::string/std::wstring tstring;).

3. What about event loop:

This is the hard part.

4. What about OS support. What GUI systems are you going to work with:

Any GUI that the user of the library wants. The library will supply bindings to the common controls on common GUIs.

5. What 3D rendering would you use?

Any rendering that the user of the library wants. 3D rendering (unlike basic 2D rendering) shouldn't be considered at all. It should be possible to use 3rd party portable libraries with this GUI.

6. What about licensing? Can we use LGPL libraries? GPL libraries? or only BSD/MIT like ones.

You don't use other libraries for anything except the bindings for some platform. If you use the native API that's not a problem.

Yakov

On Tue, Sep 7, 2010 at 11:55 PM, Yakov Galka <ybungalobill@gmail.com> wrote:
On Wed, Sep 8, 2010 at 06:28, Artyom <artyomtnk@yahoo.com> wrote:
2. What strings should be used? std::string, std::wstring, custom string like Qt's QString or GTKmm's ustring?
As a windows programmer I say: use UTF-8 with std::string. See Pavel's answer here: http://stackoverflow.com/questions/1049947/should-utf-16-be-considered-harmf.... Why UTF-8? In non-GUI code std::string is more common. A single-byte encoding is also the default for std::exception::what(), for example. Anyway, if there is no consensus on this topic you can still support both through a configuration (typedef std::string/std::wstring tstring;).
UTF-8 is variable-length encoded. Some people find that inconvenient, because they need to search for substrings instead of simple code units.

UTF-16 is variable-length encoded. Some people like to treat it as fixed-length, and they are doing it wrong. The Windows API was originally built with UCS-2 in mind -- it only later got UTF-16 support added. I often wonder if they would have gone that route had they known UCS-2 would soon prove insufficient.

UTF-32 is fixed-length, but it wastes a lot of space for most cultures.

If you are processing Unicode correctly, then you will most likely need to deal with grapheme clusters (the visible individual characters that get rendered to your screen). Grapheme clusters can be made of multiple code points, and there are even different combinations of code points that create the same grapheme cluster. So for any real Unicode processing, the encoding does not matter because it is always effectively variable-length and often not usable with simple Unicode-ignorant functions like strstr().

I see two use cases with strings:

a) You are using your strings so trivially that it doesn't matter whether they are Unicode or anything else. You're basically just copying sequences of bytes around that you got from somewhere else, and probably eventually passing them to a renderer which handles (b).

b) You are processing your strings in a meaningful way, where it matters that they are Unicode, and it will be unfortunately complex no matter what you do.

So I'd make the argument that it _does not matter_ what encoding is used. Make an arbitrary choice!

-- Cory Nelson http://int64.org
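The variable-length point is easy to see in code. This sketch counts UTF-8 code points by skipping continuation bytes (those of the form 10xxxxxx); it does no validation and, as the message above notes, code points are still not grapheme clusters:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Count UTF-8 code points: every byte that is NOT a continuation byte
// (top bits 10) starts a new code point.
std::size_t utf8_code_points(const std::string& s) {
    std::size_t n = 0;
    for (std::string::size_type i = 0; i < s.size(); ++i) {
        unsigned char c = static_cast<unsigned char>(s[i]);
        if ((c & 0xC0) != 0x80)  // not a continuation byte
            ++n;
    }
    return n;
}
```

So s.size() (code units) and utf8_code_points(s) (code points) differ as soon as any non-ASCII character appears, which is why treating any Unicode encoding as fixed-length is a trap.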

On 08/09/10 10:31, Cory Nelson wrote:
I see two use cases with strings: a) You are using your strings so trivially that it doesn't matter that they are Unicode or anything else. You're basically just copying sequences of bytes around that you got from somewhere else, and probably eventually passing them to a renderer which handles (b). b) You are processing your strings in a meaningful way, where it matters that they are Unicode, and it will be unfortunately complex no matter what you do.
In practice, (a) on UTF-8 data that has been normalized with NFC works well enough, so it's understandable that people don't want to go through (b). Caring about combining character sequences, grapheme clusters, or even words or anything else (referred to below as 'whatever') isn't much more complex, though. The only thing that could break is substring search, and there are typically two solutions:
- deal with ranges of whatever instead of ranges of code units
- deal with ranges of code units, then ignore matches that don't lie on the right whatever boundaries.

The two approaches work, but have different performance characteristics. I haven't benchmarked, but I would expect the second one to be significantly faster on real data, even if the first one seems like the most natural solution. My unicode library provides both approaches.
So I'd make the argument that it _does not matter_ what encoding is used. Make an arbitrary choice!
One slight advantage of UTF-16/UTF-32 is that you could possibly use wide string literals to portably input data, if you've got your compiler set up correctly, while you cannot do that with regular string literals unless your locale is also utf-8 (which isn't possible on windows). I guess the best choice to make people happy is to allow any range for input and deduce the encoding according to the value_type, and return utf-8 ranges for output. Input/output here refers to what you give the API and what the API gives back to you.
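The closing suggestion (deduce the encoding from the range's value_type) can be sketched with a small trait. The encoding_kind enum and encoding_of trait are hypothetical names for illustration:

```cpp
#include <cassert>
#include <string>

enum encoding_kind { enc_utf8, enc_utf16, enc_utf32 };

// Map the character type of a range to the encoding it implies.
template <class Char> struct encoding_of;
template <> struct encoding_of<char>     { static const encoding_kind value = enc_utf8; };
template <> struct encoding_of<char16_t> { static const encoding_kind value = enc_utf16; };
template <> struct encoding_of<char32_t> { static const encoding_kind value = enc_utf32; };

// Deduce the encoding from any range type that exposes value_type.
template <class Range>
encoding_kind deduce_encoding(const Range&) {
    return encoding_of<typename Range::value_type>::value;
}
```

An API built this way could accept std::string, std::u16string, or any user container, convert internally, and hand back UTF-8 ranges on output as proposed.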

On 08/09/10 05:28, Artyom wrote:
Hello all,
Few points why would GUI library be impossible to bring into Boost (IMHO).
The major problem is religious wars.
The original post of this thread looks like a request for comments and ideas, so it's natural that people say what they want and what they think it should be like. Whether the original poster (Gwenio) wants to go forward with all the ideas is up to him. If he comes up with an implementation and submits it for review, it should be accepted regardless of religious wars, as long as it is consistent with its own choices.
Let's start:
1. Use "native" widgets or not
That's an implementation detail, so it doesn't really matter. I also think it's a bad argument to begin with, based on the perceived look & feel of certain toolkits. Qt and recent Adobe products don't use native widgets on Windows, yet most people don't even notice it.
2. What strings should be used? std::string, std::wstring, custom string like Qt's QString or GTKmm's ustring?
None is the right answer. Strings are just ranges of characters, or rather of data that encodes such characters. Generic programming techniques allow integration with any type that fulfills a certain concept.
3. What about event loop:
- Let's use ASIO as the central event loop! But it's template bloated.
- Let's use our own custom event loop. But then how do I integrate with the ASIO event loop to support asynchronous networking?
What's template-bloated about Asio? You certainly have more experience with it than I do, since you used it in CppCMS and then rewrote your own, but it feels quite lightweight to me.
4. What about OS support. What GUI systems are you going to work with: 5. What 3D rendering would you use?
I think it's a given it's best to focus on interface and use other existing tools for the implementation, so as to avoid re-inventing the wheel.
6. What about licensing? Can we use LGPL libraries? GPL libraries? or only BSD/MIT like ones.
According to my understanding of the Boost community, LGPL is ok as long as it's just for a particular binding.
GUI framework is **VERY** complex project and it is **VERY** hard to do it right (see Enlightenment as great example how cool stuff may never become useful)
More than that, it is not clear what "right" is...
It would never happen in Boost, as it tries to please them all. So you can discuss look and feel and other issues for hours.
Someone could integrate Adam & Eve into Boost and get a very good starting point; making Eve a DSEL would make it more trendy and Boost-like. There have been such efforts ongoing already, albeit I'm not sure what their status is. I think Adobe would have something to gain from this as well, so it sounds like a realistic and feasible approach to me. I'm afraid I don't share the same thoughts about Gwenio's.

On Thu, Sep 9, 2010 at 11:29 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Someone could integrate Adam & Eve into Boost and get a very good starting point; making Eve a DSEL would make it more trendy and Boost-like. There have been such efforts ongoing already, albeit I'm not sure what their status is.
I think Adobe would have something to gain from this as well, so it sounds like a realistic and feasible approach to me.
Sean always said that he was fine with someone else putting it into boost, but he didn't have the time/resources to commit to it. I know there is a DSEL inside Adobe, would be nice to tease it out of them. Alternatively start from scratch with Proto - which would have made it easier for us, wish it had existed then. Tony

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Gottlob Frege
Sent: Saturday, September 11, 2010 10:10 PM
To: boost@lists.boost.org
Subject: Re: [boost] Thoughts for a GUI (Primitives) Library
On Thu, Sep 9, 2010 at 11:29 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Someone could integrate Adam & Eve into Boost and get a very good starting point; making Eve a DSEL would make it more trendy and Boost-like. There have been such efforts ongoing already, albeit I'm not sure what their status is.
I think Adobe would have something to gain from this as well, so it sounds like a realistic and feasible approach to me.
Sean always said that he was fine with someone else putting it into boost, but he didn't have the time/resources to commit to it.
I know there is a DSEL inside Adobe, would be nice to tease it out of them. Alternatively start from scratch with Proto - which would have made it easier for us, wish it had existed then.
You might consider contacting Jaakko Jarvi, as his team is still carrying on the "academic" side of the work on Adam. I am not sure how interested in Eve they are; although, in a practical sense, they must be working on this. I know that Jaakko, Sean, Mat, etc. made changes to the constraint solver that may not have made it into the Adobe tree.
Tony

On Mon, Sep 13, 2010 at 6:05 PM, Smith, Jacob N <jacob.n.smith@intel.com> wrote:
You might consider contacting Jaakko Jarvi, as his team is still carrying on the "academic" side of the work on Adam. I am not sure how interested in Eve they are; although, in a practical sense, they must be working on this. I know that Jaakko, Sean, Mat, etc. made changes to the constraint solver that may not have made it into the Adobe tree.
And Sean is back at Adobe - I hear there may be some ASL updates...

On Sep 7, 2010, at 5:40 AM, Alexander Lamaison wrote:
On Tue, 7 Sep 2010 09:47:03 +0200, Yakov Galka wrote:
On Tue, Sep 7, 2010 at 02:16, Gwenio <urulokiurae@gmail.com> wrote:
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of higher level parts to be limited to what would be required to implement them.
Then *please*, explain how your library will differ from e.g. gtkmm?
link: http://www.gtkmm.org/en/
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'.
The visual appearance of programs has very little to do with library design. Qt programs look great IMO, but they don't use any native widgets even on Windows. They just have a very faithful native l&f. But you can style them differently even on Windows. So the question is perfectly valid. IMO it doesn't make any sense to design low-level components if you don't have a high-level design. Sebastian

Sebastian Redl wrote:
On Sep 7, 2010, at 5:40 AM, Alexander Lamaison wrote:
On Tue, 7 Sep 2010 09:47:03 +0200, Yakov Galka wrote:
On Tue, Sep 7, 2010 at 02:16, Gwenio <urulokiurae@gmail.com> wrote:
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of higher level parts to be limited to what would be required to implement them.
Then *please*, explain how your library will differ from e.g. gtkmm?
link: http://www.gtkmm.org/en/
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'.
The visual appearance of programs has very little to do with library design. Qt programs look great IMO, but they don't use any native widgets even on Windows. They just have a very faithful native l&f. But you can style them differently even on Windows.
So the question is perfectly valid. IMO it doesn't make any sense to design low-level components if you don't have a high-level design.
+1 Design top down, implement bottom up. Right now instead of talking about design of low level components you should instead be talking about design goals and requirements. We should discuss the difficult subjects. Regards, Luke

On Tue, 7 Sep 2010 09:51:10 -0700, Sebastian Redl wrote:
On Sep 7, 2010, at 5:40 AM, Alexander Lamaison wrote:
On Tue, 7 Sep 2010 09:47:03 +0200, Yakov Galka wrote:
On Tue, Sep 7, 2010 at 02:16, Gwenio <urulokiurae@gmail.com> wrote:
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of higher level parts to be limited to what would be required to implement them.
Then *please*, explain how your library will differ from e.g. gtkmm?
link: http://www.gtkmm.org/en/
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'.
The visual appearance of programs has very little to do with library design. Qt programs look great IMO, but they don't use any native widgets even on Windows. They just have a very faithful native l&f. But you can style them differently even on Windows.
On Linux I would say that GTK and Qt *are* the native widgets for their respective desktop environments, Gnome and KDE. Widgets are native to a desktop environment rather than an OS, and it's only because Windows has just one environment that people talk about native Windows widgets.
So the question is perfectly valid. IMO it doesn't make any sense to design low-level components if you don't have a high-level design.
The problem is that these are two completely separate things. Any high-level framework will be built on top of low-level widgets, so at some point those will need wrapping in a portable manner; that job has to be done anyway. Exposing these as a boost library in its own right means we don't force users to use the high-level concepts if they prefer not to, or prefer to roll their own. Alex -- Easy SFTP for Windows Explorer (http://www.swish-sftp.org)

On Tue, Sep 7, 2010 at 12:49 PM, Alexander Lamaison <awl03@doc.ic.ac.uk> wrote:
On Tue, 7 Sep 2010 09:51:10 -0700, Sebastian Redl wrote:
On Sep 7, 2010, at 5:40 AM, Alexander Lamaison wrote:
On Tue, 7 Sep 2010 09:47:03 +0200, Yakov Galka wrote:
On Tue, Sep 7, 2010 at 02:16, Gwenio <urulokiurae@gmail.com> wrote:
I repeat that I am focusing on the low-level components because I do not want the discussion to get stuck on what appears to be a very difficult subject. Therefore I would like discussion of higher level parts to be limited to what would be required to implement them.
Then *please*, explain how your library will differ from e.g. gtkmm?
link: http://www.gtkmm.org/en/
I hope that, for starters, the end-result would look nothing like that produced by gtkmm. GTK-based GUIs on Windows just look 'wrong'.
The visual appearance of programs has very little to do with library design. Qt programs look great IMO, but they don't use any native widgets even on Windows. They just have a very faithful native l&f. But you can style them differently even on Windows.
On Linux I would say that GTK and Qt *are* the native widgets for their respective desktop environments, Gnome and KDE. Widgets are native to a desktop environment rather than an OS, and it's only because Windows has just one environment that people talk about native Windows widgets.
So the question is perfectly valid. IMO it doesn't make any sense to design low-level components if you don't have a high-level design.
The problem is that these are two completely separate things. Any high-level framework will be built on top of low-level widgets, so at some point those will need wrapping in a portable manner; that job has to be done anyway. Exposing these as a boost library in its own right means we don't force users to use the high-level concepts if they prefer not to, or prefer to roll their own.
Two things:

Has anyone looked at the Juce GUI framework (http://www.rawmaterialsoftware.com/juce.php)? Its license is a bit nasty (GPL), but its design is fascinating. Unfortunately it does not use native widgets, but it has a few nice features: it is *very* fast, everything is vector drawn, it is multi-platform with no code changes needed, and it is easy to port to other things (I made a test port of it to work as the menu system in Ogre3D, a 3d rendering engine; it worked through OGL and D3D perfectly). A few downsides: it still uses the hierarchy type setup, though it does use a signal/slot setup everywhere, and it has quite a few other niceties as well. I do enjoy using it; it is very simple to set up, can be integrated into other GUI frameworks with ease for a hybrid setup, and can be added as a quick drop-in to any program. I would not recommend using its style, as the hierarchy style is quite limited (I like one of the aforementioned ideas of templating personality, actions, etc.), but I love that it can be 'dropped' into a program and stuffed into other GUI programs transparently (its vector drawing is amazingly useful and easy to perform; you could put a Juce interactive chart into an MFC program, for example). Plus the ability to write it into other systems, such as a 3d rendering context, was an absolute boon!

I have been tempted to 'try' to port wxWidgets into a 3d context as well (Ogre3D), and it should be possible (wxWidgets likes more control, though...). wxWidgets has the nice property that it can use native widgets, or everything can be self drawn (Universal mode, where you can theme anything how you like); unfortunately you cannot use both at the same time, which is an obvious design flaw. I do like the Adobe Adam&Eve setup, but I have not tried using it yet, so I am unsure how it actually 'feels'.
But for a GUI library that I would use: * If it is big and requires things to be built around it, like wxWidgets or Qt, it would be used sparingly. * If it could be 'dropped' into existing non-GUI apps, it would be used more often. * If it could be 'dropped' into existing GUI apps using a different framework, it would be used even more often. * It should be able to use native widgets but should not force them; you should be able to choose between native and themed. Preferably you could call something like bool SetTheme(const std::string&): if passed a keyword like "__native__" it should use native widgets, and it should return false (so the caller can choose an actual theme, or fall back to a default) in case the current platform has no native widget support. * If it could easily be integrated into a 3d rendering context (like Ogre3D, and I could do the porting work as I know how), in addition to the above, I would use it exclusively.
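The SetTheme idea above could be sketched as follows; everything here (the class name, the stored state, the theme registry) is hypothetical and exists only to illustrate the "__native__" keyword convention:

```cpp
#include <set>
#include <string>
#include <utility>

// Hypothetical sketch: "__native__" selects native widgets when the
// platform supports them, any other string is looked up as a drawn
// theme, and false signals the caller to fall back to something else.
class theme_manager {
public:
    theme_manager(bool native_support, std::set<std::string> themes)
        : native_support_(native_support), themes_(std::move(themes)) {}

    bool SetTheme(const std::string& name) {
        if (name == "__native__") {
            if (!native_support_) return false;  // caller picks a drawn theme instead
            current_ = name;
            return true;
        }
        if (themes_.count(name) == 0) return false;  // unknown theme
        current_ = name;
        return true;
    }

    const std::string& current() const { return current_; }

private:
    bool native_support_;
    std::set<std::string> themes_;
    std::string current_;
};
```

The point of returning bool rather than throwing is that a missing native theme is an expected condition the caller plans around, not an error.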

On 09/07/2010 03:30 PM, OvermindDL1 wrote:
On Tue, Sep 7, 2010 at 12:49 PM, Alexander Lamaison<awl03@doc.ic.ac.uk> wrote:
On Tue, 7 Sep 2010 09:51:10 -0700, Sebastian Redl wrote:
So the question is perfectly valid. IMO it doesn't make any sense to design low-level components if you don't have a high-level design.
Two things: Has anyone looked at the Juce GUI framework (http://www.rawmaterialsoftware.com/juce.php)? Its license is a bit nasty (GPL), but its design is fascinating.
On 09/07/2010 01:29 PM, Simonson, Lucanus J wrote:
Design top down, implement bottom up. Right now instead of talking about design of low level components you should instead be talking about design goals and requirements. We should discuss the difficult subjects.
I think what Lucanus is saying is really central, particularly in this topic and discussion. Just look at the arguments made above: I think it is clear that different participants in this discussion pursue quite different goals. Unfortunately, these aren't compatible with each other, so it's hard to reach an agreement. One party argues for a standard C++ API that allows integration with existing GUIs (libraries, frameworks, toolkits), uses native toolkits (or at least native styles), etc. The other wants such a Boost project to endorse new design ideas (even new GUI approaches) that are highly experimental. Neither party is wrong, of course, but unless you figure out which set of requirements a "boost.gui" project should support, you are very unlikely to get anywhere. FWIW, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On Tue, Sep 7, 2010 at 3:47 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
Then *please*, explain how your library will differ from e.g. gtkmm?
Design-wise, not much, I guess. However, I see getting a back end for GUIs into the C++ standard as important, and this would be a step toward that. The reason I am not pushing to complete a full library/framework is that I believe no matter how state-of-the-art the design is now, it would eventually become out of date, as the standard does not update quickly. Therefore, I aim for a design that would be acceptable for the Standard Library, not the best design for any particular GUI.

On Tue, Aug 31, 2010 at 4:16 PM, Gwenio <urulokiurae@gmail.com> wrote:
As it stands, neither Boost nor C++ has any support for GUI applications. Therefore I am suggesting a library that provides the bare minimum required to create a GUI application.
My thoughts on what this library would contain are as follows:
Here is a paper that might give you ideas if you are interested in making a revolutionary (vs. evolutionary) C++ GUI library [note, this may take some time to digest]. http://haskell.cs.yale.edu/frp/genuinely-functional-guis.pdf Good luck! David -- David Sankel Sankel Software www.sankelsoftware.com 585 617 4748 (Office)

Since there is a lot of discussion about the design of the high-level parts of the library, I propose the discussion be split in two. The low-level design only needs to know what the requirements of the other parts are, and the higher parts only need to know what is possible. The reason they should be separated is that the design of the high-level components is a hot topic that is not going to be easily resolved.

On Wed, Sep 8, 2010 at 17:34, Gwenio <urulokiurae@gmail.com> wrote:
Since there is a lot of discussion about the design of the high-level parts of the library, I propose the discussion be split in two. The low-level design only needs to know what the requirements of the other parts are, and the higher parts only need to know what is possible. The reason they should be separated is that the design of the high-level components is a hot topic that is not going to be easily resolved.
Low-level parts aren't going to be solved easily either. An even bigger problem is that we don't have a clear separation of what is high-level and what is low-level. In any case, they are related, so you should keep each level in mind when you design the other. On Wed, Sep 8, 2010 at 14:32, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 02/09/10 22:17, Gwenio wrote:
Now for a start for discussion on event handling.
*Event Loop:*
<snip />
*Event Handling:*
<snip />
All of this is already dealt with by Boost.Asio, which provides a powerful unified way of dealing with asynchronous events.
1) Boost.Asio doesn't support the native OS message queue (correct me if I'm wrong). 2) Event handling in a GUI is not necessarily asynchronous. It's more about how to design the message passing from one component to another in an extensible way, with some additional requirements. Although I agree that it would be better if it were compatible with Asio. Regards, Yakov

On Wed, Sep 8, 2010 at 11:59 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
On Wed, Sep 8, 2010 at 17:34, Gwenio <urulokiurae@gmail.com> wrote:
Since there is a lot of discussion about the design of the high-level parts of the library, I propose the discussion be split in two. The low-level design only needs to know what the requirements of the other parts are, and the higher parts only need to know what is possible. The reason they should be separated is that the design of the high-level components is a hot topic that is not going to be easily resolved.
Low-level parts aren't going to be solved easily either. An even bigger problem is that we don't have a clear separation of what is high-level and what is low-level. In any case, they are related, so you should keep each level in mind when you design the other.
On Wed, Sep 8, 2010 at 14:32, Mathias Gaunard <mathias.gaunard@ens-lyon.org>wrote:
On 02/09/10 22:17, Gwenio wrote:
Now for a start for discussion on event handling.
*Event Loop:*
<snip />
*Event Handling:*
<snip />
All of this is already dealt with by Boost.Asio, which provides a powerful unified way of dealing with asynchronous events.
1) Boost.Asio doesn't support native OS message queue (correct me if I'm wrong). 2) Event handling in GUI is not necessarily asynchronous. It's more on how to design the message passing from one component to the other in an extensible way with some additional requirements.
Although I agree that it would be better if it was compatible with Asio.
Low level: the part that allows interaction with the system and cannot be replaced without knowing what the system is. The event handling components should translate the system messages, determine which control they are for, and pass them on to wherever the application or library has indicated they should be sent.

Fine. Then clarify some more details please. On Wed, Sep 8, 2010 at 18:17, Gwenio <urulokiurae@gmail.com> wrote:
Low level: the part that allows interaction with the system and cannot be replaced without knowing what the system is.
According to your definition the low-level part should implement the controls with system independent interface. Consider a grid control (aka list-view with details style). What interface should it have? Should it store the data (with list_view::set_item_data()) or ask the owner to provide the data when needed by a callback?
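The two interface styles Yakov asks about can be sketched side by side; all names here are hypothetical, chosen only to contrast "the control stores the data" with "the control asks the owner through a callback":

```cpp
#include <string>
#include <vector>

// Style (a): the control owns the item data, set_item_data() style.
struct owning_list_view {
    void set_item_data(std::size_t row, std::string text) {
        if (row >= rows_.size()) rows_.resize(row + 1);
        rows_[row] = std::move(text);
    }
    const std::string& item(std::size_t row) const { return rows_.at(row); }
private:
    std::vector<std::string> rows_;
};

// Style (b): the control pulls data from the owner on demand, so large
// data sets never need to be copied into the control.
struct item_provider {
    virtual ~item_provider() = default;
    virtual std::string item(std::size_t row) const = 0;
};

struct virtual_list_view {
    explicit virtual_list_view(const item_provider& p) : provider_(p) {}
    std::string item(std::size_t row) const { return provider_.item(row); }
private:
    const item_provider& provider_;
};
```

Style (a) is simpler to use for small lists; style (b) scales to data the application already owns, which is why many toolkits offer a "virtual" list mode.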
The event handling components should translate the system messages, determine which control they are for, and pass them on to wherever the application or library has indicated they should be sent.
Recall message capturing/bubbling phases discussed before. Should the low-level event handling component implement them or perhaps it'll be broken into low-level component that merely captures system messages and a high-level component that actually transfers them to their destination (implementing capturing/bubbling and non-system events on the way)?

The low-level part should avoid implementing a list control. Event handling would translate system messages, and then pass them to a specified handler to be processed. I have spent some time thinking over how the library should be broken up.
Primitive:
- Is not a GUI in and of itself.
- Provides a system-independent way to build a GUI.
- Should be designed to avoid limiting the types of GUI that can be created and the systems it can be ported to (with that order of priority).
- Should be usable with any library (both system and third party) that runs on the same system it is currently being built for (e.g. it should be possible to render with GDI, OpenGL, and DirectX when built for Windows). The code wiring the library to others will not be provided; only the means to do so are included.
Native:
- Supports the creation of GUIs that fit with the system.
- Could possibly provide more than one way to create "native" GUIs (e.g. a set of native controls and a set of functions to draw in the native style and get color/metric information).
Component:
- Defines common controls, and ways to extend and customize them for the application.
Primitives would be independent of the other two, and if it is designed properly then it will not affect the design of them. The reason primitive and native are listed separately when they both deal with the system is in the interest of keeping the scope of the project doable. Ideally at least one mechanism provided by the native unit would be able to integrate with the component library. I have read some discussion about having a way to describe the needs of your application programmatically, and then have a GUI front end created at runtime. Such a thing would be separate from the above categories if it were created; they would merely provide one possible back end. On Wed, Sep 8, 2010 at 2:33 PM, Yakov Galka <ybungalobill@gmail.com> wrote:
Fine.
Then clarify some more details please.
On Wed, Sep 8, 2010 at 18:17, Gwenio <urulokiurae@gmail.com> wrote:
Low level: the part that allows interaction with the system and cannot be replaced without knowing what the system is.
According to your definition the low-level part should implement the controls with system independent interface. Consider a grid control (aka list-view with details style). What interface should it have? Should it store the data (with list_view::set_item_data()) or ask the owner to provide the data when needed by a callback?
The event handling components should translate the system messages, determine which control they are for, and pass them on to wherever the application or library has indicated they should be sent.
Recall message capturing/bubbling phases discussed before. Should the low-level event handling component implement them or perhaps it'll be broken into low-level component that merely captures system messages and a high-level component that actually transfers them to their destination (implementing capturing/bubbling and non-system events on the way)? _______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
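The message flow proposed in this exchange (system -> standard or custom translator -> event handler -> application) could be sketched roughly as follows; the types and names are assumptions made for illustration, with raw_message standing in for a platform MSG or XEvent:

```cpp
#include <functional>
#include <vector>

struct raw_message { int id; long param; };  // stand-in for MSG / XEvent
struct event { int type; long data; };       // translated, system-independent

class message_pump {
public:
    // A translator returns true and fills 'out' if it recognizes the message;
    // custom translators handle messages the standard ones don't know.
    using translator = std::function<bool(const raw_message&, event& out)>;

    void add_translator(translator t) { translators_.push_back(std::move(t)); }
    void set_handler(std::function<void(const event&)> h) { handler_ = std::move(h); }

    // Returns false for messages no translator recognizes.
    bool process(const raw_message& m) const {
        event e{};
        for (const auto& t : translators_)
            if (t(m, e)) { handler_(e); return true; }
        return false;
    }

private:
    std::vector<translator> translators_;
    std::function<void(const event&)> handler_;
};
```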

On 08/09/10 16:59, Yakov Galka wrote:
1) Boost.Asio doesn't support native OS message queue (correct me if I'm wrong). 2) Event handling in GUI is not necessarily asynchronous. It's more on how to design the message passing from one component to the other in an extensible way with some additional requirements.
Boost.Asio has a work queue where you can post arbitrary work.

On Thu, Sep 9, 2010 at 18:02, Mathias Gaunard <mathias.gaunard@ens-lyon.org>wrote:
On 08/09/10 16:59, Yakov Galka wrote:
1) Boost.Asio doesn't support native OS message queue (correct me if I'm
wrong). 2) Event handling in GUI is not necessarily asynchronous. It's more on how to design the message passing from one component to the other in an extensible way with some additional requirements.
Boost.Asio has a work queue where you can post arbitrary work.
At least on Windows you must *wait* on the queue in the same thread that created the window. Also, calling the window procedure from a thread different from the one the window belongs to is asking for trouble. It *is* possible to extend Boost.Asio to wait simultaneously on the thread's message queue and Asio's work queue, but that means introducing modifications to Asio. Don't get me wrong, I'm not objecting. On Thu, Sep 9, 2010 at 16:46, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
[...]
One slight advantage of UTF-16/UTF-32 is that you could possibly use wide
string literals to portably input data, if you've got your compiler set up correctly, while you cannot do that with regular string literals unless your locale is also utf-8 (which isn't possible on windows).
I guess the best choice to make people happy is to allow any range for input and deduce the encoding according to the value_type, and return utf-8 ranges for output. Input/output here refers to what you give the API and what the API gives back to you.
Sorry, but I can't understand you. If you talk about 'input' done through the API of the GUI library then it doesn't matter what encoding you choose. The library will do the conversion to and from the one that's accepted by the OS (UTF-16 on windows, etc...). If you're talking about some other 'input' then it won't be portable anyway. On Thu, Sep 9, 2010 at 17:29, Mathias Gaunard <mathias.gaunard@ens-lyon.org>wrote:
2. What strings should be used? std::string, std::wstring, custom string
like Qt's QString or GTKmm's ustring?
None is the right answer. Strings are just ranges of characters, or rather data that encodes such characters.
Not quite right. Strings are ranges from an algorithmic point of view. They are containers for all other uses. I prefer to write: string caption = window.get_caption(); rather than: string caption; window.get_caption(back_inserter(caption)); On Thu, Sep 9, 2010 at 16:25, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 08/09/10 07:55, Yakov Galka wrote:
3. What about event loop:
This is the hard part.
What's hard about it? We already have a good event loop in Boost.Asio.
The integration into Boost.Asio for example. But it's not really hard. I just thought about the higher level when I wrote this.

On 09/09/10 17:48, Yakov Galka wrote:
Sorry, but I can't understand you.
I'm talking about template<typename Range> void set_caption(const Range& range); then, depending on typename range_value<const Range>::type, you deduce whether it's UTF-8, UTF-16, or UTF-32.
Not quite right. Strings are ranges from algorithmic point of view. They are containers for all other uses. I prefer to write: string caption = window.get_caption(); rather than: string caption; window.get_caption(back_inserter(caption));
window.get_caption() doesn't need to specify what its return type is, other than it being a range.
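The value_type-based deduction described here could look roughly like this; the enum and function names are hypothetical, and a nested value_type stands in for boost::range_value to keep the sketch self-contained:

```cpp
#include <string>

// Deduce the UTF encoding from the width of the range's character type:
// 1-byte units are taken as UTF-8, 2-byte as UTF-16, wider as UTF-32.
enum class encoding { utf8, utf16, utf32 };

template<class Range>
encoding deduce_encoding(const Range&) {
    switch (sizeof(typename Range::value_type)) {
        case 1:  return encoding::utf8;
        case 2:  return encoding::utf16;
        default: return encoding::utf32;
    }
}
```

Note this deliberately ignores wchar_t's platform-dependent width, one of the complications the thread touches on: a real implementation would need an explicit policy for wide strings.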

I have been thinking about the design for event handlers, and have come up with this prototype:

class control_event_handler {
public:
    enum events {
        // contains ids for events that can be stored with the "set" function
    };
    template<class Func>
    control_event_handler(Func f) { f(*this); } // A function (object, pointer, or lambda) is passed to set the handlers.
    ~control_event_handler() {}
    template<class Func, events evt>
    void set(Func f); // Sets how to handle a specific type of event
    // Functions calling handlers go here
private:
    // How the function data is stored goes here
};

A reference to an event handler object would be passed on the creation of each window or control. The intent is something like the vtable used for finding the function to call for virtual classes, only providing more flexibility in regard to reusing a function for multiple handlers. Additional capabilities would be added for the release version, such as copy function(s) (something like those for locales would be good) to make creating similar handlers easier. I still have not determined how to store the data for the event handlers, but it will likely be a series of function containers from one of the Boost libraries. I am not familiar enough with the function object libraries of Boost to really know which would be the best to use. Anyone care to provide an opinion about which would be best to use?
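One way the prototype could be fleshed out, with std::function standing in for whichever Boost function container is eventually chosen. The storage, the fire() helper, and moving the event id from a template parameter to a runtime argument are all assumptions made for the sake of a compilable sketch:

```cpp
#include <functional>
#include <map>

class control_event_handler {
public:
    enum events { created, destroyed, resized, moved };

    // A function object is passed in and given the chance to set handlers,
    // so a statically constructed handler can be fully configured in one go.
    template<class Func>
    explicit control_event_handler(Func f) { f(*this); }

    template<class Func>
    void set(events evt, Func f) { handlers_[evt] = f; }

    // Returns true if a handler was installed for the event.
    bool fire(events evt) const {
        auto it = handlers_.find(evt);
        if (it == handlers_.end()) return false;
        it->second();
        return true;
    }

private:
    std::map<events, std::function<void()>> handlers_;
};
```

A real version would pass the target context and message data into each handler, as described later in the thread.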

On 9/25/2010 3:04 PM, Gwenio wrote:
I have been thinking about the design for event handlers, and have come up with the prototype:
class control_event_handler {
public:
    enum events {
        // contains ids for events that can be stored with the "set" function
    };
    template<class Func>
    control_event_handler(Func f) { f(*this); } // A function (object, pointer, or lambda) is passed to set the handlers.
    ~control_event_handler() {}
    template<class Func, events evt>
    void set(Func f); // Sets how to handle a specific type of event
    // Functions calling handlers go here
private:
    // How the function data is stored goes here
};

A reference to an event handler object would be passed on the creation of each window or control. The intent is something like the vtable used for finding the function to call for virtual classes, only providing more flexibility in regard to reusing a function for multiple handlers. Additional capabilities would be added for the release version, such as copy function(s) (something like those for locales would be good) to make creating similar handlers easier.
I still have not determined how to store the data for the event handlers, but it will likely be a series of function containers from one of the Boost libraries. I am not familiar enough with the function object libraries of Boost to really know which would be the best to use. Anyone care to provide an opinion about which would be best to use?
An event handler needs to be multicast, which means using boost::signals internally. Anything less is the same old limitation that inadequate GUI and component libraries always seem to settle upon, for some hideous reason.
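A minimal multicast sketch in the spirit of boost::signals, using std::function so it stands alone; a real design would also need disconnection and connection management, which boost::signals provides:

```cpp
#include <functional>
#include <vector>

// Any number of slots can connect; firing the signal calls them all
// in connection order. This is what "multicast" buys over a single
// stored callback.
template<class Arg>
class signal {
public:
    void connect(std::function<void(const Arg&)> slot) {
        slots_.push_back(std::move(slot));
    }
    void operator()(const Arg& a) const {
        for (const auto& s : slots_) s(a);
    }
private:
    std::vector<std::function<void(const Arg&)>> slots_;
};
```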

On Sat, Sep 25, 2010 at 21:04, Gwenio <urulokiurae@gmail.com> wrote:
[...] enum events { // contains ids for events that can be stored with the "set" function };
This approach is intrinsically not extensible. You should be able to attach arbitrary events, probably identified by 'event_type' - an object holding the information about the event.
template<class Func,events evt> void set(Func f); // Sets how to handle a specific type of event
The set() function allows attaching event handlers at run time; however, I'm not sure this is the correct approach. Maybe generating a table at compile time would be better. This is the static vs. dynamic dilemma, and as a C++ programmer I favor the static approach. Q: Who owns control_event_handler? Is it an O(NM) or O(N+M) approach? It looks like O(NM), because you perform a call on f without specifying the target of the event. If it is O(NM), I'm not going to use it.

The same handler could be assigned to multiple windows or controls, so unless the programmer chooses to create a new handler for each object that needs one, it would be O(N+M). Reference counting, as with facets, would be a good thing, though if there are construction/destruction events then ref counting could be tied to those (depending on how control-specific data is associated). The call on f is to set up the functions, not to signal an event; that constructor is for static objects, so all the events can be set without passing all the functions as parameters. Each event would pass the target along with the message data, similar to how window procs on Windows receive the HWND of the receiver of the message. Regarding extensibility, the handler is meant to deal with system messages; therefore, at best, for custom messages there would be a way to install translators that convert unknown messages into generic events such as command and notify (message flow: system -> standard or custom translator -> event handler -> application). This is because the code processing the system messages would need to know how to format the data for the event handler, and if it cannot, then the ability to add custom events to the handler would be useless. Furthermore, it seems redundant to me to have both custom translations and custom event types. The reason behind dynamically creating the handler is that, other than having the handler be a virtual class, no method of creating it at compile time comes to mind, and I was trying to find a better way because it seems that many feel that is not good enough (or at least not with some implementations of it, like in MFC). Would you care to suggest a way?

Question regarding the design of the event handler: should there be one generic class that is designed to hold event handling data, or should event handlers derive from an abstract base class? From the viewpoint of the implementation, all event handlers need to be the same. Having an abstract base class for event handlers would make it easier to meet the various requirements people have, as they can define their own. So the question is: what reasons, if any, are there that make using an abstract base class a bad design choice? And what are the drawbacks to using it, if they exist? Also, if handlers are derived from an abstract class, should controls be associated with both a handler and some object that stores data, or should those two things be one and the same? Once these questions are answered, it will be time to focus on specific requirements and the implementation of the library.

On Sun, Sep 26, 2010 at 18:08, Gwenio <urulokiurae@gmail.com> wrote:
[...] Regarding extensibility, the handler is meant to deal with system messages; therefore at best for custom messages there would be a way to install translators that would convert unknown messages into generic events such as command and notify (message flow: system -> standard or custom translator -> event handler -> application). This is because the code processing the system messages would need to know how to format the data for the event handler, and if it cannot then the ability to add custom events to the handler would be useless. Furthermore it seems redundant to me to have both custom translations and custom event types.
We want to implement event handling. If so, it doesn't matter what events we pass, system events or custom events; a general solution will handle them both.
The reason behind dynamically creating the handler is that, other than having the handler be a virtual class, no method of creating it at compile time comes to mind, and I was trying to find a better way because it seems that many feel that is not good enough (or at least not with some implementations of it, like in MFC). Would you care to suggest a way?
There is nothing bad with a virtual class. You can use this approach to create event handlers at compile time:

class MyEventHandler {
public:
    typedef tuple<
        all_mouse_events, // this is a group of different mouse events. we can (?) use SFINAE to call only those that are overridden, or perhaps default to a do-nothing handler if there is no override for e.g. mouse_down_event, which is simpler to implement.
        // mouse_move_event, // or specify them one by one
        system_native_event // native events, untranslated
    > handled_events;

    void on_event(const mouse_move_event& e) { ... }
    void on_event(const system_native_event& e) { ... }
};

class MyEventHandler2 { /* .. */ };

And then you can compose all these:

typedef compose_handlers<tuple<MyEventHandler, MyEventHandler2>> MySuperHandlers;

Or something like that. compose_handlers derives from all the handlers passed and from event_handler. event_handler is a virtual class with a dispatch_event function. compose_handlers implements dispatch_event:

template<class EventList...>
class compose_handler : public event_handler, public EventList... {
public:
    virtual void dispatch_event( ??? ) {
        // for each event type/group in linearized EventList:
        //     if the type matches, cast to it and call on_event(casted_event_reference);
    }
};

Some variations are possible, mainly inheritance vs. typedefed lists and static vs. non-static on_event. Note: without compile-time reflection or ugly preprocessor macros we can't avoid specifying handled events twice in *any* approach. We have to specify them once for the handler function and once for the list of events we actually handle. However, using grouping like the above we can simplify this a bit. One question is how to dynamically dispatch the event to the right handler function type. Possible ways:
* Use dynamic_cast. Pros: grouping of event types is possible (mouse_move can inherit mouse_event, so on_event(mouse_event) can handle mouse_move events too); the user doesn't need to maintain event ids. Cons: slow; events must be virtual classes.
* Use typeid. Pros: the user doesn't need to maintain event ids; faster (?) than dynamic_cast; can be done without virtual classes, since the one who dispatches the event knows the exact type. Cons: no grouping of events. I don't know what implementations do when dynamic linking is used, but I guess it may still be slow on some implementations.
* Use the address of a global object as the id: will this even work with dynamic linking?
* Use statically generated event ids. Pros: as fast as possible. Cons: the burden of ensuring uniqueness falls on the user.
On Fri, Oct 15, 2010 at 21:27, Gwenio <urulokiurae@gmail.com> wrote:
[...] Also, if handlers are derived from an abstract class, should controls be associated with both a handler and some object that stores data, or should those two things be one and the same?
Why do you think that there must be a single answer for this question? I think that depends on the control. Some controls may not be C++ objects at all, e.g. native button may send command events. We may want to handle events from grid cells which are defined implicitly, etc... Once these questions are answered, it will be time to focus on specific
requirements and the implementation of the library.
There are more questions to answer. Also someone here said that he is up to implementing a prototype of his ideas, it would be nice to know what his progress is and what his ideas are. CLARIFICATION: everything I said is provided "as is" and without any warranty. It's just brainstorming and isn't guaranteed to be logical or make any sense.

On Sat, Oct 16, 2010 at 6:52 AM, Yakov Galka <ybungalobill@gmail.com> wrote:
We want to implement event handling. If so it doesn't matter what events we pass, system events or custom events. A general solution will handle them both.
There is nothing bad with virtual class.
You can use this approach to create event handlers at compile time:
class MyEventHandler { public: typedef tuple< all_mouse_events, // this is a group of different mouse events. we can (?) use SFINAE to call only those that are overridden. or perhaps default to a do nothing handler if there is no override for e.g. mouse_down_event, which is simpler to implement. // mouse_move_event, // or specify them one by one system_native_event // native events, untranslated > handled_events;
void on_event(const mouse_move_event& e) { ... } void on_event(const system_native_event& e) { ... } };
class MyEventHandler2 { /* .. */ };
And then you can compose all these:
typedef compose_handlers<tuple<MyEventHandler, MyEventHandler2>> MySuperHandlers;
Or something like that. compose_handlers derives from all the handlers passed and from event_handler. event_handler is a virtual class with dispatch_event function. compose_handlers implements dispatch_event:
template<class EventList...> class compose_handler : public event_handler, public EventList... { public: virtual void dispatch_event( ??? ) { for each event type/group in linearized EventList: if type matches, cast to it and on_event(casted_event_reference); } };
Some variations are possible, mainly inheritance vs typedefed lists, static vs non-static on_event.
Note: Without compile time reflection or ugly preprocessor macros we can't avoid specifying handled events twice in *any* approach. We have to specify them once for the handler function and once for what events we actually handle. However using grouping like above we can simplify this a bit.
One question is how to dynamically dispatch the event to the right handler function type. Possible ways: * Use dynamic_cast. Pros: grouping of event types is possible. mouse_move can inherit mouse_event so on_event(mouse_event) can handle mouse_move events too. User doesn't need to maintain event ids. Cons: Slow. events are virtual classes. * Use typeid: Pros: User doesn't need to maintain event ids. Faster (?) than dynamic_cast. Can be done without virtual classes: the one who dispatched the event knows the exact type. Cons: No grouping of events. I don't know what implementations do when dynamic linking is used but I guess it still may be slow on some implementations. * Use address of global object as Id: Will this even work with dynamic linking? * Use statically generated event ids. Pros: as fast as possible. Cons: The burden of ensuring uniqueness falls on the user.
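The typeid option could be sketched like this, using std::type_index as the map key so the user never maintains event ids; the event types and class names are hypothetical:

```cpp
#include <functional>
#include <typeindex>
#include <unordered_map>

struct mouse_move_event { int x, y; };
struct key_event { int code; };

// Each event type keys a type-erased handler. dispatch() relies on the
// caller knowing the exact static type, so no virtual classes are needed,
// but (as noted above) grouping via base classes is lost.
class dispatcher {
public:
    template<class E>
    void on(std::function<void(const E&)> f) {
        table_[std::type_index(typeid(E))] =
            [f](const void* p) { f(*static_cast<const E*>(p)); };
    }

    template<class E>
    bool dispatch(const E& e) const {
        auto it = table_.find(std::type_index(typeid(E)));
        if (it == table_.end()) return false;  // no handler: drop the event
        it->second(&e);
        return true;
    }

private:
    std::unordered_map<std::type_index, std::function<void(const void*)>> table_;
};
```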
I was thinking more along the lines of: the event handler that the back end deals with would already define the standard events, and would have a platform-specific function to call for unknown events. The handler for unknown events would then process custom messages and do whatever is going to be done with them. If adding custom translations is possible, then they can pass non-standard messages if they want to; however, doing more than allowing for the translation of unknown messages would make the translation step far more complex (or at least impose greater overhead), if it can be done at all. Having just one method in the event handler adds to the processing time, since formatting the message into the correct type would require the message to have already been identified, and it would then need to be identified again.
Why do you think that there must be a single answer for this question? I think that depends on the control. Some controls may not be C++ objects at all, e.g. native button may send command events. We may want to handle events from grid cells which are defined implicitly, etc...
Because this relates to the structure of the class that will store the information for working with the native objects for all controls. So, is the likelihood that someone will want to pair the object containing control-specific data with a separate event handler object great enough to be worth however much memory a pointer takes? To be clear, this has nothing to do with separating how data is stored for a control, like a tree view, from the definition of how the control behaves. That can be done regardless of which way this goes.
There are more questions to answer. Also someone here said that he is up to implementing a prototype of his ideas, it would be nice to know what his progress is and what his ideas are.
Of course there is more to decide, but the remaining questions are more specific, like whether there should be creation/destruction events and where the origin of the coordinate system should be.

One more consideration that came up is whether the event handler class and the 'context' class (the one that holds the data for working with the native objects) should be one and the same or separate. When not using virtual functions for event handling it is obviously better to separate the two, but when using virtual functions there is not much difference in terms of overhead (with combining them being the slightly better choice). Because it is so close (only one additional pointer), I was wondering if there are any advantages to still keeping them separate in terms of design. Thoughts?
participants (22)
- Alec Chapman
- Alexander Lamaison
- Artyom
- Cory Nelson
- David Sankel
- ecyrbe
- Edward Diener
- Eoin O'Callaghan
- Germán Diago
- Giorgio Zoppi
- Gottlob Frege
- Gwenio
- Joel Falcou
- Mathias Gaunard
- Matus Chochlik
- OvermindDL1
- Robert Ramey
- Sebastian Redl
- Simonson, Lucanus J
- Smith, Jacob N
- Stefan Seefeld
- Yakov Galka