
I will have some time to contribute. Please let me know if you want some coding help.

Regards,
--Dev

On Sat, Dec 15, 2012 at 6:52 AM, Topher Cooper <topher@topherc.net> wrote:
I've only had time to read over the documentation quickly, but I found the discussion interesting, and I thought I would throw out some thoughts, with the caveat that I might have missed something.
1) /Simplicity is a simplistic goal/. Generally, good design means that doing simple things is simple and doing more complex things is more complex proportionately (and no more than proportionately) to the degree of additional complexity. This may mean that there are many "components" (methods, parameters, libraries, classes, or whatever) and options, but that at any given time most can be ignored. Part of the design process is to be clear about what can be ignored when, and part of the implementation is documentation (whatever form that takes) that makes it easy to focus on what one needs for a task and to be barely aware that there is more available (this, by the way, is my one complaint about the use of "literate programming" mechanisms -- there is a tendency to over-rely on them, with a resulting mass of undifferentiated information).
One of the tools for accomplishing this is careful selection, early on, of an explicit set of use cases. Others are careful use of defaults, especially defaults that interact intelligently with explicits (this can lead to a system that, from a usage viewpoint, "just does what is expected" but which can be a bear to implement and formally describe, with lots of "unlesses" and "with this combination of factors this, and with that combination that"), as well as things like policies and pre-specified frameworks that specify lots of things all at once.
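To make the point about defaults concrete, here is a minimal sketch -- the names are invented and it refers to nothing actually in the library -- of a configuration struct whose members all default sensibly, so the simple case stays a one-liner while every knob remains reachable:

    // Hypothetical sketch only -- none of these names come from the library
    // under discussion.  The idea: a config struct whose members all have
    // sensible defaults, so the simple case stays a one-liner while every
    // option remains reachable for the complex case.
    #include <cstddef>
    #include <iostream>
    #include <string>

    struct FilterConfig {
        std::size_t bufferSize = 256;   // default covers most use cases
        bool        autoTick   = true;  // "just does what is expected"
        std::string threadPool = "";    // empty = use the global default pool
    };

    class Filter {
    public:
        explicit Filter(FilterConfig cfg = {}) : cfg_(cfg) {}
        void describe() const {
            std::cout << "buffer=" << cfg_.bufferSize
                      << " autoTick=" << cfg_.autoTick << '\n';
        }
    private:
        FilterConfig cfg_;
    };

    int main() {
        Filter simple;                                  // the simple thing stays simple
        Filter tuned(FilterConfig{1024, false, "io"});  // complexity only when asked for
        simple.describe();
        tuned.describe();
    }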
2) /Single vs multiple inputs?/ There seems to be a natural way to handle a need for multiple inputs -- one interposes a component with an expandable set of individual inputs and a single output.
That's a good, well-integrated /mechanism/ but a weak /interface/. The reason is that the combining component is conceptually very closely tied to the input. I would suggest providing a library of common ways of handling multiple inputs ("tuplers", "ands", "ors", "averagers", etc., as well as the current "only one allowed"), and that these be "declared" (or, given the run-time reconfigurability, "redeclared") along with the input itself. Any wire addressed to the input actually gets attached to the combiner, and redeclaration would automatically pass the existing inputs to the new combiner.
Of course, in line with my previous comment, there should be a default, as well as a way to declare the default over some sense of scope.
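Roughly what I have in mind, sketched with made-up names (Combiner, Input, redeclare, and so on) rather than anything from the actual API: every input owns a combiner, wires attach to the combiner, and redeclaring the combiner carries the existing wires over.

    // Rough sketch only; all names are invented.
    #include <iostream>
    #include <memory>
    #include <numeric>
    #include <vector>

    struct Combiner {
        virtual ~Combiner() = default;
        virtual double combine(const std::vector<double>& wireValues) const = 0;
    };

    struct OnlyOneAllowed : Combiner {          // today's behaviour: a single wire
        double combine(const std::vector<double>& v) const override { return v.at(0); }
    };

    struct AverageCombiner : Combiner {         // one of the "averagers"
        double combine(const std::vector<double>& v) const override {
            return v.empty() ? 0.0 : std::accumulate(v.begin(), v.end(), 0.0) / v.size();
        }
    };

    class Input {
    public:
        explicit Input(std::unique_ptr<Combiner> c) : combiner_(std::move(c)) {}
        void attachWire(double value) { wireValues_.push_back(value); }        // wire -> combiner
        void redeclare(std::unique_ptr<Combiner> c) { combiner_ = std::move(c); } // keeps wires
        double read() const { return combiner_->combine(wireValues_); }
    private:
        std::unique_ptr<Combiner> combiner_;
        std::vector<double> wireValues_;        // stands in for the attached wires
    };

    int main() {
        Input in(std::make_unique<OnlyOneAllowed>());
        in.attachWire(3.0);
        in.attachWire(5.0);
        std::cout << in.read() << '\n';                    // 3 (first wire only)
        in.redeclare(std::make_unique<AverageCombiner>()); // existing wires carry over
        std::cout << in.read() << '\n';                    // 4 (average)
    }

Adding a new combining policy is then just another Combiner subclass, and the default/scope machinery from point 1 would apply to the choice of combiner as well.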
3) /Typed inputs, outputs -- and thus wires and signals?/ Flexibility and performance may be primary in your use cases, but that doesn't mean reliability has zero value. I really should not be able to attach an RGB wire to an input meant to process an aerial heading just because they both use a 3-tuple of numbers for representation. It's conceivable that one could generate tons of nonsense data without detection this way. Typing won't protect you, of course, from the result of attaching an RGB wire to the wrong RGB input, but decades of experience with strong typing have shown that it can radically cut down on the frequency of errors. (Note that this is not an argument against C++ template-style duck typing generally: its compile-time nature makes for different circumstances, especially in situations where there is less of a tendency to use general-purpose types with more specific operations -- e.g., R(), G() and B() rather than first, second, third or get<1>(), get<2>() and get<3>() -- than I suspect is the case with many uses of this package.)
Note that this is a run-time type -- whether it is implemented via run-time type labels or as a reflection (no pun intended) of the C++ type system is not the issue. Performance wouldn't be an issue, since the checking would occur when a wire is attached and wouldn't add any overhead during "ticks". If it is purely run-time, however, there wouldn't be any obvious way to enforce that the type of the value placed on an output has any relation to the type label. If it is tied to the C++ type system, on the other hand, dynamically modifying the output type of a component would not be possible. Maybe a mix would be the right thing, though I'm not sure exactly how that would be done.
Of course, I would think that a "no type specified" type, providing the present behavior, would be appropriate, and is a reasonable default. However, being able to specify a different default, within some scope, would be valuable as well -- if you are simulating logic circuits, then either 2-state or 3-state logic signals would be a good default, even if parts of the system (those dealing with edge-triggered circuitry, A/D/A, or analog sensors) need other types.
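A sketch of what the attach-time check might look like, again with invented names (Port, attachWire, typeTag); the empty tag plays the role of the "no type specified" default:

    // Sketch only; assumes a string tag per port, checked once at attach time,
    // so nothing is added to the per-tick cost.
    #include <iostream>
    #include <stdexcept>
    #include <string>

    struct Port {
        std::string typeTag;            // "" means "no type specified" -- today's behaviour
    };

    void attachWire(const Port& output, const Port& input) {
        const bool untyped = output.typeTag.empty() || input.typeTag.empty();
        if (!untyped && output.typeTag != input.typeTag) {
            throw std::runtime_error("type mismatch: " + output.typeTag +
                                     " -> " + input.typeTag);
        }
        std::cout << "attached\n";
    }

    int main() {
        Port rgbOut{"RGB"}, rgbIn{"RGB"}, headingIn{"AerialHeading"}, anyIn{""};
        attachWire(rgbOut, rgbIn);      // fine: tags match
        attachWire(rgbOut, anyIn);      // fine: untyped input accepts anything
        try {
            attachWire(rgbOut, headingIn);  // rejected at attach time, not per tick
        } catch (const std::exception& e) {
            std::cout << e.what() << '\n';
        }
    }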
4) /Compile-time vs run-time configuration?/ (a.k.a. declaration vs execution?) There seems to be the potential for large performance gains from fixed compile-time configuration and, in some cases, gains in reliability. On the other hand, it's clear that the primary intended use cases require more flexibility. It would seem that, once again, a hybrid system would be useful. What occurs to me is that one should be able to create components by combining "primitive" components (classes derived from DspComponent) at compile time, using the same concepts (wires, inputs, outputs) as the run-time system. Prototypes could be turned into fixed sets of components when fully understood and debugged, and compiled units could likewise be replaced by the same set of primitive components wired together dynamically. (In fact, it might be possible for the class representing the hard-wired version to automatically include a static method that could be called to create a soft-wired version of itself; an instance method, instead of or in addition to the static one, would allow a compiled component to be hot-swapped for a run-time configurable version of itself.)
That's a lot of work, of course, and I'm certainly not in a position to do it.
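Just to illustrate the shape of the idea, though, here is a toy sketch -- the names (Circuit, GainThenOffset, asSoftWired) are all invented and it glosses over everything hard -- of a hard-wired composite that can also describe itself as a soft-wired circuit:

    // Toy illustration of the hybrid idea, not the library's API.
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    struct Circuit {                         // stand-in for a run-time configurable network
        std::vector<std::string> components;
        std::vector<std::pair<int, int>> wires;
        void addComponent(const std::string& name) { components.push_back(name); }
        void connect(int from, int to) { wires.emplace_back(from, to); }
    };

    class GainThenOffset {                   // hard-wired composite: fixed members, direct calls
    public:
        GainThenOffset(double gain, double offset) : gain_(gain), offset_(offset) {}
        double process(double x) const { return x * gain_ + offset_; }  // no dispatch, no wires

        // The same topology, expressed as a soft-wired circuit that could be
        // reconfigured at run time (e.g. for prototyping or hot-swapping).
        static Circuit asSoftWired() {
            Circuit c;
            c.addComponent("Gain");
            c.addComponent("Offset");
            c.connect(0, 1);
            return c;
        }
    private:
        double gain_, offset_;
    };

    int main() {
        GainThenOffset fixed(2.0, 1.0);
        std::cout << fixed.process(3.0) << '\n';            // 7, fully compiled path
        Circuit proto = GainThenOffset::asSoftWired();      // same structure, reconfigurable
        std::cout << proto.components.size() << " components, "
                  << proto.wires.size() << " wire\n";
    }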
5) /Object-Oriented vs Generic Interface?/ -- I'm not going to take sides here, but it seems unlikely that the small overhead of run-time-bound calls would make much of a difference except in the limited case of a large network of simple components (e.g., a large, low-level logic-gate system) with high sequentiality and very little I/O or logging. In any other circumstances, I would say that the time for the indirection would be completely swamped by the component internals, by other kinds of system overhead, and by I/O. That doesn't mean that generic programming isn't preferable, only that the performance overhead of virtual calls isn't an argument for it unless one can show that a non-monitored, large, logic-gate-type system requiring high performance is an important design case.
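For reference, the two styles in miniature (invented names; the only point is where the binding happens, not a real benchmark):

    // Left: run-time binding through a virtual process(); right: a generic,
    // compile-time-bound version of the same loop.  The indirection on the
    // left only matters when process() itself does almost nothing.
    #include <iostream>
    #include <memory>
    #include <vector>

    // Object-oriented style: run-time bound.
    struct Component {
        virtual ~Component() = default;
        virtual double process(double in) = 0;
    };
    struct Doubler : Component {
        double process(double in) override { return in * 2.0; }
    };

    double runVirtual(std::vector<std::unique_ptr<Component>>& net, double x) {
        for (auto& c : net) x = c->process(x);   // one indirect call per component per tick
        return x;
    }

    // Generic style: compile-time bound, no indirection.
    template <typename C>
    double runGeneric(std::vector<C>& net, double x) {
        for (auto& c : net) x = c.process(x);    // calls resolved statically
        return x;
    }

    int main() {
        std::vector<std::unique_ptr<Component>> v1;
        v1.push_back(std::make_unique<Doubler>());
        std::vector<Doubler> v2(1);
        std::cout << runVirtual(v1, 3.0) << ' ' << runGeneric(v2, 3.0) << '\n';  // 6 6
    }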
Just some thoughts, hope someone finds them useful or at least interesting.
Topher