
On 12/18/2012 1:23 AM, Marcus Tomlinson wrote:
Hi Topher, thanks for the post :)
1) /Simplicity is a simplistic goal/. Generally, good design means that doing simple things is simple, and that doing more complex things is proportionately (and no more than proportionately) more complex. This may mean that there are many "components" (methods, parameters, libraries, classes, or whatever) and options, but that at any given time most can be ignored. Part of the design process is to be clear about what can be ignored when, and part of the implementation is documentation (whatever form that takes) that makes it easy to focus on what one needs for a task while being barely aware that there is more available.
I hope that I am at least close to achieving this. DSPatch was designed with just this in mind. Even the documentation was structured so as to describe only the bare essentials of using DSPatch up front, while providing for additional functionality built on those essentials.
That's a start, but the minimal documentation for the remainder, and the lack of a good way to understand why and when you would want each additional piece, mean that there is a sudden, very sharp upward kink in the problem-complexity vs solution-complexity graph. As it stands, there look to be too many internals (documented only by code) to understand why you would want to use this or that. Generally they seem to be justified by optimization concerns rather than by an increase in functionality, and that requires an understanding of what the system needs help optimizing, and why.
One of the tools for accomplishing this is the careful selection, early on, of an explicit set of use cases. Careful use of defaults helps too, especially defaults that interact intelligently with explicit settings (this can lead to a system that, from a usage viewpoint, "just does what is expected", but which can be a bear to implement and formally describe, with lots of "unlesses" and "with this combination of factors this, and with this combination that"), as do things like policies and pre-specified frameworks that specify lots of things all at once.
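To make the "defaults interacting with explicits" idea concrete, here is a minimal C++ sketch -- the names (CircuitConfig, threadCount, bufferCount) are hypothetical, not DSPatch's actual API. An unset buffer count silently follows the thread count, which is convenient in use but adds exactly the kind of "unless" described above to any formal description:

    #include <cstddef>
    #include <iostream>

    // Hypothetical configuration, not DSPatch's API: bufferCount
    // defaults to "same as threadCount" unless set explicitly.
    struct CircuitConfig
    {
        std::size_t threadCount = 1;
        std::size_t bufferCount = 0;  // 0 means "derive from threadCount"

        std::size_t effectiveBufferCount() const
        {
            return bufferCount != 0 ? bufferCount : threadCount;
        }
    };

    int main()
    {
        CircuitConfig c;
        c.threadCount = 4;                              // explicit
        std::cout << c.effectiveBufferCount() << "\n";  // 4 (follows threadCount)
        c.bufferCount = 2;                              // now explicit too
        std::cout << c.effectiveBufferCount() << "\n";  // 2
    }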
With a framework that is (or at least tries to be) generic, it is hard to describe or list use cases. There are so many possible uses for DSPatch that I cannot be aware of them all. I understand what you're saying though, and once again, I hope that with my tutorials and example application I am achieving this. If not, please let me know what's lacking.
I'm talking about use cases for design. It's quite likely that many of these would never be implemented. Also, examples and tests are generally much simpler systems than real ones -- that is required to make them quickly understandable. You certainly didn't spend a lot of thought on how to make generating two random Booleans and ANDing them together easy and efficient. Inevitably, any designer is going to have some thoughts about how even a generic package is going to be used, which essentially amount to vague, unwritten use cases. That means there are unexaminable assumptions. Creating explicit use cases -- representing different patterns of usage that are deemed important -- makes the assumptions more obvious, and may also suggest other use cases. A use case is not necessarily a description of an application, though that is frequently useful in pinning it down. It might be (a somewhat more elaborate version of) something like "A large number of simple nodes, each with relatively few inputs and outputs, simple signals, only about 20% of the nodes activated on any given cycle, with linear dependency chains that, however, vary radically from cycle to cycle." That might, for example, describe a controller of some sort specified at the logic-gate level. It is generic, but one can still look at what could be done to optimize such cases, try to understand how those optimizations might interact with other use cases, and consider what is needed to make any such application easier to implement.
I really should not be able to attach an RGB wire to an input meant to process an aerial heading just because they both use a 3-tuple of numbers for representation. It's conceivable that one could generate tons of nonsense data without detection this way.
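For illustration (a sketch, not DSPatch's actual mechanism): with statically typed ports, such a connection would be rejected at compile time even though both payloads are three floats underneath. The Rgb, Heading, Output, Input and connect names here are all hypothetical:

    #include <array>

    // Give each kind of signal its own type, so that structurally
    // identical data cannot be cross-wired.
    struct Rgb     { std::array<float, 3> v; };  // red, green, blue
    struct Heading { std::array<float, 3> v; };  // e.g. yaw, pitch, roll

    template <typename T> struct Output { T value{}; };
    template <typename T> struct Input  { T value{}; };

    // A connection only type-checks when both ports carry the same type.
    template <typename T>
    void connect( const Output<T>& out, Input<T>& in )
    {
        in.value = out.value;
    }

    int main()
    {
        Output<Rgb>    color;
        Input<Rgb>     display;
        Input<Heading> autopilot;

        connect( color, display );      // fine: Rgb to Rgb
        // connect( color, autopilot ); // compile error: Rgb is not Heading
    }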
I guess I could argue that in a real-world situation, if you connect two terminals that source and sink data packets of the same structure (header, CRC, etc.), those packets would be accepted regardless of the payload (nonsense to the receiver or not). However...
Now that shows the effects of an implicit (not necessarily wrong) set of use cases. You feel that typical "real world" uses would involve complex, specialized, structured objects as signals, such as streaming audio packets straight off a network. My intuition (obviously not backed up by a formal use case analysis) is that a significant sub-group of usages would use much simpler, generic data types -- strings, numbers, booleans, enums, tuples, vectors (for example, representing labeled messages), etc. These might represent control signals, power levels, "facts", etc. There are also more complex generic kinds of data, for example statistical distributions. Distributions are complex entities, but they might represent multiple different kinds of quantities within the same application (fluctuating temperatures, pressures, and fluid flow rates, for example). Even a system that primarily uses more complex signal types could use these more generic kinds of signals as well, representing things like system clock ticks (I mean that as a domain quantity, rather than the network cycles), error counts, activations, etc. Tell me -- how often in your conventional programming do you pass numbers, bools, strings, etc. to procedures, rather than encapsulating each in a specific class with unique access protocols that represent the intended use: Age, Name, IdentificationNumber, ResourceAvailable, etc.? Actually, just looking at your interface gives me the answer -- as often as most of us. Thinking about it, I think my sense of this being a likely pattern of usage is based on a sense that an important use case (or set of use cases) would be discrete simulations. It's not the most likely thing I would use it for, but it seems like a natural. After all, Object Oriented Programming was originally invented for this purpose, and this matches almost precisely how such systems are conceptualized.
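As a sketch of that pattern (hypothetical types, not DSPatch's API), a single generic payload can carry the few plain kinds of data a discrete simulation might pass between nodes:

    #include <iostream>
    #include <string>
    #include <variant>

    // One generic signal payload admitting a few plain data kinds,
    // rather than one specialized class per domain quantity.
    using Signal = std::variant<bool, int, double, std::string>;

    void deliver( const Signal& s )
    {
        std::visit( []( const auto& payload )
                    { std::cout << payload << "\n"; },
                    s );
    }

    int main()
    {
        deliver( Signal{ true } );                   // e.g. ResourceAvailable
        deliver( Signal{ 42 } );                     // e.g. an error count
        deliver( Signal{ std::string{ "tick" } } );  // e.g. a domain clock tick
    }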