
-----Original Message-----
From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Jim Douglas
> I may have used the words concrete->abstract axis. It might be a basis on which to rate the libraries with a "fuzzy" classification that indicates position on the axis and spread of coverage. E.g. Boost.Crc even contains some pre-defined ready-to-use concrete types, whereas at the other extreme Boost.MPL is an abstract framework.
And the pp-lib is a framework that is more general still.
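To see the concrete extreme of that axis in code, here is a minimal sketch using one of Boost.Crc's pre-defined, ready-to-use types; the input string is the standard CRC-32 check-value test, and the calls shown are the library's documented interface:

    // Boost.Crc at the concrete end of the axis: boost::crc_32_type is a
    // pre-defined CRC-32 computer that is usable as-is, with no configuration.
    #include <boost/crc.hpp>
    #include <cstddef>
    #include <iostream>

    int main() {
        char const data[] = "123456789";              // standard CRC check string
        std::size_t const length = sizeof(data) - 1;  // exclude the trailing '\0'

        boost::crc_32_type crc;
        crc.process_bytes(data, length);

        // prints cbf43926, the well-known CRC-32 check value for "123456789"
        std::cout << std::hex << crc.checksum() << '\n';
    }

There is no comparably "ready-made" type to instantiate in the MPL or the pp-lib; that absence is exactly what the axis is measuring.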
> From a purely practical POV here is the communications problem:
> I have a piece of software to construct and I have sufficient confidence that I can write it in C/C++ in N weeks from the ground up.
> How can I extract enough information from the Boost docs to identify those Boost libs that are applicable to my problem domain?
A library approaching full generality is applicable to just about *every* domain, but that doesn't mean it should always be used. Whether or not it should be has to be determined on a case-by-case basis, in comparison with other possibilities.
> How can I then evaluate those libraries so identified, enough to be convinced that the full *learning* time plus deployment time will result in the project lasting N weeks or less?
You can't because it doesn't work that way. For libraries like these, you learn them to increase your general toolset which can later be applied to any number of projects. Note that this is a lot like learning a language and the accompanying techniques outside of the context of a real-world project with real-world deadlines.
Examples (as opposed to use cases) in the documentation exist mainly as an attempt to induce the necessary lateral thought, not to show what problems the library solves.
> Examples - yes, I can't get enough of them. A couple of small caveats:
> a) There is a danger that you might not relate to an example if you can't see that by changing a few things you can put it to good use in your particular context. OK so you can't please all of the people all of the time.
The danger is not that you might not relate to an example. The danger is that you might not "get it"--meaning not that you don't understand what the example does, but that you might not achieve the perspective (or the beginnings of the perspective) that the example hints at. Does this mean that fewer people use such a library? Yes, it does, and, in its way, that is a good thing. Because such a library can be used just about anywhere, it takes both a reasonable amount of understanding and a reasonably correct perspective to apply it wisely. It shouldn't be thrown at every possible problem where it could be used. Instead, a smaller number of people (who know what they're doing) use libraries like this to implement other things (often still fairly general) that can be used effectively by more people.

As a practical example of this effect, the pp-lib can be used to generate code for a template metaprogramming framework (e.g. the MPL). The MPL can, in turn, be used to implement a binary literal utility (see Dave and Aleksey's book; a sketch follows below). The binary literal utility can be used to implement some sort of message-cracking function. Each step down this path represents a decrease in generality, and with each decrease, the number of people that are likely to use the result correctly increases (sometimes dramatically). This occurs because the higher the generality, the higher the number of possible ill-conceived uses (as well as the number of well-conceived uses). The person that uses the binary literal utility (which is fairly easy to use) is also ultimately using the pp-lib, though indirectly.

What I'm getting at is that although the number of direct users of something as general as the pp-lib or the MPL may be small, the number of indirect users may be much greater. None of this is about documentation per se; the point is that the goal of a library is not always a maximum number of direct users.
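To make the middle step of that chain concrete, here is a minimal sketch of such a binary literal utility, in the spirit of the one in Dave and Aleksey's book (the names are illustrative, and digit checking is omitted for brevity):

    // A compile-time "binary literal": the decimal digits of N are
    // reinterpreted as bits, so binary<101>::value is 5.
    template <unsigned long N>
    struct binary {
        // peel off the least significant decimal digit and treat it as a bit
        static unsigned const value = binary<N / 10>::value * 2 + N % 10;
    };

    template <>
    struct binary<0> {  // terminates the recursion
        static unsigned const value = 0;
    };

    int main() {
        // both values are computed entirely at compile time
        return (binary<101>::value == 5 && binary<1101>::value == 13) ? 0 : 1;
    }

A user of this utility writes binary<1101>::value and never touches the pp-lib or the MPL directly--which is precisely the indirect use described above.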
> b) Examples need to be grounded in the real world - which tenuously links to use cases. 'foo/bar' examples should be banned :-)
Presumably you mean "contrived" examples. I think that depends heavily on what a piece of code is supposed to be an example of. If you make an example too complex, the point that it is trying to make gets lost in its particular details. The example itself then becomes a cookbook interface when it is really supposed to be an example of something else. Using an example to bring a very general interface (or group of interfaces) all the way down the generality axis to a particular use case virtually requires that the example be complex. Even something as simple as a binary literal example requires dealing with domain-specific issues (like octal literals--see the sketch below) that have nothing to do with the library being used to implement it.

The problem that I have with your viewpoint is that you seem to expect even very general libraries to show where they're useful in concrete, specific examples. By definition, a finite set of use-case examples doesn't even come close to being sufficient. A different tack is needed: examples for a highly general library should focus on the perspective and creativity required to apply it to an unbounded number of specific situations. The purpose of such examples isn't to enumerate use cases; it is to induce the perspective shift that, if attained, allows users to _identify_ use cases. If that is what an example is for, whether it is practical or contrived is a non-issue.

I don't, OTOH, have a problem with your viewpoint when it is applied to less general libraries where the number of use cases is relatively small or falls into a relatively confined scope. I just don't think that the whole concept of "how to do documentation" fits into a unilateral mold for all types of libraries. I don't think that the documentation for highly general libraries like the MPL or the pp-lib should be "dumbed down" (i.e. made concrete enough to be understandable by all users). I actually think that that is dangerous: such tools can create gigantic messes if not used wisely (e.g. if they get used mainly for the sake of being clever).

For those that are actually interested in expanding their perspective (as opposed to just getting a job done), learning these kinds of tools should fall outside of a "project that is supposed to take N weeks or less", because such learning implies not just learning what each interface does, but learning how to identify use cases when you come across them. Documentation that induces that ability in users is a lot harder to write than a casual first glance might suggest. It is worth pointing out that, for such libraries, the reference documentation alone (with only contrived examples) is enough for someone that is genuinely motivated to learn these things. I'm not the original author of the pp-lib, for example, yet I understand it completely and know how to identify use cases when I come across them--simply because I invested the time and effort to shift the way that I view things (and I'm not the only one). If more C or C++ programmers were familiar with functional programming, with metalevel programming (such as the macro systems in Lisp or Scheme), and with lazy evaluation (such as in Haskell), rather than just with imperative, runtime-level, strict-evaluation programming, this stuff would be a lot easier to understand and apply.
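To make the octal issue concrete, here is a variant of the sketch above with a compile-time digit check; the guard is an addition of mine (via Boost.StaticAssert), not something from the book:

    // A literal written with a leading zero is parsed by the compiler as
    // octal before the template ever sees it: binary<011> receives the
    // decimal value 9, not "binary 11". A digit check catches most such
    // mistakes at instantiation time.
    #include <boost/static_assert.hpp>

    template <unsigned long N>
    struct binary {
        BOOST_STATIC_ASSERT(N % 10 < 2);  // reject any digit other than 0 or 1
        static unsigned const value = binary<N / 10>::value * 2 + N % 10;
    };

    template <>
    struct binary<0> {
        static unsigned const value = 0;
    };

    // binary<101>::value == 5   (fine)
    // binary<011>::value        (fails to compile: 011 is octal 9)

Note that this wrinkle is a lexical detail of C++ literals; it has nothing to do with the MPL or the pp-lib, yet any honest example has to deal with it.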
It really is a pity that the imperative paradigm is, by definition, a superset of the functional paradigm, yet so much of the functional paradigm goes unused or forgotten. Don't misunderstand me: I am not one who thinks that the functional paradigm is superior. In fact, I think that the functional paradigm does not adequately represent reality and that, as often as not, a functional solution requires a great deal of logical contortion. [Some models in software are naturally imperative while others are naturally functional. Any time you use an imperative design to solve a functional model, or vice versa, the result is logical contortion that complicates the design. Sometimes that complication is worth it for other reasons (e.g. efficiency or provability)--there are always tradeoffs.] Nevertheless, there is a lot of good material that is common in functional designs and that shouldn't be ignored, but largely has been.

Regards,
Paul Mensonides