
AMDG

Hervé Brönnimann wrote:
Steven: You seem to be following a partial differential approach, where you record the partial derivative of an expression wrt each of the variables that compose it. Your particular approach uses a sparse representation of the vector of derivatives wrt all the variables in your system. Wrt the evaluation of the uncertainty (sqrt(0.2*0.2 + 0.2*0.2) ≈ 0.28 in your example), this just corresponds to a norm of that vector, here the Euclidean norm, with a twist that you're not dividing by the total number of variables (not even by the number of variables actually involved in your expression). (Note that with a finite number of variables, all norms are equivalent anyway.) It's not an especially new idea. If you go to second derivatives, you can do the same with the Hessian matrix of your expression.
It should in theory be possible to generalize this to an arbitrary number of derivatives, specified as a template parameter, right?
The discussion then becomes sparse vs. dense representation. Your idea seems to boil down to using a sparse representation of this otherwise well-known approach. Am I missing something?
So, I've just reinvented yet another wheel... Ok. What are the problems with this approach? For me the most important thing is that I can hide the messy calculations behind a nice API. The sparse representation makes this easy.
BTW, error propagation can also be handled by Boost.Interval. Note, I am not talking about tracking dependencies (that's the same kind of thing that makes square(x) have a smaller interval enclosure than x * x). In general, for small errors, there isn't much difference, and we've used interval analysis exactly for that purpose, assuming all the variables are always independent (you get a guaranteed enclosure either way, just not as tight as it could be if you had known the dependencies).
I happen to need dependency tracking.

In Christ,
Steven Watanabe