
Eric,
> It shouldn't be the series' job to keep a circular buffer of data for the algorithm to use. Rather, if the algorithm requires a buffer of previously seen data, it should cache the data itself,
Might it be worth a comment to the above effect in the documentation? It seems to be a fundamental principle of the library which, at least to me, wasn't clear.
> as in the rolling average implementation I sent around a few days ago.
I missed that until after I had posted.
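For the archives, my reading of the principle is that the algorithm owns whatever history it needs. A minimal sketch of the idea, with names of my own invention -- not the implementation you posted:

    #include <cstddef>
    #include <vector>

    // Hypothetical sketch: the algorithm, not the series, owns the
    // window of previously seen samples (a small circular buffer).
    // Assumes window >= 1.
    class rolling_mean
    {
    public:
        explicit rolling_mean(std::size_t window)
          : buf_(window, 0.0), sum_(0.0), next_(0), count_(0)
        {}

        // Feed one sample; returns the mean of the samples seen so
        // far, up to the last `window` of them.
        double operator()(double x)
        {
            sum_ -= buf_[next_];  // drop the sample falling out of the window
            buf_[next_] = x;      // cache the new sample ourselves
            sum_ += x;
            next_ = (next_ + 1) % buf_.size();
            if (count_ < buf_.size()) ++count_;
            return sum_ / count_;
        }

    private:
        std::vector<double> buf_; // circular buffer owned by the algorithm
        double sum_;
        std::size_t next_, count_;
    };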
> The Sequence concept on which all the time series are built requires readable and incrementable cursors. That means the time series algorithms *should* all work with "input", or single-pass, series types -- that is, ones with a destructive read. That would be the way to go IMO. I could see a time series type implemented in terms of std::istream that reads runs from std::cin, for instance. Or, more practically, one that memory-maps parts of a huge file and traverses it with single-pass cursors. That would be a very interesting time series! The algorithms haven't been tested with such a single-pass series, but I don't see a fundamental problem with it.
Excellent. Files normally contain multivariate data, though, so presumably it would require multiple series backed by a common object to do the memory mapping? Something like the sketch below is what I'm imagining.
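To make the use case concrete, here is the sort of destructive-read source I have in mind. It is a plain sketch with names of my own; I know it has nothing to do with the library's actual cursor interface:

    #include <iostream>
    #include <istream>

    // Hypothetical single-pass (destructive-read) source: each call to
    // advance() consumes one (offset, value) run from the stream, and
    // there is no way to rewind, so only single-pass cursors apply.
    struct istream_run_source
    {
        explicit istream_run_source(std::istream &in) : in_(in) {}

        // Read the next run; returns false at end of stream.
        bool advance(double &offset, double &value)
        {
            return static_cast<bool>(in_ >> offset >> value);
        }

    private:
        std::istream &in_;
    };

    int main()
    {
        istream_run_source src(std::cin); // e.g. runs piped on stdin
        double off, val;
        while (src.advance(off, val))
            std::cout << "run at " << off << " = " << val << '\n';
    }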
> I'm not 100% sure I understand your use case. But most of the series types and algorithms allow non-discrete sequences. That is, the offsets can be floating point. Could that help?
Yes, I had seen that, but I wasn't sure how it worked for sampled data. In my case I have multiple time series with a (common) sample time that varies stochastically between 40 and 60ms. It wasn't clear to me that the offsets could have a non-constant stride (whether integer or floating point). Even the sparse series seems to require a constant discretisation.
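For concreteness, my data looks roughly like the output of this self-contained sketch; nothing library-specific here, the irregular timestamps are the point:

    #include <cstdio>
    #include <random>
    #include <utility>
    #include <vector>

    int main()
    {
        // Simulate my situation: samples arrive with a stochastic
        // inter-sample time uniformly distributed in [40ms, 60ms].
        std::mt19937 gen(42);
        std::uniform_real_distribution<double> dt(0.040, 0.060);

        std::vector<std::pair<double, double>> samples; // (offset in s, value)
        double t = 0.0;
        for (int i = 0; i < 5; ++i)
        {
            t += dt(gen); // non-constant stride
            samples.emplace_back(t, i * 1.0);
        }

        // The offsets are floating point and irregularly spaced; the
        // question is whether the series types can carry them as-is.
        for (auto const &s : samples)
            std::printf("t = %.3f s, value = %.1f\n", s.first, s.second);
    }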
> Yup, no convolution yet. Sure would be nice. Patches welcome! :-)
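If I find the time I may take you up on that. As a starting point, a naive direct convolution over raw vectors would be something like the following -- just the textbook O(n*m) form, nothing tied to the library's series types:

    #include <cstddef>
    #include <vector>

    // Naive direct convolution of two uniformly sampled signals;
    // output length is n + m - 1 for non-empty inputs.
    std::vector<double> convolve(std::vector<double> const &x,
                                 std::vector<double> const &h)
    {
        if (x.empty() || h.empty())
            return std::vector<double>();

        std::vector<double> y(x.size() + h.size() - 1, 0.0);
        for (std::size_t i = 0; i < x.size(); ++i)
            for (std::size_t j = 0; j < h.size(); ++j)
                y[i + j] += x[i] * h[j];
        return y;
    }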
:-) Hugo