
Darryl Green wrote:
I tried to send this yesterday, but there was some problem with my mailer. I see the idea of nulls has been discussed a bit by Matt Hurd and James Jones.
Eric Niebler wrote:
I think the current framework can handle this situation quite naturally. The offsets need not be integral multiples of the discretization. The offsets can be floating point, which can be used to represent exact times. Or, as Jeff Garland suggested, the offset could in theory be a posix ptime (not tested). That way you wouldn't have to pass around a separate vector representing the times, or make your data a tuple with times.
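(A minimal sketch of what floating-point offsets buy you, in plain C++ rather than the Boost.Time_series API; the container choice is purely illustrative. Each sample carries its own, possibly non-integral, offset, so no separate vector of times is needed.)

#include <iostream>
#include <utility>
#include <vector>

int main()
{
    // Each sample carries its own (possibly non-integral) offset, so there is
    // no separate vector of times and no fixed discretization step.
    std::vector<std::pair<double, double> > series;
    series.push_back(std::make_pair(0.0,  1.5));   // value  1.5 at offset 0.0
    series.push_back(std::make_pair(0.25, 2.0));   // value  2.0 at offset 0.25 -- not a multiple of any fixed step
    series.push_back(std::make_pair(1.7, -0.5));   // value -0.5 at offset 1.7

    for (std::size_t i = 0; i != series.size(); ++i)
        std::cout << "offset=" << series[i].first
                  << " value=" << series[i].second << '\n';
}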
It was only when I read this that I took another look at the docs and realized that your library is using a non-uniform sampling model. This is interesting, but could it make some algorithms hard to write and/or much less efficient?
<snip> I think you misunderstood, or I was unclear. The library accommodates both uniform and non-uniform sampling. In another reply, I wrote:
Yes, I think what is there already can be pressed into service to achieve the same effect. Conceptually, a TimeSeries (as defined by Boost.Time_series) is two sequences: the elements and the runs. The elements are the values of the time series, and the runs are a sequence of (offset, end offset) pairs [*]. And the offsets can be integral or floating point (or ptimes, I guess). So if I'm understanding correctly, all we need is a flexible way to view the elements of one time series given the runs of another. Does that make sense?
[*] Obviously, a time series doesn't need to be stored in memory that way, so long as it's able to present that interface. A sparse series need not store end offsets, and a dense series need not store any offsets at all -- the runs sequence can be generated on the fly.
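As a rough sketch of that two-sequence model, and not the library's actual interface (the type and member names below are made up for illustration): a sparse series can store just one offset per element and synthesize unit-length runs on demand, while a dense series stores no offsets at all and generates a run from an element's index.

#include <cstddef>
#include <utility>
#include <vector>

typedef double offset_type;
typedef std::pair<offset_type, offset_type> run;   // (offset, end offset)

// A sparse series stores one offset per element; end offsets are implied
// (unit-length runs), so the runs sequence can be synthesized on demand.
struct sparse_series_sketch
{
    std::vector<double>      values;   // the "elements" sequence
    std::vector<offset_type> offsets;  // one offset per element

    std::vector<run> runs() const
    {
        std::vector<run> r;
        r.reserve(offsets.size());
        for (std::size_t i = 0; i != offsets.size(); ++i)
            r.push_back(run(offsets[i], offsets[i] + 1));  // unit-length run
        return r;
    }
};

// A dense series stores no offsets at all; a run is generated on the fly
// from the start offset and the element's index.
struct dense_series_sketch
{
    std::vector<double> values;
    offset_type         start;

    run run_at(std::size_t i) const
    {
        offset_type o = start + static_cast<offset_type>(i);
        return run(o, o + 1);
    }
};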
Take the case of a dense_series. It has uniform sampling. Its "runs" sequence has the property of being "dense", meaning that each run is of unit length, and the run offsets are monotonically increasing. Obviously, finding a sample with a particular offset in a dense series is O(1), but not in a sparse series. Algorithms can test for density using a trait and select an optimal implementation. The default implementation would handle series with non-uniform sampling. It's all done using compile-time traits, so there is zero runtime overhead.

HTH,

--
Eric Niebler
Boost Consulting
www.boost-consulting.com
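A hedged sketch of that kind of compile-time trait dispatch, using illustrative names (is_dense, find_at, dense_sketch, sparse_sketch) rather than Boost.Time_series identifiers: the dense overload does an O(1) index computation, while the default overload searches explicit offsets and so works for non-uniform sampling.

#include <cstddef>
#include <type_traits>
#include <vector>

struct dense_sketch                  // uniform sampling: index == offset - start
{
    std::vector<double> values;
    double start = 0;
};

struct sparse_sketch                 // non-uniform sampling: explicit offsets
{
    std::vector<double> values;
    std::vector<double> offsets;
};

// Compile-time trait: by default a series is assumed non-uniform.
template<class S> struct is_dense : std::false_type {};
template<>        struct is_dense<dense_sketch> : std::true_type {};

// O(1) lookup, chosen when is_dense<S> is true.
inline double const* find_at(dense_sketch const& s, double off, std::true_type)
{
    if (off < s.start) return nullptr;
    std::size_t i = static_cast<std::size_t>(off - s.start);
    return i < s.values.size() ? &s.values[i] : nullptr;
}

// Default implementation: search the offsets (exact comparison is only for
// illustration); this works for any non-uniformly sampled series.
inline double const* find_at(sparse_sketch const& s, double off, std::false_type)
{
    for (std::size_t i = 0; i != s.offsets.size(); ++i)
        if (s.offsets[i] == off) return &s.values[i];
    return nullptr;
}

// The trait selects the overload at compile time -- zero runtime overhead.
template<class S>
double const* find_at(S const& s, double off)
{
    return find_at(s, off, typename is_dense<S>::type());
}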