
Steven Watanabe wrote:
AMDG
Eric Niebler <eric <at> boost-consulting.com> writes:
How would you suggest ordered_append be implemented?
Mostly copied from dense_ordered_inserter:
<snip> Hmm, yes, an in-place append would be a good addition. Agreed.
Imagine the series storage is std::map<offset, value>. Inserting one value with insert(value) is O(log N). Inserting N values this way is O(N log N). But if you use insert(begin, end), where begin and end are iterators to sorted data, the whole insert is faster: O(N).
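To make the difference concrete, here's a small self-contained illustration -- std::map standing in for the series storage, with made-up sample data:

    #include <cstddef>
    #include <map>
    #include <utility>
    #include <vector>

    typedef std::map<int, double> series_storage; // offset -> value

    int main()
    {
        // Pre-sorted (offset, value) samples; placeholder data.
        std::vector<std::pair<int, double> > sorted_samples;
        for(int i = 0; i < 1000; ++i)
            sorted_samples.push_back(std::make_pair(i, 0.5 * i));

        // One element at a time: each insert() is O(log N), so the
        // whole run costs O(N log N).
        series_storage one_at_a_time;
        for(std::size_t i = 0; i < sorted_samples.size(); ++i)
            one_at_a_time.insert(sorted_samples[i]);

        // Range insert: the C++03 standard guarantees linear
        // complexity when the range is already sorted by the map's
        // comparator.
        series_storage in_one_go;
        in_one_go.insert(sorted_samples.begin(), sorted_samples.end());

        return 0;
    }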
It is in fact possible to fill a binary tree by pushing sorted data in linear time.
<snip> I don't doubt it. std::map does it for its range-based insert, so it must be possible. And this may seem like splitting hairs to you, but what I said is true -- you can't efficiently build a std::map by pushing in sorted data one element at a time, because the interface doesn't allow for it. You have to pass it a range of sorted elements. You may say, "Well, then don't use a std::map to build a time series!" Fair enough. But when a perfectly satisfactory data structure like std::map cannot be adapted to satisfy a concept, it suggests a faulty concept.
The point is, you should be able to write ordered data to a series efficiently without any concern for how that data is being stored, or even whether it's being stored.
I don't think you should have *only* a range insert. I think an output iterator that behaves more like std::back_inserter is necessary too.
An in-place append, yes.
But the lower-level stuff is still needed so that series types can be efficiently built in generic code (like the time series algorithms, for instance), which can't make any assumptions about the series type.
I agree that we need the lower level stuff. I just disagree about *what* lower level stuff.
You're suggesting to replace the ordered_inserter (actually an ordered_assign) with an ordered_append? And perhaps also a clear() member? Or keep ordered_assign? And (crucially) you would do away with commit() on both ordered_assign and ordered_append? Or would you keep them on the low-level interface and add a higher-level interface that called commit() for you and handled the 90% case?

The thoughts I've heard that I like so far are:

- add an ordered_append
- rename ordered_inserter to ordered_assign
- add a higher-level interface to make them both easier to use

The thought I'm not sold on is:

- get rid of commit() on the lower-level interface.

<snip>
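For concreteness, here's a rough sketch of what that vocabulary could look like over map-backed storage -- names and signatures are illustrative only, not the actual library interface:

    #include <map>
    #include <utility>
    #include <vector>

    // Illustrative only: buffers in-order writes and hands the whole
    // sorted run to the storage at once when commit() is called.
    template<typename Offset, typename Value>
    class ordered_append
    {
    public:
        explicit ordered_append(std::map<Offset, Value> &storage)
          : storage_(&storage)
        {}

        // The real thing would expose output-iterator syntax; a plain
        // member function keeps the sketch short. The caller promises
        // ascending offsets.
        void append(Offset where, Value const &what)
        {
            buffer_.push_back(std::make_pair(where, what));
        }

        // Signals end-of-input: a buffering store flushes here, with a
        // linear range insert since the buffer is already sorted. An
        // ordered_assign would differ only by clearing the storage first.
        void commit()
        {
            storage_->insert(buffer_.begin(), buffer_.end());
            buffer_.clear();
        }

    private:
        std::map<Offset, Value> *storage_;
        std::vector<std::pair<Offset, Value> > buffer_;
    };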
Actually what I would like most is
    ordered_inserter inserter(series);
    for(;;)
        *inserter++ = make_tuple(poll_some_value(), last_time, current_time);
With the additional guarantee that you can process the partial results at any time.
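One way to read that request, sketched here over plain std::map storage (live_ordered_inserter and the pair-valued samples are inventions for the example, not the proposed interface):

    #include <iterator>
    #include <map>
    #include <utility>

    // A back_inserter-style output iterator whose writes land in the
    // storage immediately, so partial results are readable at any time.
    template<typename Offset, typename Value>
    class live_ordered_inserter
      : public std::iterator<std::output_iterator_tag, void, void, void, void>
    {
    public:
        explicit live_ordered_inserter(std::map<Offset, Value> &storage)
          : storage_(&storage)
        {}

        // Each assignment goes straight in; hinting with end() makes an
        // in-order insert amortized constant on typical implementations,
        // so an ascending run of N samples still costs O(N) overall.
        live_ordered_inserter &operator=(std::pair<Offset, Value> const &sample)
        {
            storage_->insert(storage_->end(), sample);
            return *this;
        }

        // Output-iterator boilerplate: *it and ++it are no-ops.
        live_ordered_inserter &operator*()     { return *this; }
        live_ordered_inserter &operator++()    { return *this; }
        live_ordered_inserter &operator++(int) { return *this; }

    private:
        std::map<Offset, Value> *storage_;
    };

With something like that, the loop above becomes for(;;) *in++ = std::make_pair(current_time, poll_some_value()); and the partially filled map can be inspected between iterations, no commit() required.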
OK. It seems to me that such a thing could be built on top of an ordered_append that has a commit(), simply by calling commit() after every appended element. Would you agree?

Thought: it occurs to me that commit() is a bad name. It is evocative of transactional semantics, which is not necessarily accurate. What I'm looking for is a way to signal the end of an input sequence, such that *if* the series has decided to buffer the input, it now knows it has received it all. It doesn't *have* to buffer the input, though. Maybe done() or end() or even eof().

Anyway, the STL-ish assign(begin, end) doesn't have this problem because the end is specified explicitly, and it has extra benefits (the size may be known), but it doesn't mesh well with the signal-processing domain. In assign(begin, end), the container *pulls* the data. In signal processing, data is often read by polling and then *pushed* into a container, hence the desire for something like a back_inserter.

Time passes ...

If we ditched the assigners and the appenders and went with the range-based STL-ish interface, what would happen? Any convenient push-based interface we built on top would have to resort to sticking in one value at a time, just like std::back_inserter. Do you really feel that's an improvement? It's an honest question.

--
Eric Niebler
Boost Consulting
www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com