On 16/05/2017 20:28, Joaquin M López Muñoz wrote:
In any case, the runtime slicing detection will slow down ranges, as you'll need to check the typeid of each *it reference in the range.
Not in every case, see for instance the two insert overloads at:
https://github.com/joaquintides/poly_collection/blob/master/include/boost/po...
where you can notice that, when the type is terminal, we don't repeat the typeid check.
Is the iterator's "value_type" a guarantee of the dynamic type of the referenced object? I can imagine ptr_vector or Boost.Intrusive iterators, where value_type is the base class but the dynamic type of the object is a derived class. If the implementation wants to avoid any slicing and support these "special" iterators, then it should check the typeid of every value in the range.
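Purely as an illustration of the kind of range I mean (the sprite/warrior/goblin types and the snippet are placeholders of mine, not from the library's docs): the iterator's value_type is the base class, yet the dynamic types differ from element to element, so only a per-element typeid check can route each object to the right segment without slicing.

#include <boost/poly_collection/base_collection.hpp>
#include <boost/ptr_container/ptr_vector.hpp>

struct sprite  { virtual ~sprite() = default; };
struct warrior: sprite {};
struct goblin : sprite {};

int main()
{
  boost::ptr_vector<sprite> v;   // iterator value_type is sprite...
  v.push_back(new warrior{});
  v.push_back(new goblin{});     // ...but each element's dynamic type differs

  boost::base_collection<sprite> bc;
  bc.register_types<warrior, goblin>();

  // value_type alone cannot distinguish the elements, so an implementation
  // that wants to avoid slicing has to inspect typeid(*it) for every
  // element of the range before routing it to the right segment
  bc.insert(v.begin(), v.end());
}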
We sort of already have that:
bc.insert(bc.end<warrior>(), warriors_first, warriors_last);
See:
https://github.com/joaquintides/poly_collection/blob/master/include/boost/po...
I was not talking about a range taking local iterators, but about a range of iterators into a vector holding warriors. Something like: https://github.com/joaquintides/poly_collection/blob/master/include/boost/po... which also assumes iterator::value_type is the dynamic type of the whole range.
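For contrast, a minimal sketch of that homogeneous case (same placeholder types as above): here the iterator's value_type already is the dynamic type of every element.

#include <vector>
#include <boost/poly_collection/base_collection.hpp>

struct sprite  { virtual ~sprite() = default; };
struct warrior: sprite {};

int main()
{
  std::vector<warrior> warriors(100);  // value_type and dynamic type coincide

  boost::base_collection<sprite> bc;
  bc.register_types<warrior>();

  // every element is known to be a warrior, so this is the case where
  // repeated typeid checks and hash lookups can be avoided, and the
  // target segment could even be preallocated from the range's distance
  bc.insert(warriors.begin(), warriors.end());
}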
This is efficient typeid-wise, but not so with respect to preallocating based on distance. If we go ahead with your proposal of static-fying the interface when the concrete type is known, we can do much better; actually, as fast as possible.
For absolute zero overhead (no repeated typeid, no repeated hash search), a segment handle utility comes to my mind:
[...]
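(The actual snippet is elided above. Purely as a hypothetical illustration of the idea, with invented names that are not part of the proposal or the library: such a handle would pay for the type/segment lookup once at construction and then be used for many insertions.)

#include <boost/poly_collection/base_collection.hpp>

struct sprite  { virtual ~sprite() = default; };
struct warrior: sprite {};

// Hypothetical illustration only. This naive version merely fixes the static
// type; a real utility would cache a reference to T's segment so that neither
// typeid nor the hash lookup is repeated on each insertion.
template<typename T, typename PolyCollection>
class segment_handle
{
public:
  explicit segment_handle(PolyCollection& pc): pc_(pc)
  {
    pc_.template register_types<T>();  // segment for T created/resolved here
  }

  void insert(const T& x){ pc_.insert(x); }

private:
  PolyCollection& pc_;
};

int main()
{
  boost::base_collection<sprite> bc;
  segment_handle<warrior, boost::base_collection<sprite>> h(bc);

  for(int i = 0; i < 1000; ++i) h.insert(warrior{});
}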
I think this is not needed in light of my previous comment.
As a suggestion, I would include a vector push_back in the insertion benchmarks. At least for packed_segment, vector push_back is the upper bound of any performance improvement. If the performance difference between the new "staticfied" single-value insertion function, while(...) pc.insert(warrior());, and the vector push_back code, while(...) warrior_vector.push_back(warrior());, is significant, then maybe an uglier but near-zero-overhead alternative (like the segment handle methods) might be a good idea. Especially for power programmers that want to create a bunch of warriors when the new game level starts ;-)
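A rough sketch of the comparison I have in mind (illustrative only, not the library's benchmark code; placeholder types as before):

#include <chrono>
#include <iostream>
#include <vector>
#include <boost/poly_collection/base_collection.hpp>

struct sprite  { virtual ~sprite() = default; };
struct warrior: sprite {};

template<typename F>
double ms(F f)
{
  auto t0 = std::chrono::steady_clock::now();
  f();
  auto t1 = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main()
{
  const int n = 1000000;

  // upper bound for packed_segment: plain vector push_back
  double t_vec = ms([&]{
    std::vector<warrior> v;
    for(int i = 0; i < n; ++i) v.push_back(warrior{});
  });

  // single-value insertion into the polymorphic container
  double t_poly = ms([&]{
    boost::base_collection<sprite> bc;
    for(int i = 0; i < n; ++i) bc.insert(warrior{});
  });

  std::cout << "vector push_back: " << t_vec  << " ms\n"
            << "poly insert:      " << t_poly << " ms\n";
}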
1) I also noticed that bc.size() and bc.empty() are not constant-time. That seems a bit surprising, and it might be noticeable when the number of segments grows. Not caching the size avoids storing common data in poly_collection, but I imagine most users assume these functions are free.
Strictly speaking, they're constant time wrt the number of elements, assuming (correctly, I'd say) that the number of registered types is bounded and doesn't increase linearly with size(). I'm very much against caching the size, for consistency reasons and because it would add a little fat to every operation of the container.
I don't know if anyone will use poly_collection with many different types and few elements per segment; as always, users will surprise us ;-) In any case, I would recommend documenting this visibly somewhere (ideally in the method description, as Complexity), with a complexity similar to (I hope I don't get it wrong): O(distance(segment_traversal().begin(), segment_traversal().end())), as typically most programmers think of these as extremely cheap operations and might call them in a loop.
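To make the cost concrete, a small sketch of what a non-cached size() has to do, assuming the segment_traversal() interface (segments exposing begin()/end()) shown in the tutorial, with placeholder types as above:

#include <cassert>
#include <cstddef>
#include <iterator>
#include <boost/poly_collection/base_collection.hpp>

struct sprite  { virtual ~sprite() = default; };
struct warrior: sprite {};
struct goblin : sprite {};

// visit every segment once and add up its length; each per-segment distance
// is cheap (local iterators are random access), so the total cost is linear
// in the number of segments rather than in the number of elements
std::size_t total_size(boost::base_collection<sprite>& bc)
{
  std::size_t n = 0;
  for(auto seg: bc.segment_traversal())
    n += static_cast<std::size_t>(std::distance(seg.begin(), seg.end()));
  return n;
}

int main()
{
  boost::base_collection<sprite> bc;
  bc.insert(warrior{});
  bc.insert(goblin{});
  assert(total_size(bc) == bc.size());
}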
2) The number of additional operations that could take advantage of the "I know your static type" optimization is considerable, although the speedup will depend on the cost of each operation (e.g. size()/reserve() vs. a virtual function call):
template<typename T> void reserve(size_type n);
template<typename T> void shrink_to_fit();
template<typename T> size_type capacity() const;
template<typename T> size_type max_size() const;
template<typename T> size_type size() const;
template<typename T> bool empty() const;
template<typename T> void clear();
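As a usage sketch, assuming these per-type overloads are available with the signatures above (placeholder types again), the calls would look like:

#include <boost/poly_collection/base_collection.hpp>

struct sprite  { virtual ~sprite() = default; };
struct warrior: sprite {};

int main()
{
  boost::base_collection<sprite> bc;
  bc.register_types<warrior>();

  bc.reserve<warrior>(1000);     // preallocate only the warrior segment
  bc.insert(warrior{});
  auto n = bc.size<warrior>();   // per-type query, static type known
  bc.clear<warrior>();           // clear just that segment
  (void)n;
}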
If/when we staticfy the interface, of course we'll go all the way and improve as much as can be improved. The functions you list above, though, are not expected to be a performance problem in any sensible program.
Right. My "the root of all evil" side is also reviewing the library ;-) And hopefully these are my last comments ;-) I think none of them are needed for the acceptance of the library, as they could be added as future improvements.

Best,

Ion