Re: [boost] [range] for_each and std::map of std::list

On 04/07/2011 04:39 AM, Mathias Gaunard wrote:
> On 07/04/2011 12:56, Thorsten Ottosen wrote:
>> Well, that is only a problem if you want to avoid manually applying the adaptor twice (or more), right?
>> If so, I think that is a limitation we can live with. A simple solution covers most cases IMO.
> Oh, the adaptor I wrote flattens recursively.
> I guess just flattening the top level could be fairly easy.
You could also require that the user specify the number of levels of recursion explicitly, as in many cases the final value_type you want to iterate over may be a range itself. Consider, for instance, that a string is itself a range of characters, but in many cases the user would not want to flatten a range of strings into a range of characters.
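To make the explicit-depth idea concrete, here is a minimal sketch (the name flat_for_each and its Depth parameter are illustrative assumptions, not an existing Boost.Range facility) of a compile-time depth that stops exactly where the caller says, so a range of std::string is never accidentally flattened into characters:

```cpp
#include <cstddef>
#include <iostream>
#include <list>
#include <string>
#include <vector>

// Illustrative sketch only: visit the elements found after descending exactly
// Depth levels of nesting below the given range.
template <std::size_t Depth>
struct flat_for_each_impl
{
    template <class Range, class F>
    static void apply(Range const& r, F f)
    {
        for (auto const& inner : r)                        // descend one level
            flat_for_each_impl<Depth - 1>::apply(inner, f);
    }
};

template <>
struct flat_for_each_impl<0>
{
    template <class Range, class F>
    static void apply(Range const& r, F f)
    {
        for (auto const& x : r)                            // visit elements as-is
            f(x);
    }
};

template <std::size_t Depth, class Range, class F>
void flat_for_each(Range const& r, F f)
{
    flat_for_each_impl<Depth>::apply(r, f);
}

int main()
{
    // Depth 0: the vector is treated as a flat range of strings; each string
    // is passed to f whole, never decomposed into characters.
    std::vector<std::string> words = {"flatten", "me", "not"};
    flat_for_each<0>(words, [](std::string const& s) { std::cout << s << '\n'; });

    // Depth 1: descend through one level of nesting, yielding the ints.
    std::vector<std::list<int>> nested = {{1, 2}, {3, 4, 5}};
    flat_for_each<1>(nested, [](int x) { std::cout << x << ' '; });
    std::cout << '\n';
}
```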

On Thu, Apr 7, 2011 at 11:17 AM, Jeremy Maitin-Shepard <jeremy@jeremyms.com> wrote:
> On 04/07/2011 04:39 AM, Mathias Gaunard wrote:
>> On 07/04/2011 12:56, Thorsten Ottosen wrote:
>>> Well, that is only a problem if you want to avoid manually applying the adaptor twice (or more), right?
>>> If so, I think that is a limitation we can live with. A simple solution covers most cases IMO.
>> Oh, the adaptor I wrote flattens recursively.
>> I guess just flattening the top level could be fairly easy.
> You could also require that the user specify the number of levels of recursion explicitly, as in many cases the final value_type you want to iterate over may be a range itself. Consider, for instance, that a string is itself a range of characters, but in many cases the user would not want to flatten a range of strings into a range of characters.
+1

I don't think auto-detecting the flattening depth is the way to go. The closest that makes sense to me is to create a mechanism to "flatten until value_type == whatever", but that can be built as a layer on top of explicitly specifying the flattening depth. I've done this before; it's kind of a PITA, and I don't much like my current implementation and had plans at some point to rework it...

- Jeff
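One possible way to layer "flatten until value_type == T" on top of an explicit depth is a small metafunction that counts nesting levels. This is only a hedged sketch: depth_until is a made-up name, and detecting "range-ness" via a nested value_type is a simplification of what a real trait based on Boost.Range's range_value would do.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <type_traits>
#include <vector>

// Illustrative sketch: how many levels must be flattened before the element
// type equals Target?  A type with no nested value_type, or whose value_type
// already equals Target, contributes zero further levels.
template <class Range, class Target, class Enable = void>
struct depth_until
{
    static const std::size_t value = 0;
};

template <class Range, class Target>
struct depth_until<Range, Target,
    typename std::enable_if<
        !std::is_same<typename Range::value_type, Target>::value>::type>
{
    static const std::size_t value =
        1 + depth_until<typename Range::value_type, Target>::value;
};

// A vector of lists of int needs one level of flattening to reach int...
static_assert(depth_until<std::vector<std::list<int> >, int>::value == 1,
              "one nesting level above int");

// ...while a vector of strings needs none to reach std::string, so the
// strings are not decomposed into characters.
static_assert(depth_until<std::vector<std::string>, std::string>::value == 0,
              "stop at std::string");
```

The resulting constant could then drive an explicit-depth primitive such as the flat_for_each sketch shown earlier.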

On Thu, Apr 7, 2011 at 7:24 PM, Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung@gmail.com> wrote:
> On Thu, Apr 7, 2011 at 11:17 AM, Jeremy Maitin-Shepard <jeremy@jeremyms.com> wrote:
>> On 04/07/2011 04:39 AM, Mathias Gaunard wrote:
>>> On 07/04/2011 12:56, Thorsten Ottosen wrote:
>>>> Well, that is only a problem if you want to avoid manually applying the adaptor twice (or more), right?
>>>> If so, I think that is a limitation we can live with. A simple solution covers most cases IMO.
I believe it is easy to come up with a simple adaptor that covers most of the cases. However, I also believe that it would define an interface that, if not carefully considered, may be limiting. I wish to be able to solve the related segmented-iteration problem with better performance than is afforded by segmented iterators. I perceive the segmented-iteration problem to be another special case of the projection of an N-dimensional structure onto a linear sequence.

The classic flattening of segmented iterators incurs additional overhead, since each iterator increment must check whether the end of the local range has been reached in order to correctly move to the start of the next local range. However, if we allow ourselves to develop a small number of 'enumeration' algorithms, such as for_each, that are optimized for a newly defined set of traits and a projection policy, we can produce a nested loop that is optimal for traversing this structure. This small set of enumeration algorithms would have to be optimized for the several idioms, but it then opens the way to better support not just segmented iteration but also specializations for chunked contiguous blocks of memory, and thereby enables better compiler auto-vectorization.

Clearly, projection from high-dimensional spaces onto linear sequences requires parameterization; hence the earlier questions with respect to picking columns versus rows first for matrices are deferred to the client, who decides the parameters for the projection policy. The same technique solves the various decisions that need to be made when projecting trees either depth-first or breadth-first.

In summary, I believe that I need to separate out a set of primitives that define highly optimized enumeration algorithms. The other Boost.Range algorithms, where appropriate, should be implemented in terms of these enumeration primitives. A Concept or Concepts need to be clarified to define projection as a distinct subset of adaptors, and appropriate Concepts need to be defined for the projection policies.

My earlier discussion of tree containers and n-dimensional containers was meant to invoke thoughts of containers not implemented by the STL that could leverage at least some generic algorithms. From my own experience in high-performance mathematical computing, and from discussions with colleagues in this field, there are frequently special high-dimensional containers that would benefit from access to these algorithms.

I have spent some time developing these ideas and solutions, and would prefer to solidify them rather than implement the simple flattening adaptor that would, in my opinion, move the Boost.Range design in the wrong long-term direction.
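A hedged sketch of what one such enumeration primitive might look like in the simplest two-level case follows (the name segmented_for_each and the hard-coded "descend into the mapped value" step are illustrative assumptions, not a proposed interface); the point is that the generated code is a plain nested loop, with no per-element test for reaching the end of the current segment:

```cpp
#include <iostream>
#include <list>
#include <map>
#include <vector>

// Illustrative sketch: a for_each specialized for a two-level segmented
// structure.  The outer loop tests for the end of the segment sequence once
// per segment; the inner loop tests once per element.  A flattening iterator,
// by contrast, must perform the segment-end check inside every increment.
template <class RangeOfRanges, class F>
void segmented_for_each(RangeOfRanges const& segments, F f)
{
    for (auto const& segment : segments)
        for (auto const& element : segment)
            f(element);
}

int main()
{
    // A plain range of ranges maps directly onto the primitive.
    std::vector<std::list<int>> segments = {{1, 2, 3}, {4, 5}};
    segmented_for_each(segments, [](int x) { std::cout << x << ' '; });
    std::cout << '\n';

    // The thread's motivating case, std::map of std::list, additionally needs
    // a projection step ("descend into the mapped value"); it is hard-coded
    // here as an explicit outer loop.
    std::map<int, std::list<int>> m = {{1, {1, 2, 3}}, {2, {4, 5}}};
    for (auto const& kv : m)        // outer dimension: map entries
        for (int x : kv.second)     // inner dimension: list elements
            std::cout << x << ' ';
    std::cout << '\n';
}
```

A projection policy parameter, as described above, would replace the hard-coded choices (which member to descend into, traversal order) with client-supplied ones.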
>>> Oh, the adaptor I wrote flattens recursively.
>>> I guess just flattening the top level could be fairly easy.
>> You could also require that the user specify the number of levels of recursion explicitly, as in many cases the final value_type you want to iterate over may be a range itself. Consider, for instance, that a string is itself a range of characters, but in many cases the user would not want to flatten a range of strings into a range of characters.
> +1
I believe that the projection onto a one-dimensional space needs to be parameterizable in many more ways than simply the number of levels. I have many concrete use cases where this is so.
> I don't think auto-detecting the flattening depth is the way to go. The closest that makes sense to me is to create a mechanism to "flatten until value_type == whatever", but that can be built as a layer on top of explicitly specifying the flattening depth.
I will not implement auto-detecting flattening depth. The inherent limitations and inflexibility of this design choice make it a non-starter.
> I've done this before; it's kind of a PITA, and I don't much like my current implementation and had plans at some point to rework it...
> - Jeff
Thank you all for your patience.

Regards,
Neil Groves

On 09-04-2011 19:06, Neil Groves wrote:
> My earlier discussion of tree containers and n-dimensional containers was meant to invoke thoughts of containers not implemented by the STL that could leverage at least some generic algorithms. From my own experience in high-performance mathematical computing, and from discussions with colleagues in this field, there are frequently special high-dimensional containers that would benefit from access to these algorithms.
Well, one thing is performance due to efficient iteration over a segmented data structure. Another issue is just flattening.

About segmentation: one approach is a new type of iterator with specialized algorithms for basically all algorithms. That's a lot of algorithms that have to be implemented again. So how the h*** do we avoid that? Can we avoid that?

I'm thinking that some fat iterator with some internal bookkeeping might help. But then we have the problem of the end iterator taking up much space just because the types must match. Clearly, ranges without iterators are superior in this context.

-Thorsten
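To make the size concern concrete, here is a hedged sketch (the member names are illustrative; this is not any real Boost iterator) of the state a "fat" flattening iterator ends up carrying. Since begin() and end() must have the same type, the end iterator carries the same baggage, whereas a nested-loop, range-based formulation keeps all of this as loop-local state:

```cpp
#include <list>
#include <vector>

// Illustrative sketch only: the minimum bookkeeping a flattening iterator
// over a range of ranges needs in order to be a self-contained iterator.
template <class OuterIt, class InnerIt>
struct flatten_iterator_state
{
    OuterIt outer;      // which segment we are currently in
    OuterIt outer_end;  // consulted by every increment to detect segment ends
    InnerIt inner;      // position within the current segment
    // operator*, operator++, empty-segment skipping, comparisons: omitted
};

// Because the end iterator must have the same type as the begin iterator,
// one-past-the-end is a full three-member object too, even though only
// 'outer' is meaningful there.
using fat_iterator = flatten_iterator_state<
    std::vector<std::list<int> >::const_iterator,
    std::list<int>::const_iterator>;
```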
participants (4)
- Jeffrey Lee Hellrung, Jr.
- Jeremy Maitin-Shepard
- Neil Groves
- Thorsten Ottosen