Abstract STL Container Adaptors

Hi Boosters,
Is there any interest in an abstract container adaptor library (I'll describe what I mean below)? I'm a little surprised if this has never come up in the context of Boost, but I couldn't find anything on the archives. I did find something on stack overflow (http://stackoverflow.com/questions/5329555/abstract-wrapper-for-stl-containe...), but it did not uncover an existing solution.
Motivation:
Sometimes we want to write functions that take a container as an argument. For instance, say I have a graph object implemented in terms of ``class Node``. I may expose an algorithm like this:
void my_algorithm(Node & root, std::set<Node*> const & marked);

On 14 March 2013 12:20, Andy Jost
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation.
Do you have any benchmarks to show this isn't in the noise?
It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
*Every* operation becomes slow. Algorithms become really slow, if callable at all. For instance, if you had a "wrapper" that models a sequence container, what iterator category do you pick for it? If you pick, say, bidirectional iterator (the minimum needed for vector, deque and list), all of a sudden you can't use a bunch of algorithms (most of the sorting algorithms, shuffle, etc.).
In any case, is this a good idea?
I can't think of a case I've ever had for choosing the container at run time. -- Nevin ":-)" Liber mailto:nevin@eviloverlord.com (847) 691-1404

From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nevin Liber
On 14 March 2013 12:20, Andy Jost wrote:
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation.
Do you have any benchmarks to show this isn't in the noise?
What isn't in the noise? Compile times?
It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
*Every* operation becomes slow. Algorithms become really slow, if callable at all.
I don't see how this is justified. The virtual call certainly adds overhead, but compared to a normal inlined function it's not a huge amount. Algorithms should become slower by a small constant factor.
For instance, if you had a "wrapper" that models a sequence container, what iterator category do you pick for it?
The iterator category would exactly match that of the underlying container. The goal is *not* to abstract away the differences between different containers; it is to abstract away the differences between different implementations of the same container. So std::set and tr1::unordered_set and std::set<...,MyAllocator> can be used interchangeably, but not std::set and std::vector.
In any case, is this a good idea?
I can't think of a case I've ever had for choosing the container at run time.
That's not the point. The aim is to compile the algorithm only once. As a real-world example (not too far from my scenario), say your project takes five minutes on 20 CPUs to compile, and the algorithm in question consumes less than 0.0001% of the overall execution time. Wouldn't you take a 10x hit in algorithm performance to improve the compile time? -Andy

On Thu, Mar 14, 2013 at 9:10 PM, Andy Jost
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nevin Liber
On 14 March 2013 12:20, Andy Jost wrote:
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation.
Do you have any benchmarks to show this isn't in the noise?
What isn't in the noise? Compile times?
It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
*Every* operation becomes slow. Algorithms become really slow, if callable at all.
I don't see how this is justified. The virtual call certainly adds overhead, but compared to a normal inlined function it's not a huge amount. Algorithms should become slower by a small constant factor.
Virtual function calls add very appreciable overhead relative to the numerous small functions that would need runtime polymorphism for the idiom to work. The overhead is particularly severe with super-scalar processor architectures. To some this won't be important, of course. The comment is justified since there have been many older solutions that used classic runtime polymorphism for containers and iterators; an example is the ACE containers and iterators, and I believe it is widely held that this design decision was sub-optimal. Classic runtime polymorphism on components such as STL containers puts the virtual dispatch at very poorly chosen points in the interface. I suspect the real solution to your problem is to cluster groups of classes tightly around the standard library classes, and have these loosely coupled together as components of your system. This is likely to yield much better compile-time improvements without sacrificing runtime performance.
For instance, if you had a "wrapper" that models a sequence container, what iterator category do you pick for it?
The iterator category would exactly match that of the underlying container. The goal is *not* to abstract away the differences between different containers; it is to abstract away the differences between different implementations of the same container. So std::set and tr1::unordered_set and std::set<...,MyAllocator> can be used interchangeably, but not std::set and std::vector.
In any case, is this a good idea?
I can't think of a case I've ever had for choosing the container at run time.
That's not the point. The aim is to compile the algorithm only once. As a real-world example (not too far from my scenario), say your project takes five minutes on 20 CPUs to compile, and the algorithm in question consumes less than 0.0001% of the overall execution time. Wouldn't you take a 10x hit in algorithm performance to improve the compile time?
No! My experience is that superior compile-time improvements are available by other means. It won't give you the best improvement in compile time, and it will impact runtime performance. This is definitely something we should discourage rather than support and implement, in my opinion. My recommendations for increasing the speed of compilation are:
1. Consider the interface boundaries and the module coupling.
2. Check the include directories, since bad compile times can be due to an include path to a network folder or some other horror that is easily remedied but might go unsuspected.
3. Consider tools such as CCache, DistCC, or Icecream.
4. Consider changing compiler; the variation in compile-time performance between different compilers is huge.
5. Consider compiler flags and build flags, particularly to maximise use of cores. It is often best to exceed the number of processing cores, due to the ratio of sleep time to running time of the tasks.
-Andy
Regards, Neil Groves

Any other opinions out there?
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Neil Groves
Sent: Thursday, March 14, 2013 2:32 PM
To: boost@lists.boost.org
Subject: Re: [boost] Abstract STL Container Adaptors
On Thu, Mar 14, 2013 at 9:10 PM, Andy Jost
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nevin Liber
On 14 March 2013 12:20, Andy Jost wrote:
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation.
Do you have any benchmarks to show this isn't in the noise?
What isn't in the noise? Compile times?
It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
*Every* operation becomes slow. Algorithms become really slow, if callable at all.
I don't see how this is justified. The virtual call certainly adds overhead, but compared to a normal inlined function it's not a huge amount. Algorithms should become slower by a small constant factor.
Virtual function calls add very appreciable overhead relative to the numerous small functions that would need runtime polymorphism for the idiom to work. The overhead is particularly severe with super-scalar processor architectures. To some this won't be important, of course. The comment is justified since there have been many older solutions that used classic runtime polymorphism for containers and iterators; an example is the ACE containers and iterators, and I believe it is widely held that this design decision was sub-optimal. Classic runtime polymorphism on components such as STL containers puts the virtual dispatch at very poorly chosen points in the interface. I suspect the real solution to your problem is to cluster groups of classes tightly around the standard library classes, and have these loosely coupled together as components of your system. This is likely to yield much better compile-time improvements without sacrificing runtime performance.
For instance, if you had a "wrapper" that models a sequence
container, what iterator category do you pick for it?
The iterator category would exactly match that of the underlying container. The goal is *not* to abstract away the differences between different containers; it is to abstract away the differences between different implementations of the same container. So std::set and tr1::unordered_set and std::set<...,MyAllocator> can be used interchangeably, but not std::set and std::vector.
In any case, is this a good idea?
I can't think of a case I've ever had for choosing the container at run time.
That's not the point. The aim is to compile the algorithm only once. As a real-world example (not too far from my scenario), say your project takes five minutes on 20 CPUs to compile, and the algorithm in question consumes less than 0.0001% of the overall execution time. Wouldn't you take a 10x hit in algorithm performance to improve the compile time?
No! My experience is that superior compile-time improvements are available by other means. It won't give you the best improvement in compile time, and it will impact runtime performance. This is definitely something we should discourage rather than support and implement, in my opinion. My recommendations for increasing the speed of compilation are:
1. Consider the interface boundaries and the module coupling.
2. Check the include directories, since bad compile times can be due to an include path to a network folder or some other horror that is easily remedied but might go unsuspected.
3. Consider tools such as CCache, DistCC, or Icecream.
4. Consider changing compiler; the variation in compile-time performance between different compilers is huge.
5. Consider compiler flags and build flags, particularly to maximise use of cores. It is often best to exceed the number of processing cores, due to the ratio of sleep time to running time of the tasks.
-Andy
Regards, Neil Groves

On 14 March 2013 16:31, Neil Groves
Virtual function calls add very appreciable overhead relative to the numerous small functions that would need runtime polymorphism for the idiom to work. The overhead is particularly severe with super-scalar processor architectures. To some this won't be important, of course.
Of course, if performance isn't important, then just use std::set and be done with it. Is there any criterion besides performance by which people choose between set and unordered_set where generalization would make any sense? Maybe < vs. hashing/==, but if I were to choose it at runtime, my type would have to implement all three. -- Nevin ":-)" Liber mailto:nevin@eviloverlord.com (847) 691-1404

On 14 March 2013 16:31, Neil Groves
Virtual function calls add very appreciable overhead relative to the numerous small functions that would need runtime polymorphism for the idiom to work. The overhead is particularly severe with super-scalar processor architectures. To some this won't be important, of course.
Of course, if performance isn't important, then just use std::set and be done with it. Is there any criterion besides performance by which people choose between set and unordered_set where generalization would make any sense?
The key question is: who gets to make that decision? ``my_algorithm`` may be happy to use std::set, but it's really the caller's choice. Perhaps the information ``my_algorithm`` needs already lives somewhere in a boost::multi_index_container. Does it really need to be copied just to call ``my_algorithm``? Perhaps I would like to compile ``my_algorithm`` once and stick it in a library file. Does it really need to be recompiled for each concrete container type? -Andy
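For concreteness, a minimal sketch of the situation I mean (the int element type and the count-only usage are arbitrary choices for illustration): the ordered index of a boost::multi_index_container already exposes the std::set-style members an abstract_set would forward to, so no copy should be needed.
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/identity.hpp>
#include <iostream>
// A multi_index_container whose first (ordered, unique) index looks like a std::set<int>.
typedef boost::multi_index::multi_index_container<
    int,
    boost::multi_index::indexed_by<
        boost::multi_index::ordered_unique<
            boost::multi_index::identity<int> > >
> marked_nodes;
int main()
{
    marked_nodes marked;
    marked.insert(42);
    // count(), find(), lower_bound(), ... are all available, just as on std::set<int>.
    std::cout << marked.count(42) << '\n';
    return 0;
}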

On Thursday, March 14, 2013, Andy Jost wrote:
On 14 March 2013 16:31, Neil Groves wrote:
Of course, if performance isn't important, then just use std::set and be done with it. Is there any criterion besides performance by which people choose between set and unordered_set where generalization would make any sense?
The key question is: who gets to make that decision? ``my_algorithm`` may be happy to use std::set, but it's really the caller's choice. Perhaps the information ``my_algorithm`` needs already lives somewhere in a boost::multi_index_container. Does it really need to be copied just to call ``my_algorithm``?
I really cannot follow this. If the container is type erased, how do you get that information you are alluding to back out?
Perhaps I would like to compile ``my_algorithm`` once and stick it in a library file. Does it really need to be recompiled for each concrete container type?
There are plenty of papers and implementations of any_iterator out there. Why is that not sufficient for your needs? -- Nevin ":-)" Liber mailto:nevin@eviloverlord.com (847) 691-1404

I really cannot follow this.
If the container is type erased, how do you get that information you are alluding to back out?
Here's a very rough sketch demonstrating just one member (set::count). Many details are ignored, obviously.
template<typename ValueType> struct abstract_set
{
typedef ValueType key_type;
typedef ValueType value_type;
typedef std::size_t size_type;
// etc.
template<typename GenericSet>
abstract_set(GenericSet const & set)
: m_holder(new holder<GenericSet>(set))
{}
size_type count(value_type const & value) const
{ return m_holder->count(value); }
private:
struct holder_base
{
virtual ~holder_base() {}  // not strictly required with shared_ptr, but idiomatic for a polymorphic base
virtual size_t count(value_type const &) const = 0;
};
template<typename GenericSet>
struct holder : holder_base
{
holder(GenericSet const & set) : m_container(set) {}
virtual size_t count(value_type const & value) const
{ return m_container.count(value); }
private:
GenericSet const & m_container;
};
boost::shared_ptr<holder_base> m_holder;
};
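For illustration, a hypothetical caller of the sketch above might look like this (it assumes the class is completed with the headers it needs, e.g. <cstddef> and <boost/shared_ptr.hpp>, and that only count() is required):
#include <cstddef>
#include <functional>
#include <iostream>
#include <set>
// Compiled once, in a single translation unit; never re-instantiated per container.
std::size_t how_many(abstract_set<int> const & marked, int node)
{
    return marked.count(node);
}
int main()
{
    std::set<int> a;                       // default comparator
    std::set<int, std::greater<int> > b;   // a different "implementation" of set
    a.insert(1);
    b.insert(1);
    std::cout << how_many(a, 1) << ' ' << how_many(b, 1) << '\n';
    return 0;
}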
There are plenty of papers and implementations of any_iterator out there. Why is that not sufficient for your needs?
I want the container, not just an iterator. I might want to test for membership or add elements to the container.

AMDG
On 03/14/2013 04:07 PM, Andy Jost wrote:
I really cannot follow this.
If the container is type erased, how do you get that information you are alluding to back out?
Here's a very rough sketch demonstrating just one member (set::count). Many details are ignored, obviously.
<snip>
There are plenty of papers and implementations of any_iterator out there. Why is that not sufficient for your needs?
I want the container, not just an iterator. I might want to test for membership or add elements to the container.
BOOST_TYPE_ERASURE_MEMBER(has_count, count);
template<class T>
using abstract_set =
boost::type_erasure::any
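A minimal compilable sketch along these lines (the int element type, the single count requirement, and the is_marked function are illustrative assumptions only, not a full adaptor):
#include <boost/type_erasure/any.hpp>
#include <boost/type_erasure/member.hpp>
#include <boost/mpl/vector.hpp>
#include <cstddef>
#include <iostream>
#include <set>
// Concept: the erased type has a member count(int const&) returning std::size_t.
BOOST_TYPE_ERASURE_MEMBER((has_count), count, 1)
typedef boost::type_erasure::any<
    boost::mpl::vector<
        has_count<std::size_t(int const &)>
    >,
    boost::type_erasure::_self &   // hold the caller's container by reference
> abstract_int_set;
// Compiled once; callable with std::set, unordered_set, a multi_index index, ...
bool is_marked(abstract_int_set marked, int node)
{
    return marked.count(node) != 0;
}
int main()
{
    std::set<int> s;
    s.insert(42);
    std::cout << is_marked(s, 42) << '\n';
    return 0;
}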

From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Steven Watanabe
OK, you had the answer last time I posted to boost. Maybe I should just be sending DMs to you instead :)
BOOST_TYPE_ERASURE_MEMBER(has_count, count); template<class T> using abstract_set = boost::type_erasure::any
, _self&>;
Thanks, very cool! The docs make sense. What I'm talking about could readily be implemented using the TypeErasure library. So would abstract STL container adaptors be a useful thing to add to TypeErasure as a small utility and demonstration?

-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Andy Jost
Sent: Friday, March 15, 2013 3:58 PM
To: boost@lists.boost.org
Subject: Re: [boost] Abstract STL Container Adaptors
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Steven Watanabe
OK, you had the answer last time I posted to boost. Maybe I should just be sending DMs to you instead :)
BOOST_TYPE_ERASURE_MEMBER(has_count, count); template<class T> using abstract_set = boost::type_erasure::any
, _self&>;
Thanks, very cool! The docs make sense. What I'm talking about could readily be implemented using the TypeErasure library.
So would abstract STL container adaptors be a useful thing to add to TypeErasure as a small utility and demonstration?
Yes. Demonstrations are always useful to make the discussion less abstract. My feeling is that TypeErasure has much wider application than many have appreciated so far. Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

On 14 March 2013 16:10, Andy Jost
Do you have any benchmarks to show this isn't in the noise?
What isn't in the noise? Compile times?
Yes. I also don't see how type erasure necessarily reduces this, since a bunch of virtual functions have to be generated at the time you assign the concrete underlying container to the type-erased holder, even if you never use any of those calls. Heck, I still don't see how you avoid the templating issue you seem to be concerned about, since you have to be templated on the value type the container holds. (Or do you plan on type erasing that too?)
For instance, if you had a "wrapper" that models a sequence container, what iterator category do you pick for it?
The iterator category would exactly match that of the underlying container.
Um, std::set has bidirectional iterators, while std::unordered_set only has forward iterators. At this point, I really have no idea what you mean by your above statement.
The goal is *not* to abstract away the differences between different containers; it is to abstract away the differences between different implementations of the same container. So std::set and tr1::unordered_set and std::set<...,MyAllocator> can be used interchangeably
Um, std::set and std::unordered_set are not different implementations of the same container. They have *lots* of differences; besides the ones already mentioned, there is comparison of containers, iterator invalidation rules, etc.
That's not the point. The aim is to compile the algorithm only once.
That may be your aim; my aim is to reduce both run time and compile times of my code.
As a real-world example
Where is the code for this real-world example?
(not too far from my scenario), say your project takes five minutes on 20 CPUs to compile, and the algorithm in question consumes less than 0.0001% of the overall execution time. Wouldn't you take a 10x hit in algorithm performance to improve the compile time?
Let's see the code which has this behavior. -- Nevin ":-)" Liber mailto:nevin@eviloverlord.com (847) 691-1404

On Thu, Mar 14, 2013 at 10:20 AM, Andy Jost
Hi Boosters,
Is there any interest in an abstract container adaptor library (I'll describe what I mean below)? I'm a little surprised if this has never come up in the context of Boost, but I couldn't find anything on the archives. I did find something on stack overflow ( http://stackoverflow.com/questions/5329555/abstract-wrapper-for-stl-containe...), but it did not uncover an existing solution.
Motivation:
Sometimes we want to write functions that take a container as an argument. For instance, say I have a graph object implemented in terms of ``class Node``. I may expose an algorithm like this:
void my_algorithm(Node & root, std::set<Node*> const & marked);
In this case, ``root`` is the root of a graph, and ``marked`` is a set of marked nodes. The problem with this approach is that it forces a particular choice of the set implementation onto the caller. He may, for example, prefer to use ``tr1::unordered_set``. We could get around that limitation by using a template parameter, like this:
template<typename Set> void my_algorithm(Node & root, Set const & marked);
The problem here is that it forces the implementer of ``my_algorithm`` to expose that code and, potentially, recompile it many times (for different Set types).
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation. A simple solution is to write an adaptor that mimics the interface of ``set``. With that, the function call would appear as shown here:
void my_algorithm(Node & root, abstract_set<Node*> const & marked);
Here, abstract_set<Node*> is implicitly convertible from any type having the interface of std::set<Node*>. It internally holds a reference to some implementation of a suitable set-like object and mediates interactions with the outside world through virtual methods. It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
In any case, is this a good idea?
Despite the other comments, this has a legitimate use case:
- The algorithm is code-wise large and called many times on different containers, or, say, embedded within a compiled library, so it either isn't practical or isn't possible to inline it.
- The dispatch overhead from the type erasure is negligible compared with other operations in the algorithm.
- You want to avoid imposing unnecessary memory requirements on the caller, such as forcing them to copy their data into a specific data structure.
Off the top of my head, computational geometry algorithms are sometimes of this nature. Additionally, I think this functionality is basically or entirely present in Steven Watanabe's Boost.TypeErasure library [1; warning: docs may be outdated, I just did a Google search].
- Jeff
[1] http://steven_watanabe.users.sourceforge.net/type_erasure/libs/type_erasure/...

From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeffrey Lee Hellrung, Jr.
Despite the other comments, this has a legitimate use case:
- The algorithm is code-wise large and called many times on different containers, or, say, embedded within a compiled library, so it either isn't practical or isn't possible to inline it.
- The dispatch overhead from the type erasure is negligible compared with other operations in the algorithm.
- You want to avoid imposing unnecessary memory requirements on the caller, such as forcing them to copy their data into a specific data structure.
This is quite aligned with my thinking. I would add one more use case, which is when the algorithm is fixed, but the code calling it is still under development. In that case, separately compiling the algorithm could substantially improve incremental build times.
Off the top of my head, computational geometry algorithms are sometimes of this nature.
Also compiler algorithms.
Additionally, I think this functionality is basically or entirely present in Steven Watanabe's Boost.TypeErasure library [1; warning: docs may be outdated, I just did a Google search].
Thanks, that is helpful!

On 15 March 2013 00:46, Jeffrey Lee Hellrung, Jr.
Additionally, I think this functionality is basically or entirely present in Steven Watanabe's Boost.TypeErasure library [1; warning: docs may be outdated, I just did a Google search].
Documentation from trunk is at: http://boost-sandbox.sourceforge.net/doc/html/boost_typeerasure.html The table of contents are a bit messed up due to a change in the newest docbook stylesheets. I have a fix for that, just haven't committed it yet.

Thanks. Do you know where I can get the most recent source?
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Daniel James
Sent: Friday, March 15, 2013 9:18 AM
To: boost@lists.boost.org
Subject: Re: [boost] Abstract STL Container Adaptors
On 15 March 2013 00:46, Jeffrey Lee Hellrung, Jr.
Additionally, I think this functionality is basically or entirely present in Steven Watanabe's Boost.TypeErasure library [1; warning: docs may be outdated, I just did a Google search].
Documentation from trunk is at: http://boost-sandbox.sourceforge.net/doc/html/boost_typeerasure.html The table of contents are a bit messed up due to a change in the newest docbook stylesheets. I have a fix for that, just haven't committed it yet.

AMDG
On 03/15/2013 09:41 AM, Andy Jost wrote:
Thanks. Do you know where I can get the most recent source?
In the trunk: svn checkout http://svn.boost.org/svn/boost/trunk (This will get all of Boost. The library's subdirectories are boost/type_erasure and libs/type_erasure) In Christ, Steven Watanabe

Jeffrey Lee Hellrung, Jr. wrote:
On Thu, Mar 14, 2013 at 10:20 AM, Andy Jost wrote:
Hi Boosters,
Is there any interest in an abstract container adaptor library (I'll describe what I mean below)? I'm a little surprised if this has never come up in the context of Boost, but I couldn't find anything on the archives. I did find something on stack overflow ( http://stackoverflow.com/questions/5329555/abstract-wrapper-for-stl-containe...), but it did not uncover an existing solution.
Motivation:
Sometimes we want to write functions that take a container as an argument. For instance, say I have a graph object implemented in terms of ``class Node``. I may expose an algorithm like this:
void my_algorithm(Node & root, std::set<Node*> const & marked);
In this case, ``root`` is the root of a graph, and ``marked`` is a set of marked nodes. The problem with this approach is that it forces a particular choice of the set implementation onto the caller. He may, for example, prefer to use ``tr1::unordered_set``. We could get around that limitation by using a template parameter, like this:
template<typename Set> void my_algorithm(Node & root, Set const & marked);
The problem here is that it forces the implementer of ``my_algorithm`` to expose that code and, potentially, recompile it many times (for different Set types).
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation. A simple solution is to write an adaptor that mimics the interface of ``set``. With that, the function call would appear as shown here:
void my_algorithm(Node & root, abstract_set<Node*> const & marked);
Here, abstract_set<Node*> is implicitly convertible from any type having the interface of std::set<Node*>. It internally holds a reference to some implementation of a suitable set-like object and mediates interactions with the outside world through virtual methods. It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
In any case, is this a good idea?
Despite the other comments, this has a legitimate use case:
- The algorithm is code-wise large and called many times on different containers, or, say, embedded within a compiled library, so it either isn't practical or isn't possible to inline it.
- The dispatch overhead from the type erasure is negligible compared with other operations in the algorithm.
- You want to avoid imposing unnecessary memory requirements on the caller, such as forcing them to copy their data into a specific data structure.
Off the top of my head, computational geometry algorithms are sometimes of this nature.
Additionally, I think this functionality is basically or entirely present in Steven Watanabe's Boost.TypeErasure library [1; warning: docs may be outdated, I just did a Google search].
There are also algorithms that are large, have many variations, and are performance-critical enough that you can't really afford to instantiate every variation with templates, yet also can't afford the overhead of internal runtime dispatch. A good example is a complex software renderer. There are configurable paths, but the path is known at the start of the algorithm and will not change for a given set of data. For example, among the vertex properties that must be interpolated over the face of a triangle, in addition to position there may be any, all, or none of texture coordinates, normals, colors, etc. You can't really afford any dispatch in such an inner loop, but instantiating all possible options at compile time would lead to huge code bloat, especially when most of the configurations would never be used in practice.
A solution to these kinds of problems is essentially JIT-compiling the algorithm by fitting together prefabricated blocks according to the configuration, as if you were doing template instantiation at runtime. I think it would be an interesting area of exploration.
http://tinyurl.com/optimizing-pixomatic1
http://tinyurl.com/optimizing-pixomatic2
http://tinyurl.com/optimizing-pixomatic3

On 14-03-2013 18:20, Andy Jost wrote:
Hi Boosters,
Here, abstract_set<Node*> is implicitly convertible from any type having the interface of std::set<Node*>. It internally holds a reference to some implementation of a suitable set-like object and mediates interactions with the outside world through virtual methods. It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
In any case, is this a good idea?
It could be. It's no different than using boost::function<>; that is, it allows us to make a generic interface with decisions deferred until run time (at the expense of a performance loss). I guess it could be faked in various ways, e.g., the argument could be a boost::any which is then documented to be one of several set types. -Thorsten

On 18 March 2013 07:29, Thorsten Ottosen
It's no different than using boost::function<>; that is, it allows us to make a generic interface with decisions deferred until run time (at the expense of a performance loss).
It isn't just *one* generic interface; the underlying iterators have to be type erased as well. While we certainly *can* do this, I see lots of handwaving and not any actual use cases. -- Nevin ":-)" Liber mailto:nevin@eviloverlord.com (847) 691-1404

On 18 March 2013 09:35, Nevin Liber
On 18 March 2013 07:29, Thorsten Ottosen wrote:
It's no different than using boost::function<>; that is, it allows us to make a generic interface with decisions deferred until run time (at the expense of a performance loss).
It isn't just *one* generic interface; the underlying iterators have to be type erased as well.
To be completely generic, you may need to type erase allocator_type, pointer, const_pointer, reference, const_reference, size_type and difference_type (since they may all differ between containers, and many of those types now come from the underlying allocator), as well as have the ability to pass an allocator in for both holding the type erased container as well as any of the underlying type erased objects. -- Nevin ":-)" Liber mailto:nevin@eviloverlord.com (847) 691-1404

On 18-03-2013 15:46, Nevin Liber wrote:
On 18 March 2013 09:35, Nevin Liber wrote:
On 18 March 2013 07:29, Thorsten Ottosen wrote:
It's no different than using boost::function<>; that is, it allows us to make a generic interface with decisions deferred until run time (at the expense of a performance loss).
It isn't just *one* generic interface; the underlying iterators have to be type erased as well.
Sure. That has been done: http://www.boost.org/doc/libs/1_53_0/libs/range/doc/html/range/reference/ran...
To be completely generic, you may need to type erase allocator_type, pointer, const_pointer, reference, const_reference, size_type and difference_type (since they may all differ between containers, and many of those types now come from the underlying allocator), as well as have the ability to pass an allocator in for both holding the type erased container as well as any of the underlying type erased objects.
Well, I don't see a need for all the allocator stuff. What's the point? Do you have a similar need when using boost::function<> or boost::any? reference/const_reference can be T& and T const&. Perhaps we don't need pointer. One may choose a difference_type that is 64-bit to be generic. -Thorsten
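To make that concrete, a sketch only (the names are mine, not from any existing library) of fixing the associated types up front instead of erasing them per container:
#include <cstddef>
#include <boost/cstdint.hpp>
template<typename T>
struct erased_set_types
{
    typedef T              value_type;
    typedef T &            reference;
    typedef T const &      const_reference;
    typedef std::size_t    size_type;
    typedef boost::int64_t difference_type;  // wide enough to be generic
};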

-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Thorsten Ottosen
Well, I don't see a need for all the allocator stuff. What's the point? Do you have a similar need when using boost::function<> or boost::any? reference/const_reference can be T& and T const&. Perhaps we don't need pointer. One may choose a difference_type that is 64-bit to be generic.
Maybe looking at some code would help clarify the answers to these questions. I have about 300 lines partially implementing an adaptor for std::set using Boost.TypeErasure, plus a list of questions and problems I encountered. Is that too large to put into an email for this list? If so, where can I drop it? -Andy

AMDG
On 03/18/2013 08:30 AM, Andy Jost wrote:
Maybe looking at some code would help clarify the answers to these questions. I have about 300 lines partially implementing an adaptor for std::set using Boost.TypeErasure, plus a list of questions and problems I encountered. Is that too large to put into an email for this list? If so, where can I drop it?
That shouldn't be too large. In Christ, Steven Watanabe

From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Steven Watanabe
On 03/18/2013 08:30 AM, Andy Jost wrote:
Maybe looking at some code would help clarify the answers to these questions. I have about 300 lines partially implementing an adaptor for std::set using Boost.TypeErasure, plus a list of questions and problems I encountered. Is that too large to put into an email for this list? If so, where can I drop it?
That shouldn't be too large.
Ok, code is at the end; questions first. I realize many of these questions may be answered in the documentation.
Linux 2.6.9-89.ELlargesmp x86_64
g++ (GCC) 4.5.2
1) Certain functions require a downcast, of sorts. The overload of std::set<T>::insert taking an iterator as hint is an example. Not just any iterator, but one with exactly the right type is needed. Maybe this is what type_erasure::is_same is for, but I haven't looked into it yet.
2) I start getting errors when the mpl::vector of requirements exceeds 20 elements: "error: wrong number of template arguments (21, should be 20)."
As a blind guess, I tried including boost/mpl/vector/vector30.hpp but it doesn't help. I know there's a good chance the answer is in the MPL docs somewhere, but I haven't had a chance to go deeper.
3) I could not implement swap. If item 1 can be resolved, then this could work in the same way. Still, it seems that two any<> objects of the same type should be swappable (though, it is not obvious how that would be expressed). I couldn't find a way to do it.
4) When members are overloaded on the constness of this, I could only use the const-qualified version. So, I can add a requirement for set<T>::clear, but for set<T>::begin, the return value must be specified as const_iterator.
5) BOOST_TYPE_ERASURE_MEMBER cannot be used in a SEQ_FOR_EACH loop, because it uses SEQ_FOR_EACH internally. It would be handy to have a way to declare several members when the namespace path and prefix (e.g., has_) doesn't change, either by passing the z parameter or using another macro.
6) I could never get the bidirectional_iterator concept from TypeErasure working. I didn't find an example. I ended up writing out the requirements myself. Also, that required the use of relaxed and I didn't understand why.
#define BOOST_TYPE_ERASURE_STL_MEMBER(key,name,arity) \
BOOST_TYPE_ERASURE_MEMBER((boost)(type_erasure)(has_##key),name,arity); \
/**/
/* std::allocator */
BOOST_TYPE_ERASURE_STL_MEMBER(address,address,1)
BOOST_TYPE_ERASURE_STL_MEMBER(allocate1,allocate,1)
BOOST_TYPE_ERASURE_STL_MEMBER(allocate2,allocate,2)
BOOST_TYPE_ERASURE_STL_MEMBER(construct,construct,2)
BOOST_TYPE_ERASURE_STL_MEMBER(deallocate,deallocate,2)
BOOST_TYPE_ERASURE_STL_MEMBER(destroy,destroy,1)
/* Iterators */
BOOST_TYPE_ERASURE_STL_MEMBER(begin,begin,0)
BOOST_TYPE_ERASURE_STL_MEMBER(end,end,0)
BOOST_TYPE_ERASURE_STL_MEMBER(rbegin,rbegin,0)
BOOST_TYPE_ERASURE_STL_MEMBER(rend,rend,0)
/* Capacity */
BOOST_TYPE_ERASURE_STL_MEMBER(empty,empty,0)
BOOST_TYPE_ERASURE_STL_MEMBER(max_size,max_size,0)
BOOST_TYPE_ERASURE_STL_MEMBER(size,size,0)
/* Modifiers */
BOOST_TYPE_ERASURE_STL_MEMBER(clear,clear,0)
BOOST_TYPE_ERASURE_STL_MEMBER(erase1,erase,1)
BOOST_TYPE_ERASURE_STL_MEMBER(erase2,erase,2)
BOOST_TYPE_ERASURE_STL_MEMBER(insert1,insert,1)
BOOST_TYPE_ERASURE_STL_MEMBER(insert2,insert,2)
BOOST_TYPE_ERASURE_STL_MEMBER(swap,swap,1)
/* Observers */
BOOST_TYPE_ERASURE_STL_MEMBER(key_comp,key_comp,0)
BOOST_TYPE_ERASURE_STL_MEMBER(value_comp,value_comp,0)
/* Operations */
BOOST_TYPE_ERASURE_STL_MEMBER(count,count,1)
BOOST_TYPE_ERASURE_STL_MEMBER(equal_range,equal_range,1)
BOOST_TYPE_ERASURE_STL_MEMBER(find,find,1)
BOOST_TYPE_ERASURE_STL_MEMBER(lower_bound,lower_bound,1)
BOOST_TYPE_ERASURE_STL_MEMBER(upper_bound,upper_bound,1)
/* Allocator */
BOOST_TYPE_ERASURE_STL_MEMBER(get_allocator,get_allocator,0)
#define BOOST_TYPE_ERASURE_FWD_TYPEDEFS(z,base,name) typedef base::name name;
#define BOOST_TYPE_ERASURE_STL_TYPEDEFS \
(key_type)(key_compare)(value_compare)(allocator_type) \
(iterator)(const_iterator)(reverse_iterator)(const_reverse_iterator) \
BOOST_TYPE_ERASURE_ALLOCATOR_TYPEDEFS \
/**/
#define BOOST_TYPE_ERASURE_ALLOCATOR_TYPEDEFS \
(value_type)(reference)(const_reference)(pointer)(const_pointer) \
(difference_type)(size_type) \
/**/
namespace boost { namespace type_erasure
{
template<typename T> struct allocator_typedefs
{
typedef T value_type;
typedef T* pointer;
typedef T& reference;
typedef const T* const_pointer;
typedef const T& const_reference;
typedef std::size_t size_type;
typedef std::ptrdiff_t difference_type;
};
template

AMDG
On 03/18/2013 10:07 AM, Andy Jost wrote:
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Steven Watanabe
On 03/18/2013 08:30 AM, Andy Jost wrote:
Maybe looking at some code would help clarify the answers to these questions. I have about 300 lines partially implementing an adaptor for std::set using Boost.TypeErasure, plus a list of questions and problems I encountered. Is that too large to put into an email for this list? If so, where can I drop it?
That shouldn't be too large.
Ok, code is at the end; questions first. I realize many of these questions may be answered in the documentation.
Linux 2.6.9-89.ELlargesmp x86_64 g++ (GCC) 4.5.2
1) Certain functions require a downcast, of sorts. The overload of std::set<T>::insert taking an iterator as hint is an example. Not just any iterator, but one with exactly the right type is needed. Maybe this is what type_erasure::is_same is for, but I haven't looked into it yet.
This would fall under: http://boost-sandbox.sourceforge.net/doc/html/boost_typeerasure/multi.html
2) I start getting errors when the mpl::vector of requirements exceeds 20 elements: "error: wrong number of template arguments (21, should be 20)."
As a blind guess, I tried including boost/mpl/vector/vector30.hpp but it doesn't help. I know there's a good chance the answer is in the MPL docs somewhere, but I haven't had a chance to go deeper.
You need to use the numbered form for more than 20 elements, i.e. mpl::vector21<...>.
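For example (plain int elements, just to show the mechanics; a real requirements list would hold the concept classes instead):
#include <boost/mpl/vector/vector30.hpp>  // provides vector21 ... vector30
#include <boost/mpl/size.hpp>
typedef boost::mpl::vector21<
    int, int, int, int, int, int, int, int, int, int,
    int, int, int, int, int, int, int, int, int, int,
    int
> twenty_one_requirements;
int main()
{
    return boost::mpl::size<twenty_one_requirements>::value == 21 ? 0 : 1;
}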
3) I could not implement swap. If item 1 can be resolved, then this could work in the same way. Still, it seems that two any<> objects of the same type should be swappable (though, it is not obvious how that would be expressed). I couldn't find a way to do it.
I haven't implemented swap. It would need to look like this:
template<class T>
struct swappable {
static void apply(T& lhs, T& rhs)
{ using std::swap; swap(lhs, rhs); }
};
namespace boost { namespace type_erasure {
// a better match than std::swap
template
4) When members are overloaded on the constness of this, I could only use the const-qualified version. So, I can add a requirement for set<T>::clear, but for set<T>::begin, the return value must be specified as const_iterator.
That's odd. It works for me with msvc-11. I just committed a test case in r83490.
5) BOOST_TYPE_ERASURE_MEMBER cannot be used in a SEQ_FOR_EACH loop, because it uses SEQ_FOR_EACH internally. It would be handy to have a way to declare several members when the namespace path and prefix (e.g., has_) doesn't change, either by passing the z parameter or using another macro.
Sorry. BOOST_PP_SEQ_FOR_EACH is not re-entrant and I really don't want to try to work around it.
6) I could never get the bidirectional_iterator concept from TypeErasure working. I didn't find an example.
It should be:
bidirectional_iterator<_iter, T&>,
same_type
I ended up writing out the requirements myself. Also, that required the use of relaxed and I didn't understand why.
It's because of the default constructor.
any

5) BOOST_TYPE_ERASURE_MEMBER cannot be used in a SEQ_FOR_EACH loop, because it uses SEQ_FOR_EACH internally. It would be handy to have a way to declare several members when the namespace path and prefix (e.g., has_) doesn't change, either by passing the z parameter or using another macro.
Sorry. BOOST_PP_SEQ_FOR_EACH is not re-entrant and I really don't want to try to work around it.
There are two fairly easy ways to work around this. One is to use deferred expressions, like this:
#define EMPTY()
#define DEFER(x) x EMPTY()
#define EXPAND(x) x
#define PP_SEQ_FOR_EACH_ID() BOOST_PP_SEQ_FOR_EACH
#define FLATTEN_II(r, data, x) x
#define FLATTEN_I(r, data, seq) DEFER(PP_SEQ_FOR_EACH_ID)()(FLATTEN_II, ~, seq)
#define FLATTEN(seq) EXPAND(BOOST_PP_SEQ_FOR_EACH(FLATTEN_I, ~, seq))
// expands to a b c d e f
FLATTEN ( ( (a)(b)(c) ) ( (d)(e)(f) ) )
The second way is to use sequence iteration, like this:
#define FLATTEN_II(r, data, x) x
#define FLATTEN_I(r, data, seq) BOOST_PP_SEQ_FOR_EACH(FLATTEN_II, ~, seq)
#define FLATTEN(seq) BOOST_PP_CAT(FLATTEN_1 seq, _END)
#define FLATTEN_1(x) (FLATTEN_I(x)) FLATTEN_2
#define FLATTEN_2(x) (FLATTEN_I(x)) FLATTEN_1
#define FLATTEN_1_END
#define FLATTEN_2_END
// expands to a b c d e f
FLATTEN ( ( (a)(b)(c) ) ( (d)(e)(f) ) )
However, sequence iteration doesn't allow for a data parameter to be passed to it. (Which may not be necessary in some cases.)
Thanks, Paul Fultz II

The second way is to use sequence iteration, like this:
Sorry, for the sequence iteration it should be more like this:
#define FLATTEN_II(r, data, x) x
#define FLATTEN_I(seq) BOOST_PP_SEQ_FOR_EACH(FLATTEN_II, ~, seq)
#define FLATTEN(seq) BOOST_PP_CAT(FLATTEN_1 seq, _END)
#define FLATTEN_1(x) (FLATTEN_I(x)) FLATTEN_2
#define FLATTEN_2(x) (FLATTEN_I(x)) FLATTEN_1
#define FLATTEN_1_END
#define FLATTEN_2_END
// expands to a b c d e f
FLATTEN ( ( (a)(b)(c) ) ( (d)(e)(f) ) )
Thanks, Paul Fultz II

Hi all, what is the code in the sandbox under https://svn.boost.org/svn/boost/sandbox/net/ ? I could not find any documentation about it, just curious what it could be. Regards, Sergei Degtiarev

On 19 March 2013 20:51, Serge Degtiarev
Hi all, what is the code in the sandbox under https://svn.boost.org/svn/boost/sandbox/net/ ? I could not find any documentation about it, just curious what it could be.
I would be interested in the status / history of this too, as I notice there appears to be a (thin, albeit synchronous) wrapper around SCTP. Regards Mark

what is the code in the sandbox under https://svn.boost.org/svn/boost/sandbox/net/ ? I could not find any documentation about it, just curious what it could be.
Do you mean cpp-netlib? http://cpp-netlib.org/latest/index.html I have been using it for some projects and it works great! Christian

No, the code in the sandbox looks like a low-level thin wrapper over the socket API, while cpp-netlib seems to work at the higher level of HTTP, etc.
I would be happy to contribute to the project, but I need to know its goals and motivation first.
Sergei
--- On Wed, 3/20/13, Christian Henning
what is the code in the sandbox under https://svn.boost.org/svn/boost/sandbox/net/ ? I could not find any documentation about it, just curious what it could be.
Do you mean cpp-netlib? http://cpp-netlib.org/latest/index.html I have been using it for some projects and it works great! Christian

On 3/20/2013 1:38 PM, Serge Degtiarev wrote:
No, the code in the sandbox looks like a low-level thin wrapper over the socket API, while cpp-netlib seems to work at the higher level of HTTP, etc. I would be happy to contribute to the project, but I need to know its goals and motivation first. Sergei
You might get more attention if you post a new message without replying to an unrelated posting. Oliver Kowalke looks to be the author of the 'net' code in the sandbox. He's also the author of the boost::context library, and a few others, IIRC. Jeff

On Mon, Mar 18, 2013 at 11:30 AM, Andy Jost
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Thorsten Ottosen
Well, I don't see a need for all the allocator stuff. What's the point? Do you have a similar need when using boost::function<> or boost::any? reference/const_reference can be T& and T const&. Perhaps we don't need pointer. One may choose a difference_type that is 64-bit to be generic.
Maybe looking at some code would help clarify the answers to these questions. I have about 300 lines partially implementing an adaptor for std::set using Boost.TypeErasure, plus a list of questions and problems I encountered. Is that too large to put into an email for this list? If so, where can I drop it?
-Andy
My opinions on all of this, starting with the original question of "any interest", etc.:
- Yes.
- "Sort of": I have had a number of occasions where I felt, for various reasons, that the trade-offs between run-time and compile-time polymorphism favoured run time, and thus would use, in some sense, a run-time polymorphic container.
- What do I mean by "sort of"? Well, I *never* needed a complete container abstraction; I only ever needed a small subset of a container's API to be polymorphic. If I needed to iterate over the container I may have used/written any_iterator or similar. If I needed to add to the container, maybe I abstracted a push-back interface or used a boost::function that could only add. Etc.
So:
1) I think many of the responses given so far fit into the above answer, i.e. responses like "yes, I could use something like that" really mean "I would use *part* of an abstract interface", and "no, why not use any_iterator or function<> or ..." mean "I would only use *part* of the abstract interface".
2) I lean toward thinking a full abstract interface is not very useful, to the point of being *wrong*. If I see a function like determine_foo(AbstractSet * set), does it really need the *complete* set interface? I highly doubt it. I'd prefer, and as a general rule almost always prefer, interfaces to be defined by the code that *calls* the interface. I would like to know which functions of AbstractSet the function actually calls and requires (i.e. interfaces on functions should be like concepts/requirements on templates). So unless you want to break down AbstractSet into its smaller sub-interfaces, I doubt I'd ever use your abstractions. I see that with Boost.TypeErasure you have them grouped into sections. Maybe those should each be separate interfaces, and then AbstractSet would be the combination of interfaces.
Tony
P.S. The interfaces should probably match the C++ concepts that the containers model. Maybe once we get concepts (lite?) into C++, we could make an automatic concept-to-runtime-interface mechanism?
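For instance, a sketch of that narrower style of interface (the names, the int element type, and the boost::function choice are illustrative only):
#include <boost/function.hpp>
#include <iostream>
#include <set>
// The algorithm needs only a membership test, so that is all it asks for.
bool needs_visit(int node, boost::function<bool(int)> const & is_marked)
{
    return !is_marked(node);
}
// Adapts any set-like container to that narrow interface.
template<class Set>
struct in_set
{
    explicit in_set(Set const & s) : set_(&s) {}
    bool operator()(int x) const { return set_->count(x) != 0; }
    Set const * set_;
};
int main()
{
    std::set<int> marked;
    marked.insert(7);
    std::cout << needs_visit(7, in_set<std::set<int> >(marked)) << ' '
              << needs_visit(8, in_set<std::set<int> >(marked)) << '\n';
    return 0;
}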
participants (15)
- Andy Jost
- Christian Henning
- Daniel James
- Gottlob Frege
- Jeff Flinn
- Jeffrey Lee Hellrung, Jr.
- Mark Blewett
- Michael Marcin
- Neil Groves
- Nevin Liber
- Paul A. Bristow
- paul Fultz
- Serge Degtiarev
- Steven Watanabe
- Thorsten Ottosen