New library in the Vault: ConstantTimeSize

Hi,

Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity. You can find the sources, documentation and some tests in the Boost Vault:

http://www.boostpro.com/vault/index.php?action=downloadfile&filename=constant_time_size.zip&directory=Containers&

Any comments are welcome,
Vicente
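To make the idea concrete, here is a usage sketch of what such a policy-selectable wrapper could look like. The header, namespace and policy-tag names below are illustrative guesses on my part, not necessarily the ones used in the Vault package:

    #include <boost/constant_time_size/list.hpp>   // hypothetical header

    namespace cts = boost::constant_time_size;     // hypothetical namespace

    cts::list<int, cts::constant>       a;  // size() kept up to date on every change: O(1)
    cts::list<int, cts::linear>         b;  // size() walks the list: O(n), like a plain std::list
    cts::list<int, cts::quasi_constant> c;  // size() cached, recounted lazily after a splice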

On Wed, Oct 15, 2008 at 12:29:41AM +0200, vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
Isn't quasy spelled quasi? At least my dictionaries seem to agree here. -- Lars Viklund | zao@acc.umu.se | 070-310 47 07

----- Original Message ----- From: "Lars Viklund" <zao@acc.umu.se> To: <boost@lists.boost.org> Sent: Wednesday, October 15, 2008 3:25 PM Subject: Re: [boost] New library in the Vault: ConstantTimeSize
On Wed, Oct 15, 2008 at 12:29:41AM +0200, vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
Isn't quasy spelled quasi? At least my dictionaries seem to agree here.
Thanks, I will correct it. Vicente

vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
I think a wrapper is not a good idea, since you cannot provide O(1) splice if splice is O(n). Why not provide a new policy-based std::list implementation? While you're at it, you might add full support for allocators, which containers do not usually have, allocator v2 support, move semantics, in-place insertion...
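Mathias's point can be made concrete: if the wrapper keeps an element count so that size() is O(1), its range splice has no way to know how many nodes crossed over except by walking the range. A minimal sketch of where the cost goes (the wrapper below is my own illustration, not code from the Vault):

    #include <list>
    #include <iterator>
    #include <cstddef>

    // Illustration: a wrapper that maintains a count so size() is O(1).
    template <class T>
    class counted_list {
        std::list<T> impl_;
        std::size_t  size_ = 0;                       // updated by every mutating operation
    public:
        std::size_t size() const { return size_; }    // O(1)

        // Range splice: the number of moved nodes is unknown, so it must be counted.
        void splice(typename std::list<T>::iterator pos, counted_list& other,
                    typename std::list<T>::iterator first,
                    typename std::list<T>::iterator last)
        {
            const std::size_t n =
                static_cast<std::size_t>(std::distance(first, last));  // the O(n) walk lives here now
            impl_.splice(pos, other.impl_, first, last);               // O(1) pointer surgery
            size_       += n;
            other.size_ -= n;
        }
    };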

On Wed, Oct 15, 2008 at 5:07 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
I think a wrapper is not a good idea, since you cannot provide O(1) splice if splice is O(n).
Why not provide a new policy-based std::list implementation? While you're at it, you might add full support for allocators, which containers do not usually have, allocator v2 support, move semantics, in-place insertion...
I believe that boost::interprocess list already supports all (or most) of that. -- gpd

On Wed, Oct 15, 2008 at 6:28 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Giovanni Piero Deretta wrote:
I believe that boost::interprocess list already supports all (or most) of that.
Indeed, so that project would probably be more interesting as an addition to the interprocess containers
Interprocess list size, single element splice, all-elements splice are already constant complexity. Only range splice has linear complexity. Probably the project would have little utility there. (BTW, I think that interprocess.list is based on intrusive.list whose size and splice complexity are configurable via policies)
, which should probably also be renamed to have their own library.
Agree, currently they are hidden inside interprocess while they could have much more exposure if they were a top level boost library. -- gpd

----- Original Message ----- From: "Giovanni Piero Deretta" <gpderetta@gmail.com> To: <boost@lists.boost.org> Sent: Wednesday, October 15, 2008 6:47 PM Subject: Re: [boost] New library in the Vault: ConstantTimeSize
On Wed, Oct 15, 2008 at 6:28 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Giovanni Piero Deretta wrote:
I believe that boost::interprocess list already supports all (or most) of that.
Indeed, so that project would probably be more interesting as an addition to the interprocess containers
Interprocess list size, single element splice, all-elements splice are already constant complexity. Only range splice has linear complexity. Probably the project would have little utility there. (BTW, I think that interprocess.list is based on intrusive.list whose size and splice complexity are configurable via policies)
Yes, it has a template parameter option constant_time_size<>. That is where I took the name of the library from. Vicente
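For reference, this is the Boost.Intrusive option being talked about; the trade-off shows up directly in the typedefs (a sketch against the Boost.Intrusive interface as I remember it, so double-check the current reference):

    #include <boost/intrusive/list.hpp>

    struct node : public boost::intrusive::list_base_hook<> {
        int value;
    };

    // size() is O(1); the list stores a count, so splicing an arbitrary range
    // has to count (or be told) how many elements were moved.
    typedef boost::intrusive::list<node,
            boost::intrusive::constant_time_size<true> >  counted_ilist;

    // size() is O(n); splicing an arbitrary range stays O(1).
    typedef boost::intrusive::list<node,
            boost::intrusive::constant_time_size<false> > uncounted_ilist;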

Giovanni Piero Deretta wrote:
Interprocess list size, single element splice, all-elements splice are already constant complexity. Only range splice has linear complexity. Probably the project would have little utility there. (BTW, I think that interprocess.list is based on intrusive.list whose size and splice complexity are configurable via policies)
Correct, Interprocess list has an additional range splice method that takes the length of the range (often the user can calculate it), and it is of course O(1).
Agree, currently they are hidden inside interprocess while they could have much more exposure if they were a top level boost library.
I plan to do so, but I haven't found the time yet; meanwhile I'm still adding new features to those containers: the SVN trunk Interprocess containers now have placement insertion. Regards, Ion
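For the curious, the overload Ion describes has roughly the shape below. These containers later migrated into Boost.Container, and the sketch assumes the overload is unchanged there; check the current reference for the exact signature:

    #include <boost/container/list.hpp>
    #include <iterator>

    int main()
    {
        boost::container::list<int> a, b;
        for (int i = 0; i < 10; ++i) b.push_back(i);

        boost::container::list<int>::iterator first = b.begin();
        boost::container::list<int>::iterator last  = first;
        std::advance(last, 4);             // the caller already knows the range length: 4

        // The standard range splice would have to count the range to keep size() O(1);
        // this overload takes the count from the caller and stays O(1).
        a.splice(a.begin(), b, first, last, 4);

        return static_cast<int>(a.size()); // 4, still O(1)
    }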

----- Original Message ----- From: "Ion Gaztañaga" <igaztanaga@gmail.com> To: <boost@lists.boost.org> Sent: Thursday, October 16, 2008 6:03 PM Subject: Re: [boost] New library in the Vault: ConstantTimeSize
Giovanni Piero Deretta wrote:
Interprocess list size, single element splice, all-elements splice are already constant complexity. Only range splice has linear complexity. Probably the project would have little utility there. (BTW, I think that interprocess.list is based on intrusive.list whose size and splice complexity are configurable via policies)
Correct, Interprocess list has an additional range splice method that takes the length of the range (often the user can calculate it), and it is of course O(1).
This method is also included in my library. Vicente

----- Original Message ----- From: "Mathias Gaunard" <mathias.gaunard@ens-lyon.org> To: <boost@lists.boost.org> Sent: Wednesday, October 15, 2008 5:07 PM Subject: Re: [boost] New library in the Vault: ConstantTimeSize
vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
I think a wrapper is not a good idea, since you cannot provide O(1) splice if splice is O(n).
You are right, we cannot wrap an O(n) splice and get an O(1) one. The main goal of the library is to improve the list::size function.
Why not provide a new policy-based std::list implementation?
Well, this would be an interesting project, but in the project where I use it I needed to interface with third-party libraries taking std::list as parameters, so I needed to wrap it.
While you're at it, you might add full support for allocators, which containers do not usually have, allocator v2 support, move semantics, in-place insertion...
Thanks for the suggestion, but this is out of the scope of this mini library ;) Vicente

Mathias Gaunard wrote:
vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
I think a wrapper is not a good idea, since you cannot provide O(1) splice if splice is O(n).
You can cache the size value and keep splice O(1), which will give you O(1) size() up until you call splice. This is much better than having O(n) size() all the time.
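A minimal sketch of that caching scheme (names and layout are mine, not taken from the Vault sources): size() returns the cached count while it is valid, and recounts once, lazily, after a splice has invalidated it.

    #include <list>
    #include <cstddef>

    // Illustration of a "quasi-constant" size: splice stays O(1),
    // size() is O(1) except for the first call after a splice.
    template <class T>
    class lazily_counted_list {
        std::list<T>        impl_;
        mutable std::size_t size_  = 0;
        mutable bool        valid_ = true;    // false => size_ is stale
    public:
        void push_back(const T& v) { impl_.push_back(v); if (valid_) ++size_; }

        void splice(typename std::list<T>::iterator pos, lazily_counted_list& other,
                    typename std::list<T>::iterator first,
                    typename std::list<T>::iterator last)
        {
            impl_.splice(pos, other.impl_, first, last);  // O(1), no counting
            valid_ = other.valid_ = false;                // both counts are now unknown
        }

        std::size_t size() const {
            if (!valid_) {
                size_  = impl_.size();   // recount once (O(n) with a pre-C++11 std::list)
                valid_ = true;
            }
            return size_;
        }
    };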

----- Original Message ----- From: "Andrey Semashev" <andrey.semashev@gmail.com> To: <boost@lists.boost.org> Sent: Thursday, October 16, 2008 7:01 PM Subject: Re: [boost] New library in the Vault: ConstantTimeSize
Mathias Gaunard wrote:
vicente.botet wrote:
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
I think a wrapper is not a good idea, since you cannot provide O(1) splice if splice is O(n).
You can cache the size value and keep splice O(1), which will give you O(1) size() up until you call splice. This is much better than having O(n) size() all the time.
This is exactly what the quasi_constant option does in my library. Vicente

on Tue Oct 14 2008, "vicente.botet" <vicente.botet-AT-wanadoo.fr> wrote:
Hi,
Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
You can find the sources, documentation and some test in the Boost Vault.
Any comments are welcome,
Interesting, and probably useful. However, I would like any library using the proposed name to go much further: loop unrolling does not depend on random access, but on having an O(1) size operation, so I would like to see algorithm implementations that take advantage of it. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
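What Dave seems to have in mind, concretely: once n is known up front in O(1), a forward-only algorithm can run an unrolled body with roughly n/4 end-of-range checks instead of n. A sketch of such a for_each (my own illustration, not an existing Boost algorithm; whether it actually pays off on a list is exactly the question Luke raises below):

    #include <cstddef>

    // for_each over a forward range whose length is known in O(1).
    // Four applications per iteration, so the loop test runs about n/4 times.
    template <class ForwardIt, class Size, class F>
    F for_each_n_unrolled(ForwardIt it, Size n, F f)
    {
        for (; n >= 4; n -= 4) {
            f(*it); ++it;
            f(*it); ++it;
            f(*it); ++it;
            f(*it); ++it;
        }
        for (; n > 0; --n) { f(*it); ++it; }   // 0-3 leftover elements
        return f;
    }

    // Usage with a list whose size() is O(1):
    //   for_each_n_unrolled(l.begin(), l.size(), print_element);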

Boost.ConstantTimeSize defines a configurable wrapper to the stl container list providing a size function with constant, linear or quasy_constant complexity.
Dave Abrahams wrote:
Interesting, and probably useful.
However, I would like any library using the proposed name to go much further: loop unrolling does not depend on random access, but on having an O(1) size operation, so I would like to see algorithm implementations that take advantage of it.
Do we know that loop unrolling of iteration over a linked list is profitable? I would think that a conscientious programmer interested in loop unrolling of linked lists should pursue an analysis to measure the potential benefits that might be realized with current compilers and hardware before implementing a general solution along those lines.

This analysis would be somewhat challenging, because it is likely that a toy test case would fit in L1 cache and show promising results, while realistic usage of lists would lead to their elements being fragmented in the heap and a large proportion of L1 and L2 cache misses that dominates runtime for loop iteration. If the long-latency cache miss dominates the overhead of checking whether itr == list.end(), you will realize no benefit from loop unrolling on a modern architecture, which can mask the latency of the comparison and the overhead of the branch instruction that depends upon it within the latency of the cache miss of the next iteration of the loop under speculative execution.

It is conceivable that the benefit would be negligible in the common case. We won't know, however, unless someone does the analysis.

Regards, Luke

on Thu Oct 16 2008, "Simonson, Lucanus J" <lucanus.j.simonson-AT-intel.com> wrote:
Do we know that loop unrolling of iteration over a linked list is profitable?
Nope, not for sure.
I would think that a conscientious programmer interested in loop unrolling of linked lists should pursue an analysis to measure the potential benefits that might be realized with current compilers and hardware before implementing a general solution along those lines.

This analysis would be somewhat challenging, because it is likely that a toy test case would fit in L1 cache and show promising results, while realistic usage of lists would lead to their elements being fragmented in the heap and a large proportion of L1 and L2 cache misses that dominates runtime for loop iteration. If the long-latency cache miss dominates the overhead of checking whether itr == list.end(), you will realize no benefit from loop unrolling on a modern architecture, which can mask the latency of the comparison and the overhead of the branch instruction that depends upon it within the latency of the cache miss of the next iteration of the loop under speculative execution.

It is conceivable that the benefit would be negligible in the common case. We won't know, however, unless someone does the analysis.
Good points. I first realized the importance of O(1) size in the context of TMP, and translated my conclusions to the runtime world without considering cache effects. Testing is definitely needed. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

----- Original Message ----- From: "David Abrahams" <dave@boostpro.com> To: <boost@lists.boost.org> Sent: Thursday, October 16, 2008 10:55 PM Subject: Re: [boost] New library in the Vault: ConstantTimeSize
Any comments are welcome,
Interesting, and probably useful.
Thanks.
However, I would like any library using the proposed name to go much further: loop unrolling does not depend on random access, but on having an O(1) size operation, so I would like to see algorithm implementations that take advantage of it.
David, I'm a little bit lost. Could you be more explicit? Which algorithm implementations are you referring to? Call for proposals for a more adequate name for this modest library. Thanks, Vicente

on Thu Oct 16 2008, "vicente.botet" <vicente.botet-AT-wanadoo.fr> wrote:
However, I would like any library using the proposed name to go much further: loop unrolling does not depend on random access, but on having an O(1) size operation, so I would like to see algorithm implementations that take advantage of it.
David, I'm a little bit lost. Could you be more explicit? Which algorithm implementations are you referring to?
range-based for_each, transform, accumulate, etc. ... and maybe find, for example. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
participants (8)
- Andrey Semashev
- David Abrahams
- Giovanni Piero Deretta
- Ion Gaztañaga
- Lars Viklund
- Mathias Gaunard
- Simonson, Lucanus J
- vicente.botet