vector vs list speed test
I'm just wondering what your thoughts are on my findings.

I need a container that meets the following:
- forward iteration only
- insert to end only
- no random access
- delete from any position
- average size of the list will be 64 items

My first instinct was to use list over vector, because there is no random access and deletes can happen at any position. I ran some performance tests, which suggested otherwise.

The test builds a container by inserting at the end, then removes all items from the front until size == 0. (Note: removing from index [0] should be the worst case, linear complexity, for vector.)

Here are my findings for containers of size 64, 256 and 1024. The first result uses vector, the second uses list. I ran the test in release mode in MS VC++ 6.

Test build/remove 64      0.062    0.234
Test build/remove 256     0.422    0.938
Test build/remove 1024    5.844    3.75

- It seems vector is best for smaller lists. Even at 256 items, the vector is more than twice as fast, despite vector building in amortized constant time but deleting from the front in worst-case linear time.
- Is this strange behaviour? Is my test correct, or do I have a bug?
- I am also curious how this runs on other compilers (especially later versions of MS VC++).

For those curious, here is my code. Feel free to pick at it ;)

void TestVectBuildDelete(int redoCount, int listSize)
{
    boost::timer t;
    for (int i = 0; i < redoCount; i++)
    {
        // build the list
        std::vector<int> vect;
        for (int j = 0; j < listSize; j++)
        {
            vect.push_back(j);
            //vect.insert(vect.begin(), j); //.push_front(j);
        }
        // remove all elements in the list
        while (!vect.empty())
        {
            vect.erase(vect.begin());
        }
    }
    std::cout << t.elapsed() << std::endl;
}

void TestListBuildDelete(int redoCount, int listSize)
{
    boost::timer t;
    for (int i = 0; i < redoCount; i++)
    {
        // build the list
        std::list<int> lst;
        for (int j = 0; j < listSize; j++)
        {
            lst.push_back(j);
        }
        // remove all elements in the list
        while (!lst.empty())
        {
            lst.pop_front();
        }
    }
    std::cout << t.elapsed() << std::endl;
}

void Test()
{
    std::cout << "Test build/remove 64" << std::endl;
    TestVectBuildDelete(10000, 64);
    TestListBuildDelete(10000, 64);

    std::cout << "Test build/remove 256" << std::endl;
    TestVectBuildDelete(10000, 256);
    TestListBuildDelete(10000, 256);

    std::cout << "Test build/remove 1024" << std::endl;
    TestVectBuildDelete(10000, 1024);
    TestListBuildDelete(10000, 1024);
}
I've done some more tests. It seems building a list is significantly slower than building a vector (this is where the discrepancy is coming from). Can anyone suggest why this is so? Is the list<T>::push_back bottleneck due to memory allocation for each node?

*NB: sorry, this isn't really Boost related, but I found it interesting ;)
Accessing the heap for every list edit can take significantly more time than the memcpy a vector<int> might do when you delete from the front of a small vector. Try making the list use an allocator from boost::pool; I bet you will see much better results!

-- Cory Nelson
http://www.int64.org
OK, results change when I use a larger object. Clearly the list is faster:

Test build/remove 64      0.812    0.531
Test build/remove 256     8.688    2.016
Test build/remove 1024    126.487  11.032

The object I used:

class SmallObject
{
public:
    SmallObject()
    {
        v = 0;
        s = "hello";
    }

    int v;
    std::string s;
};
However, I only plan on using small datatypes; for anything large I will use pointers or shared_ptr<T>, so the results for the larger object are not very relevant to my needs.
FWIW, if the objects are larger, so that you would tend to use std::list<T>, then boost::ptr_vector<T> or boost::ptr_deque<T> probably performs better, as long as the major use is not insertion in the middle of the data structure. Try it out:

http://www.boost.org/libs/ptr_container/doc/ptr_container.html

-Thorsten
You can check how a vector's capacity changes while pushing back new elements:

unsigned int capacity = 0;
std::vector<int> vec;
for (unsigned int i = 0; i < 10000; ++i)
{
    vec.push_back(i);
    if (vec.capacity() != capacity)
    {
        capacity = vec.capacity();
        std::cout << capacity << std::endl;
    }
}

output:

1 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384

So it doubles its capacity whenever it hits its boundary. That's probably why building a vector is faster.
I played some more and tried a deque. I used your code, just replacing vector with deque. These are my timings (the third value is deque):

Test build/remove 64      0.06   0.17   0.06
Test build/remove 256     0.31   0.72   0.23
Test build/remove 1024    2.97   2.82   0.91

So maybe you should go for a deque?
There is a simple rule for choosing a container: if the number of elements is not huge, use vector. The reason is that although the complexity of, for instance, deleting an element from a vector is greater than that of a list, such big-O differences (here linear time vs. constant time) only matter if the containers are big enough to swamp the constant factor, which for containers of small objects like doubles frequently doesn't happen until container sizes exceed several thousand elements.
Another factor, which affects the constant Mikhail is talking about, is whether the vector can fit into cache. This will make a huge difference. If you could allocate your nodes close together in memory (i.e. use Cory's suggestion of boost::pool), you should be able to reap more cache-coherency benefit in your list too.
Thanks for the deque tip. I however need to remove from any index (e.g. remove the 3rd element); I just used the remove-from-head example in the code to get the worst-case scenario.

Yes, it is interesting that vector performs well when datatypes are small and there are few items (even when size() is 256).

I will be using boost::shared_ptr<T>, so I'll have to do some tests with this datatype.

So vector may be the best bet for the following requirements:
- only insert to end
- remove from any index
- average size of 64 items
- frequent iteration
- datatype: boost::shared_ptr<T>

I don't like the fact that vector resizes often when small. To get around this I may set the capacity to 64. Memory shouldn't be an issue these days... agreed?

I ran some tests on the impact of the default capacity; however, I didn't find much difference in performance.
participants (6)
- Boris Breidenbach
- Brian Budge
- bringiton bringiton
- Cory Nelson
- Mikhail Kubzin
- Thorsten Ottosen