Sorry for not writing back to this thread sooner.
It's been very informative, so thanks!
Larry,
I had this issue with a hypothetical matrix-computation problem, but
nothing very practical.
I'm sticking to smaller vectors now.
Best regards,
Rodrigo
On Fri, Jun 14, 2013 at 3:38 PM, Larry Evans wrote:
On 06/14/13 09:08, Larry Evans wrote:
On 06/14/13 04:39, Slava wrote:
On Thu, 13 Jun 2013 14:27:30 +0200, Larry Evans wrote:
It also appears that when TUPLE_SIZE >= 100, compile time goes up linearly.
I think that linear remark was wrong. It goes up by a factor of about 2 between 100 and 200, and by another factor of about 2 between 200 and 300. That's exponential, IIRC :( Of course I could be misunderstanding again. It happens :(
Take a look at memory usage too; as soon as it starts swapping, it's far from linear. Your peers will curse you for this code. The CDT indexer (eclipse) takes a lot of time and memory on such constructs. The size of the produced binary (due to debug info for all these small inlined internal helper functions) must not be forgotten either. It's hard to justify a half-GB executable to the colleagues.
-- Slava
[snip]
Yes. Looking at the performance monitor as it was compiling TUPLE_SIZE=500 showed almost all the memory was used and, IIRC, about 40% of swap.
Using the slim::accumulate template instead of python-generated summation code doesn't make much difference:
tuple_size  compile_time (s)
        10      2.01
        50      3.37
       100      6.26
       200     18.51
       300     43.79
       400    109.28
       500    195.42
Attached is the driver.
I did try to eyeball a fit with qtiplot, which gave:

compile_time = 12 * 2^(max(0, tuple_size - 100) / 100)

and looked quite similar for tuple_size >= 100.
--Larry
_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users