
On Thu, 2 Sep 2004 09:24:11 -0400, Arkadiy Vertleyb <vertleyb@hotmail.com> wrote:
"Peder Holt" <peder.holt@gmail.com> wrote
The problem with this is compile times, as the message thread referenced above indicates. This approach causes the preprocessor to generate a lot of code (for vector1, vector2, vector3, vector4, etc.), making compile times soar as the vector size goes up.
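(For illustration only, a rough sketch of the kind of expansion meant here -- made-up names, not the actual Boost.Typeof or MPL sources. One numbered template is emitted per arity, so raising the upper bound makes the preprocessor produce proportionally more code for the compiler to parse:)

    // Hypothetical sketch, not the real sources: one numbered class
    // template is generated per arity, so the amount of text the compiler
    // must parse grows with the chosen maximum size.
    #include <boost/preprocessor/repetition/repeat_from_to.hpp>
    #include <boost/preprocessor/repetition/enum_params.hpp>

    #define DEFINE_VECTOR_N(z, n, unused)              \
        template< BOOST_PP_ENUM_PARAMS(n, class T) >   \
        struct vector##n {};

    // Expands to vector1<T0>, vector2<T0, T1>, ..., vector19<T0, ..., T18>.
    BOOST_PP_REPEAT_FROM_TO(1, 20, DEFINE_VECTOR_N, ~)

    #undef DEFINE_VECTOR_N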
Note that this is done once per compilation unit (and can be easily taken care of by the pre-compiled headers feature on the systems that support them).
When considering performance, we should clearly separate what's happening once per translation unit from what's happening on a per-typeof-invocation basis.
Please also note that your "compile-time variables" require instantiating a template (or possibly even three templates) every time a variable is set.
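(Again just to make the cost concrete, a minimal sketch of one common way such compile-time "variables" are emulated -- hypothetical names, not Peder's actual code. Reading is a sizeof() over an overload set, and every assignment introduces a new overload plus at least one fresh template instantiation:)

    // Hypothetical sketch of a sizeof-based compile-time "variable";
    // not the implementation under discussion.
    template<int N>
    struct size_holder { char data[N]; };        // sizeof(data) == N

    struct slot {};                              // tag identifying one variable

    size_holder<1> read(slot);                   // initial "value" is 1
    enum { value_before = sizeof(read(slot()).data) };           // == 1

    // "Assigning" 5 means introducing a new, better-matching overload,
    // which in turn instantiates size_holder<5>.
    struct slot_after_set : slot {};
    size_holder<5> read(slot_after_set);
    enum { value_after = sizeof(read(slot_after_set()).data) };  // == 5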
Also, you force all users of typeof to work with very large mpl vectors.
I don't think so. As I said, I will most likely stop using BOOST_MPL_LIMIT_VECTOR_SIZE, and use my own limit N to directly work with mpl::vectorN<>.
And as far as I understand MPL, the fact that I am working with, e.g., mpl::vector256<> doesn't prevent other facilities in the same translation unit from using mpl::vector3<>.
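(A small sketch of that point, assuming the stock MPL per-size headers, which ship numbered forms up to vector50 -- so vector40 is used here instead of vector256, which would have to be generated. Using a wide numbered form in one place does not constrain other code in the same translation unit:)

    #include <boost/mpl/vector.hpp>            // mpl::vector<> and the small numbered forms
    #include <boost/mpl/vector/vector40.hpp>   // numbered forms vector31..vector40
    #include <boost/mpl/size.hpp>
    #include <boost/static_assert.hpp>

    namespace mpl = boost::mpl;

    // Typeof-style bookkeeping could use a wide, fixed-arity numbered vector...
    typedef mpl::vector40<
        int, int, int, int, int, int, int, int, int, int,
        int, int, int, int, int, int, int, int, int, int,
        int, int, int, int, int, int, int, int, int, int,
        int, int, int, int, int, int, int, int, int, int
    > wide_sequence;

    // ...while unrelated code in the same translation unit keeps using the
    // small forms and the plain mpl::vector<> (whose arity is capped by
    // BOOST_MPL_LIMIT_VECTOR_SIZE), completely unaffected.
    typedef mpl::vector3<char, short, long> small_sequence;
    typedef mpl::vector<float, double>      default_sequence;

    BOOST_STATIC_ASSERT(mpl::size<wide_sequence>::value  == 40);
    BOOST_STATIC_ASSERT(mpl::size<small_sequence>::value == 3);
    BOOST_STATIC_ASSERT(mpl::size<default_sequence>::value == 2);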
Ok. I agree. If you work directly with the vectorN templates, mpl::vector is not affected. Given the need for random access, MPL is probably the best choice after all. I'll do some research on your code (when I get the time, and you have the new version ready) to test some of my ideas and see whether I have a point or not :)

-- Peder Holt
Regards, Arkadiy