[typeof] MSVC8 limits & preprocessing typeof/vector.hpp

Recently I discovered that setting LIMIT_SIZE > 117 with MSVC8 produces an error:

...typeof/vector.hpp(58) : fatal error C1009: compiler limit : macros nested too deeply

That's quite unfortunate because frameworks such as Spirit easily produce type expressions too complex for this setting.

Well, we're at the preprocessor's and not the compiler's limits, so we can do something about it: changing the outer loops in typeof/vector.hpp to use BOOST_PP_LOCAL_ITERATE instead of BOOST_PP_REPEAT buys us a new maximum of 238 (and the same error for higher values, so it's still the preprocessor's limit).

While testing the hack described above, I noticed that preprocessing typeof/vector.hpp takes unpleasantly long, so I decided to kill two birds with one stone by improving performance and pushing the limits even further. That is, making typeof/vector.hpp load preprocessed files for LIMIT_SIZE ∈ { 50, 100, 150, 200, 250 }. As for the maximum: 250 it is - it's getting quite close to the limits of Boost.Preprocessor (and I still don't know the maximum number of template parameters allowed with MSVC8 ;-).

Here's my modifications: http://tinyurl.com/839s3

Cheers, Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Recently I discovered that setting LIMIT_SIZE > 117 with MSVC8 produces an error:
...typeof/vector.hpp(58) : fatal error C1009: compiler limit : macros nested too deeply
That's quite unfortunate because frameworks such as Spirit easily produce type expressions too complex for this setting.
Well, we're at the preprocessor's and not the compiler's limits so we can do something about it:
Changing the outer loops in typeof/vector.hpp to use BOOST_PP_LOCAL_ITERATE instead of BOOST_PP_REPEAT buys us a new maximum of 238 (and the same error for higher values, so it's still the preprocessor's limit).
OK, but this means we have to change the syntax of REGISTER macros to do something like: #include BOOST_TYPEOF_REGISTER_...(...) I remember you suggested this during the review. I am still not quite comfortable with the idea...
While testing my hack described above, I noticed that preprocessing typeof/vector.hpp takes unpleasantly long, so I decided to kill two birds with one stone by improving performance and pushing the limits even further. That is, making typeof/vector.hpp load preprocessed files for LIMIT_SIZE ∈ { 50, 100, 150, 200, 250 }. As for the maximum: 250 it is - it's getting quite close to the limits of Boost.Preprocessor (and I still don't know the maximum number of template parameters allowed with MSVC8 ;-).
Here's my modifications:
How much faster is it?

Regards, Arkadiy

Arkadiy Vertleyb wrote:
OK, but this means we have to change the syntax of REGISTER macros to do something like:
#include BOOST_TYPEOF_REGISTER_...(...)
I remember you suggested this during the review. I am still not quite comfortable with the idea...
No, it's a different pair of shoes I'm talking about here (in fact, having used the library some more, I found several situations that require REGISTER_* to be macros, so you can safely forget about my review suggestion). What I'm talking about here is purely an implementation issue and doesn't affect the interface at all (typeof/vector.hpp). I'll attach a patch that highlights the changes needed to make things work for 118 <= LIMIT_SIZE <= 238.
Here's my modifications:
^^^^^ BTW excuse my beautiful english... ;-)
How much faster is it?
Hard to say in general. It depends on the case:

bjam run in ${BOOST}/libs/typeof/test
=====================================
without preprocessed files: about 2.5 minutes
with preprocessed files:    about 1.5 minutes
original version:           about 2.5 minutes *

* slightly slower than the 1st

Now to the cases that hurt:

Test with LIMIT_SIZE = 117
==========================
without preprocessed files: 45 seconds
with preprocessed files:    13 seconds
original version:           52 seconds

Test with LIMIT_SIZE = 238
==========================
without preprocessed files: about 5.5 minutes
with preprocessed files:    about 0.5 minutes
original version:           <limit exceeded>

Test with LIMIT_SIZE = 250
==========================
without preprocessed files: <limit exceeded>
with preprocessed files:    about 0.5 minutes
original version:           <limit exceeded>

More benchmarks needed?

Just download the archive and copy the contents of the folder therein to ${BOOST}/boost/typeof. Setting the macro BOOST_TYPEOF_PREPROCESSING_MODE enforces preprocessing; otherwise preprocessed files are used.

Boost.Typeof rocks...
Cheers, Tobias

Index: boost/typeof/vector.hpp
===================================================================
RCS file: /cvsroot/boost/boost/boost/typeof/vector.hpp,v
retrieving revision 1.2
diff -u -r1.2 vector.hpp
--- boost/typeof/vector.hpp	9 Dec 2005 03:55:38 -0000	1.2
+++ boost/typeof/vector.hpp	6 Jan 2006 20:41:51 -0000
@@ -12,7 +12,9 @@
 #include <boost/preprocessor/repeat_from_to.hpp>
 #include <boost/preprocessor/cat.hpp>
 #include <boost/preprocessor/inc.hpp>
+#include <boost/preprocessor/dec.hpp>
 #include <boost/preprocessor/comma_if.hpp>
+#include <boost/preprocessor/iteration/local.hpp>

 #ifndef BOOST_TYPEOF_LIMIT_SIZE
 # define BOOST_TYPEOF_LIMIT_SIZE 50
@@ -20,7 +22,7 @@

 // iterator

-#define BOOST_TYPEOF_spec_iter(z, n, _)\
+#define BOOST_TYPEOF_spec_iter(n)\
 template<class V>\
 struct v_iter<V, mpl::int_<n> >\
 {\
@@ -31,7 +33,9 @@
 namespace boost { namespace type_of {
     template<class V, class Pos> struct v_iter; // not defined
-    BOOST_PP_REPEAT(BOOST_TYPEOF_LIMIT_SIZE, BOOST_TYPEOF_spec_iter, ~)
+    #define BOOST_PP_LOCAL_MACRO BOOST_TYPEOF_spec_iter
+    #define BOOST_PP_LOCAL_LIMITS (0,BOOST_PP_DEC(BOOST_TYPEOF_LIMIT_SIZE))
+    #include BOOST_PP_LOCAL_ITERATE()
 }}
 #undef BOOST_TYPEOF_spec_iter
@@ -44,7 +48,7 @@
 #define BOOST_TYPEOF_typedef_fake_item(z, n, _)\
     typedef mpl::int_<1> item ## n;
-#define BOOST_TYPEOF_define_vector(z, n, _)\
+#define BOOST_TYPEOF_define_vector(n)\
 template<BOOST_PP_ENUM_PARAMS(n, class P) BOOST_PP_COMMA_IF(n) class T = void>\
 struct vector ## n\
 {\
@@ -55,7 +59,10 @@
 namespace boost { namespace type_of {
-    BOOST_PP_REPEAT(BOOST_PP_INC(BOOST_TYPEOF_LIMIT_SIZE), BOOST_TYPEOF_define_vector, ~)
+    #define BOOST_PP_LOCAL_MACRO BOOST_TYPEOF_define_vector
+    #define BOOST_PP_LOCAL_LIMITS (0,BOOST_TYPEOF_LIMIT_SIZE)
+    #include BOOST_PP_LOCAL_ITERATE()
+
 }}
 #undef BOOST_TYPEOF_typedef_item
@@ -64,7 +71,7 @@

 // push_back

-#define BOOST_TYPEOF_spec_push_back(z, n, _)\
+#define BOOST_TYPEOF_spec_push_back(n)\
 template<BOOST_PP_ENUM_PARAMS(n, class P) BOOST_PP_COMMA_IF(n) class T>\
 struct push_back<BOOST_PP_CAT(boost::type_of::vector, n)<BOOST_PP_ENUM_PARAMS(n, P)>, T>\
 {\
@@ -76,7 +83,9 @@
 namespace boost { namespace type_of {
     template<class V, class T> struct push_back; // not defined
-    BOOST_PP_REPEAT(BOOST_TYPEOF_LIMIT_SIZE, BOOST_TYPEOF_spec_push_back, ~)
+    #define BOOST_PP_LOCAL_MACRO BOOST_TYPEOF_spec_push_back
+    #define BOOST_PP_LOCAL_LIMITS (0,BOOST_PP_DEC(BOOST_TYPEOF_LIMIT_SIZE))
+    #include BOOST_PP_LOCAL_ITERATE()
 }}
 #undef BOOST_TYPEOF_spec_push_back

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Arkadiy Vertleyb wrote:
OK, but this means we have to change the syntax of REGISTER macros to do something like:
#include BOOST_TYPEOF_REGISTER_...(...)
I remember you suggested this during the review. I am still not quite comfortable with the idea...
No, it's a different pair of shoes I'm talking about here (in fact, having used the library some more, I found several situations that require REGISTER_* to be macros, so you can safely forget about my review suggestion).
What I'm talking about here is purely an implementation issue and doesn't affect the interface at all (typeof/vector.hpp). I'll attach a patch that highlights the changes needed to make things work for 118 <= LIMIT_SIZE <= 238.
Of course, I already realized that I was saying some nonsense :-(
Here's my modifications:
^^^^^ BTW excuse my beautiful english... ;-)
How much faster is it?
Hard to say in general. It depends on the case:
bjam run in ${BOOST}/libs/typeof/test
=====================================
without preprocessed files: about 2.5 minutes
with preprocessed files:    about 1.5 minutes
original version:           about 2.5 minutes *
* slightly slower than the 1st
Now to the cases that hurt:
Test with LIMIT_SIZE = 117
==========================
without preprocessed files: 45 seconds
with preprocessed files:    13 seconds
original version:           52 seconds

Test with LIMIT_SIZE = 238
==========================
without preprocessed files: about 5.5 minutes
with preprocessed files:    about 0.5 minutes
original version:           <limit exceeded>

Test with LIMIT_SIZE = 250
==========================
without preprocessed files: <limit exceeded>
with preprocessed files:    about 0.5 minutes
original version:           <limit exceeded>
Very good.
More benchmarks needed?
No. Can you commit this? Just put a comment into vector.hpp about what needs to be done when it changes (how to generate the preprocessed files), and add your copyright to vector.hpp and the generated files. If you have some tests, can you add them, too?

Regards, Arkadiy

Arkadiy Vertleyb wrote:
No. Can you commit this?
Well, then someone would have to give me CVS write access, I guess. My SourceForge user name is t_schwinger. While having CVS access could be a good thing in general, it's probably easier to use a ZIP this time.
Just put a comment into vector.hpp about what needs to be done when it changes (how to generate the preprocessed files), and add your copyright to vector.hpp and generated files.
Just added some comments and updated the ZIP (I only put my copyright into the files involved in preprocessing, because there is none of my code in the preprocessed files). I put the preprocessing script into $(BOOST_ROOT)/libs/typeof/tools. If you want it to live somewhere else, the relative path to the destination files inside it has to be updated (there are comments inside).
If you have some tests, can you add them, too?
The programs I used for testing (except those in libs/typeof/test already) are heavily based on Boost.Spirit and Boost.Preprocessor, so IMO they are not too well suited to be official Boost.Typeof tests. Since vector50.hpp is generated by the same code as vector250.hpp, I don't even know if any further testing is really that necessary. If so, adding more-or-less arbitrary tests with LIMIT_SIZE ∈ { 100, 150, 200, 250 } will do.

-- Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
While having CVS access could be a good thing in general, it's probably easier to use a ZIP, this time.
For some reason it generates empty files... What I did was replace vector.hpp, copy preprocess.pl into libs/typeof/tools, and run the pipe version of your example. It reported files being written, but all it wrote was the copyright info... My knowledge of Perl is equal to zero :-( Can you verify that this is indeed working for you before I try to figure out what's wrong?

Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
While having CVS access could be a good thing in general, it's probably easier to use a ZIP this time.
For some reason it generates empty files...
What I did was to replace vector.hpp, copy preprocess.pl into libs/typeof/tools, and run the pipe version of your example. It reported files being written, but all it wrote was copyright info...
Yeah, it's all my fault, sorry. I just found out I messed up the preprocessing code by changing the name of a macro (but not everywhere). vector.hpp (from line 24 on) should read:

#if defined(BOOST_TYPEOF_PP_INCLUDE_EXTERNAL)
#   undef BOOST_TYPEOF_PP_INCLUDE_EXTERNAL // <-- this line was bad
#   undef BOOST_TYPEOF_VECTOR_HPP_INCLUDED

The ZIP has been updated with a corrected version, too.
My knowledge of perl is equal to zero :-(
Then I should probably demystify that script for you:

The script beautifies and stores the code contained within it. After preprocessing, the Perl interpreter sees something like this as the main program:

$sewer = <<'some_cryptic_end_marker'
< code from external files we #include and which we don't wanna store stands here >
some_cryptic_end_marker
;
$sewer = '';

&write_down("my_preprocessed_file.hpp",<<'some_cryptic_end_marker'
< the preprocessed code we want to keep >
[...]
some_cryptic_end_marker
);

&write_down("another_preprocessed_file.hpp",<<'some_cryptic_end_marker'
< more preprocessed code we want to keep >
[...]
some_cryptic_end_marker
);

[...]

This discards an inline section, a.k.a. a here-document: a string literal spanning multiple lines (of C++, in this case). It then invokes the subroutine 'write_down' with its first argument being the file name and its second argument being the content to store (another here-document).

The subroutine write_down runs a regex over the code (passed in via the second parameter) to remove all the stuff we don't want our preprocessed code to contain, and stores it in a file with a license header.

That's all (the rest in there is just portable path twiddling and error handling)!

-- Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
vector.hpp (from line 24 on) should read:
#if defined(BOOST_TYPEOF_PP_INCLUDE_EXTERNAL)
# undef BOOST_TYPEOF_PP_INCLUDE_EXTERNAL // <-- this line was bad
# undef BOOST_TYPEOF_VECTOR_HPP_INCLUDED
Now all the files are OK except vector50.hpp, which is empty again :-( Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
Now all the files are OK except vector50.hpp, which is empty again :-(
If something can go wrong, it will. (One version of Murphy's law.) Curse!

To avoid further embarrassment, today's update has been tested properly (you never know)...

There was another typo in preprocess.pl:

#include <boost/mpl/vector.hpp>

should be:

#include <boost/typeof/vector.hpp>

-- Tobias

"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
There was another typo in preprocess.pl
#include <boost/mpl/vector.hpp>
should be:
#include <boost/typeof/vector.hpp>
Now it's working... However, I ran into a problem that I think needs to be addressed somehow: even if LIMIT_SIZE is set to, say, 60, vector100.hpp gets included, and vc71 fails due to the maximum number of template parameters being exceeded (although this can be worked around by defining the preprocessing mode). It's possible that other compilers have a similar problem.

It should be possible to define a macro (or header) which would create vector specializations from m to n, and then use this macro to generate the preprocessed files. Then vector.hpp would look something like this:

#if LIMIT_SIZE < 50
REGISTER_VECTOR(0, LIMIT_SIZE)
#elif LIMIT_SIZE < 100
#include <boost/typeof/vector50.hpp>
REGISTER_VECTOR(50, LIMIT_SIZE)
#elif ...

This would become slower for intermediate numbers, since up to 50 specializations might need to be generated on the fly...

Thoughts?

Regards, Arkadiy

Arkadiy Vertleyb wrote:
"Tobias Schwinger" <tschwinger@neoscientists.org> wrote
There was another typo in preprocess.pl
#include <boost/mpl/vector.hpp>
should be:
#include <boost/typeof/vector.hpp>
Now it's working...
It was about time ;-).
However I ran into a problem that I think needs to be addressed somehow, namely even if LIMIT_SIZE is set to, say, 60, the vector100.hpp gets included, and vc71 fails due to max number of template parameters exceeded (although it can be worked around by defining the preprocessing mode).
It's possible that other compilers might have similar problem.
It should be possible to define a macro (or header) which would create vector specializations from m to n, and then use this macro to generate preprocessed files. Then vector.hpp would look something like this:
#if LIMIT_SIZE < 50
REGISTER_VECTOR(0, LIMIT_SIZE)
#elif LIMIT_SIZE < 100
#include <boost/typeof/vector50.hpp>
REGISTER_VECTOR(50, LIMIT_SIZE)
#elif ...
This would become slower for intermediate numbers, since up to 50 specializations might need to be generated on the fly...
Thoughts?
Well, I just implemented this kind of "partial preprocessing". Benchmarks:

LIMIT_SIZE | seconds
-----------|--------
 49 &      | 5
 50 &      | 2
149        | 36
150 *      | 19
199        | 50
200 *      | 23
238 #      | 49

(&) only #include <boost/typeof/typeof.hpp> used for these cases
(*) no preprocessing needed in these cases
(#) maximum, due to MSVC8 preprocessor limits

The (tested ;-)) code: http://tinyurl.com/9qq5h

Adding preprocessed files for the exact maxima of the compilers in question with the previous version could be an alternative. Of course, this technique can be applied to the "partial preprocessing" version as well (in case it turns out there is a compiler which allows at most 199 template parameters, for instance).

Regards, Tobias
participants (2)
-
Arkadiy Vertleyb
-
Tobias Schwinger