[PREDEF] Review for the Boost.Predef library by Rene Rivera

The review of Boost.Predef by Rene Rivera is scheduled from Monday, February 20th, through February 29th.

================
Contents & Scope
================

Boost.Predef defines a set of compiler, architecture, operating system, and library version numbers from the information it can gather from C++ predefined macros or those defined in generally available headers. The idea for this library grew out of a proposal to extend the Boost Config library to provide more, and consistent, information than the feature definitions it supports. What follows is an edited version of that brief proposal.

===============
How to get it?
===============

Sources and documentation can be retrieved from http://tinyurl.com/73n6a3k

================
Writing a Review
================

Reviews and all comments should be submitted to the developers list, and the email should have "[PREDEF] Review" at the beginning of the subject line to make sure it's not missed. Please state explicitly in your review whether the library should be accepted. The general review checklist:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

And finally, every review should answer this question:

- Do you think the library should be accepted as a Boost library?

Be sure to say this explicitly so that your other comments don't obscure your overall opinion.

On 2/18/2012 8:08 AM, Joel Falcou wrote:
=============== How to get it ? =============== Sources and documentation can be retrieved from http://tinyurl.com/73n6a3k
You can also find the sources and docs in the sandbox <http://svn.boost.org/svn/boost/sandbox/predef/>. Browsable docs at <http://svn.boost.org/svn/boost/sandbox/predef/libs/predef/doc/html/index.html>.

Rene.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Hi,

I guess it is OK to ask about details even if the library isn't in review yet, right? I'm not an expert in architecture, so I hope I'm not asking something obvious.

My understanding is that, for example, if you're targeting classic PC hardware, you'll get BOOST_ARCHITECTURE_X86 for a 32-bit OS and BOOST_ARCHITECTURE_AMD64 for a 64-bit OS (running on a 64-bit processor). That assumes the hardware has a "normal" Intel or AMD processor. If it doesn't, it can be some other kind of architecture (ARM or another one listed in the defines).

Would it be possible to also provide macros that would tell you the "bit size" (I don't know the correct word here) without the architecture name? Something like:

BOOST_ARCHITECTURE_BITS_32
BOOST_ARCHITECTURE_BITS_64

Again, maybe I'm being naive about this, but I'm thinking about code that should behave differently under those conditions and shouldn't have to check all the corresponding architecture macros to detect whether it's 32 or 64 bits.

Anyway, it's a nice library that would allow me to not have to set some (certainly flawed) macros myself.

Joël Lamotte

On 2/18/2012 8:36 AM, Klaim - Joël Lamotte wrote:
I guess it is ok to ask about details even if the library isn't in review yet, right?
Sure.. As it certainly won't prevent me from answering them ;-)
I'm not an expert in architecture so I hope I'm not asking something obvious.
My understanding is that, for example, if you're targeting classic PC hardware, you'll get BOOST_ARCHITECTURE_X86 for a 32-bit OS and BOOST_ARCHITECTURE_AMD64 for a 64-bit OS (running on a 64-bit processor).
If you are targeting 64-bit you are more likely to get both BOOST_ARCHITECTURE_AMD64 and BOOST_ARCHITECTURE_IA64.
That assumes the hardware has a "normal" Intel or AMD processor. If it doesn't, it can be some other kind of architecture (ARM or another one listed in the defines).
Would it be possible to also provide macros that would tell you the "bit size" (I don't know the correct word here) without the architecture name? Something like BOOST_ARCHITECTURE_BITS_32 BOOST_ARCHITECTURE_BITS_64
It is possible.. But the question of what size you are measuring comes up first. The bit size could refer to pointer size, integer size, float size, register size, or the "word" size. Which suggests that you would want a more specific set of macros for the various measured types. Ideally it would be better to base such macros on something natively defined by the compiler, as determining these from the BOOST_ARCHITECTURE_* macros would be a bit of a chore.
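[Editor's note: a hedged sketch of deriving such macros from a compiler-native definition rather than from the architecture list. __SIZEOF_POINTER__ is a GCC/Clang predefine; the BOOST_ARCHITECTURE_BITS_* names are Joël's suggestion, not anything the library defines.]

    /* Hypothetical sketch: pointer-width macros from a compiler-native
       predefine. Only GCC-family compilers define __SIZEOF_POINTER__. */
    #if defined(__SIZEOF_POINTER__)
    #  if __SIZEOF_POINTER__ == 8
    #    define BOOST_ARCHITECTURE_BITS_64 1
    #  elif __SIZEOF_POINTER__ == 4
    #    define BOOST_ARCHITECTURE_BITS_32 1
    #  endif
    #endif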
Again, maybe I'm being naive about this, but I'm thinking about code that should behave differently under those conditions and shouldn't have to check all the corresponding architecture macros to detect whether it's 32 or 64 bits.
Not totally naive, and certainly a good abstraction idea.
Anyway, it's a nice library that would allow me to not have to set some (certainly flawed) macro myself.
Thanks. -- Rene Rivera

On Sat, Feb 18, 2012 at 03:01:49PM -0600, Rene Rivera wrote:
On 2/18/2012 8:36 AM, Klaim - Joël Lamotte wrote:
I guess it is ok to ask about details even if the library isn't in review yet, right?
Sure.. As it certainly won't prevent me from answering them ;-)
I'm not an expert in architecture so I hope I'm not asking something obvious.
My understanding is that, for example, if you're targeting classic PC hardware, you'll get BOOST_ARCHITECTURE_X86 for a 32-bit OS and BOOST_ARCHITECTURE_AMD64 for a 64-bit OS (running on a 64-bit processor).
If you are targeting 64-bit you are more likely to get both BOOST_ARCHITECTURE_AMD64 and BOOST_ARCHITECTURE_IA64.
The set of 64-bit architectures is a bit error-prone to enumerate explicitly in user code, as the set will inevitably be either incomplete or stale. Just consider SPARC, IBM's POWER, MIPS, not to mention the most glorious DEC Alpha. -- Lars Viklund | zao@acc.umu.se

On 02/18/2012 10:18 PM, Lars Viklund wrote:
On Sat, Feb 18, 2012 at 03:01:49PM -0600, Rene Rivera wrote:
On 2/18/2012 8:36 AM, Klaim - Joël Lamotte wrote:
I guess it is ok to ask about details even if the library isn't in review yet, right?
Sure.. As it certainly won't prevent me from answering them ;-)
I'm not an expert in architecture so I hope I'm not asking something obvious.
My understanding is that, for example, if you're targeting classic PC hardware, you'll get BOOST_ARCHITECTURE_X86 for a 32-bit OS and BOOST_ARCHITECTURE_AMD64 for a 64-bit OS (running on a 64-bit processor).
If you are targeting 64-bit you are more likely to get both BOOST_ARCHITECTURE_AMD64 and BOOST_ARCHITECTURE_IA64.
The set of 64-bit architectures is a bit error-prone to enumerate explicitly in user code, as the set will inevitably be either incomplete or stale.
Just consider SPARC, IBM's POWER, MIPS, not to mention the most glorious DEC Alpha.
Couldn't you just use static_if(sizeof(void*) == 8) for this? ;)

I'm not an expert in architecture so I hope I'm not asking something obvious.
My understanding is that, for example, if you're targeting classic PC hardware, you'll get BOOST_ARCHITECTURE_X86 for a 32-bit OS and BOOST_ARCHITECTURE_AMD64 for a 64-bit OS (running on a 64-bit processor).
If you are targeting 64-bit you are more likely to get both BOOST_ARCHITECTURE_AMD64 and BOOST_ARCHITECTURE_IA64.
The set of 64-bit architectures is a bit error-prone to enumerate explicitly in user code, as the set will inevitably be either incomplete or stale.
Just consider SPARC, IBM's POWER, MIPS, not to mention the most glorious DEC Alpha.
Couldn't you just use static_if(sizeof(void*) == 8) for this? ;)
Not at preprocessing time. Regards, Nate
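[Editor's note: to illustrate Nate's point — sizeof is a compile-time operator, not a preprocessing-time one, so it cannot appear in an #if. A sketch of a preprocessing-time approximation via the C99 limit macros; note it tests pointer width specifically, one of the several sizes Rene enumerated above.]

    /* Ill-formed: the preprocessor knows nothing about types.
       #if sizeof(void*) == 8   -- error
       A workable sketch via <stdint.h>: */
    #include <stdint.h>
    #if UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFu
    /* 64-bit pointers */
    #elif UINTPTR_MAX == 0xFFFFFFFFu
    /* 32-bit pointers */
    #endif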

On Sat, Feb 18, 2012 at 9:23 AM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 2/18/2012 8:08 AM, Joel Falcou wrote:
=============== How to get it ? =============== Sources and documentation can be retrieved from http://tinyurl.com/73n6a3k
You can also find the sources and docs in the sandbox <http://svn.boost.org/svn/boost/sandbox/predef/>. Browsable docs at <http://svn.boost.org/svn/boost/sandbox/predef/libs/predef/doc/html/index.html>.
Rene.
BOOST_VERSION_NUMBER() should be renamed to BOOST_MAKE_VERSION_NUMBER() or similar. Tony
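[Editor's note: for readers following along, a minimal sketch of the composition macro being discussed, assuming the 2/2/5 decimal split Rene describes later in the thread (major and minor in [0,99], patch in [0,99999]); the library's actual definition may differ in detail.]

    /* Sketch: compose a single comparable decimal version number. */
    #define BOOST_VERSION_NUMBER(major, minor, patch) \
        ((((major) % 100) * 10000000) \
         + (((minor) % 100) * 100000) \
         + ((patch) % 100000))

    /* e.g. BOOST_VERSION_NUMBER(4,5,3) == 40500003 */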

on Sat Feb 18 2012, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
On 2/18/2012 8:08 AM, Joel Falcou wrote:
=============== How to get it ? =============== Sources and documentation can be retrieved from http://tinyurl.com/73n6a3k
You can also find the sources and docs in the sandbox <http://svn.boost.org/svn/boost/sandbox/predef/>. Browsable docs at <http://svn.boost.org/svn/boost/sandbox/predef/libs/predef/doc/html/index.html>.
One thing I don't see is any distinction between the target being compiled for and the host on which the compilation is happening. You could make an argument that no such distinction is needed, but in the absence of such an argument I suggest changes like:

BOOST_ARCHITECTURE_XXX => BOOST_TARGET_ARCHITECTURE_XXX

Also, when BOOST_TARGET_ARCHITECTURE_XXX is 1, shouldn't we also have

#define BOOST_TARGET_ARCHITECTURE XXX

?

-- Dave Abrahams
BoostPro Computing
http://www.boostpro.com

On 2/27/2012 1:20 PM, Dave Abrahams wrote:
on Sat Feb 18 2012, Rene Rivera<grafikrobot-AT-gmail.com> wrote:
On 2/18/2012 8:08 AM, Joel Falcou wrote:
=============== How to get it ? =============== Sources and documentation can be retrieved from http://tinyurl.com/73n6a3k
You can also find the sources and docs in the sandbox <http://svn.boost.org/svn/boost/sandbox/predef/>. Browsable docs at <http://svn.boost.org/svn/boost/sandbox/predef/libs/predef/doc/html/index.html>.
One thing I don't see is any distinction between the target being compiled for and the host on which the compilation is happening. You could make an argument that no such distinction is needed, but in the absence of such an argument I suggest changes like BOOST_ARCHITECTURE_XXX => BOOST_TARGET_ARCHITECTURE_XXX
I don't think I made such an argument :-) The argument is way simpler than that. None of the compilers I'm aware of tell you *directly* what host architecture they are running on if it happens to be different from the target architecture. The best you can do is some user-defined predef, or something derived from running something like uname.

As for the name.. I'm open to suggestions. But as was mentioned earlier, the name is rather long already, which many people already object to.

In general I'm all for detecting as much as we can from the compilers, assuming that it's something that is generally possible, as detecting something for only one or two compilers seems like a waste of development time. But then, the Predef library is also about giving users a way to define such detected version numbers on their own. Which is something that seems to have been lost, so I'm pointing it out for clarity.
Also, when BOOST_TARGET_ARCHITECTURE_XXX is 1, shouldn't we also have
#define BOOST_TARGET_ARCHITECTURE XXX
?
What would the "XXX" be in the define? ..The only option I can see is defining a string literal, which is only useful for informational debugging and not for the more general decision testing. -- Rene Rivera

on Tue Feb 28 2012, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
On 2/27/2012 1:20 PM, Dave Abrahams wrote:
One thing I don't see is any distinction between the target being compiled for and the host on which the compilation is happening. You could make an argument that no such distinction is needed, but in the absence of such an argument I suggest changes like BOOST_ARCHITECTURE_XXX => BOOST_TARGET_ARCHITECTURE_XXX
I don't think I made such an argument :-) The argument is way simpler than that. None of the compilers I'm aware of tell you *directly* what host architecture they are running on if it happens to be different from the target architecture. The best you can do is some user-defined predef, or something derived from running something like uname.
As for the name.. I'm open to suggestions. But as was mentioned earlier, the name is rather long already, which many people already object to.
Also, when BOOST_TARGET_ARCHITECTURE_XXX is 1, shouldn't we also have
#define BOOST_TARGET_ARCHITECTURE XXX
?
What would the "XXX" be in the define? ..The only option I can see is defining a string literal, which is only useful for informational debugging and not for the more general decision testing.
Good points. I don't think I was thinking very clearly :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Hello,

This is my review:
- What is your evaluation of the design?
I personally do not like it, for two reasons:

When you include a header like <boost/predef.h> you get 103 files included... 103 stats, fopens, fcloses, reads, parses and so on. More than that, consider build systems that have to stat 103 files for each cpp file compiled.

I know that this library was created as "header only", but Boost has enough header-only configurations and not a single normal compile-time configuration. I assume that a better configuration can be created using compilation rather than explicit defines tests; it would also be much more extensible for future directions.

I think it would be great if a single file like <boost/conf_predef.h> would include all definitions. Something that can be generated at build time and would include all needed defines, making the "include" much faster and easier to read. Also, I'd merge multiple files like boost/predef/os/* into a single one to improve compilation times.

Generating a header at build time would also allow extending the macros to new definitions that can't be done today, for example:

- BOOST_POINTER_SIZE 4 or 8 (or even 2)
- BOOST_WCHAR_T_SIZE 2 or 4

And so on.

Basically, two things I'd like to see improved:

1. Do not create 103 files but several, maybe bigger, files.
2. Add compile-time configuration generation for stuff that can't be generated using headers only (see the sketch below).
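[Editor's note: to make the suggestion concrete, a sketch of a hypothetical build-step generator; the file and macro names are Artyom's suggestions, not the library's. Note the caveat Rene raises later in the thread: a program run at build time reports the host's sizes, which differ from the target's when cross-compiling.]

    // Hypothetical generator: compiled and run as a build step, its
    // output redirected to conf_predef.h. Assumes host == target.
    #include <cstdio>

    int main() {
        std::printf("#define BOOST_POINTER_SIZE %u\n",
                    static_cast<unsigned>(sizeof(void*)));
        std::printf("#define BOOST_WCHAR_T_SIZE %u\n",
                    static_cast<unsigned>(sizeof(wchar_t)));
        return 0;
    }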
- What is your evaluation of the implementation?
I think some stuff can be more fine-grained.
--------------------------------------------

For example, the OS version for Windows... There are macros like _WIN32_WINNT (AFAIR) that give you quite a nice OS version.

Another thing is, for example, the libstdc++ version. I'd rather expect to have an SO version like 6.x - similar to libstdc++.so.6.x. It can be configured by mapping year -> version. Or use the compiler version - the library is tied to a specific compiler. It is not really intuitive, because I do not remember in what year gcc-4.5 was released, but I do know when, for example, unique_ptr was introduced. In this case I think it can be derived from the gcc version and the fact that the libstdc++ library is used.

Other things, like for example LANGUAGE_STDCPP... We already have __cplusplus, which does almost exactly the same as this macro. Why do we need it and what added value does it give us?

Architecture
------------

I noticed several problems:

1. ARM has several architectures; ARM and ARMEL should be distinguished.
2. There are MIPS and MIPSEL - little-endian MIPS (Debian has both, for example).

I'd suggest checking all architectures carefully.

Compiler:
---------

There is no "Cygwin" compiler or "MinGW" compiler; it is GCC. Should be fixed.

OS:
---

- I'd rather see Darwin than MACOS - MACOS is not good. It should be MACOSCLASSIC and MACOS, or even better: MACOS and DARWIN. Mac OS 9 and Mac OS 10 share almost nothing; on the other hand, Darwin and Mac OS X are almost the same.

Testing...
----------

1. Unfortunately I see none of this. I understand that it is not simple, but you may probably use some tools like `uname` and compare their output, and maybe use, for example, the bjam toolset and bjam os and compare them with the detected compiler. From what I can see, all tests are manual. That is not good.
2. I'd like to read about how you tested this project... I don't see any of it in the documentation. Have you set up at least several VMs, using for example qemu, with different architectures? Have you tested this on several OSs?
- What is your evaluation of the documentation?
It needs to be improved. For example, what does the BOOST_LIBSTD_CXX version mean? Is BOOST_OS_WINDOWS defined when Cygwin is used? Only by looking at the source could I figure this out. I think each macro should state its exact meaning for users who are interested.
- What is your evaluation of the potential usefulness of the library?
It should be very useful, because it standardizes different macros and makes it easier to figure out, for example, how to check whether we are using Solaris.
- Did you try to use the library? With what compiler? Did you have any problems?
Yes, I did, and I had some problems:

1. Almost all tests do not even compile on Cygwin...
2. Under MinGW gcc-4.5.3 I see the following output from info_as_cpp, for example:

** Detected **
....
BOOST_LIBSTD_GNU = 410400028 (41,4,28) | GNU
...

How should I treat it?

However, I mostly read the code and the documentation.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
2-3 hours of code and documentation reading + writing this e-mail.
- Are you knowledgeable about the problem domain?
Yes, I needed such macros many times and created such configurations.
And finally, every review should answer this question:
- Do you think the library should be accepted as a Boost library?
Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
Unfortunately, no. But I strongly recommend improving the library and submitting it for a further review.

Problems:

1. Too many macros are not as fine-grained as they should be, or they generate unexpected or surprising definitions.
2. The documentation must be improved, because for now it is very unclear.
3. Many problems with different definitions - like there being no such thing as a Cygwin compiler - should be fixed.
4. 103 header files would slow down the build process.
5. No real test suite exists.

So the general direction is good, but the library is far from being production-ready.

Artyom Beilis
--------------
CppCMS - C++ Web Framework: http://cppcms.com/
CppDB - C++ SQL Connectivity: http://cppcms.com/sql/cppdb/

On Mon, Feb 20, 2012 at 7:06 PM, Artyom Beilis <artyomtnk@yahoo.com> wrote:
For example, the OS version for Windows... There are macros like _WIN32_WINNT (AFAIR) that give you quite a nice OS version.
_WIN32_WINNT, WINVER, NTDDI_VERSION, etc. are macros that are defined by user code to state what the minimum OS being targeted is. If not defined by user code, they default to the SDK version. (E.g. if you're using the Vista SDK, it will default to Vista and make available all the Vista APIs.)

I'm not quite sure whether you meant you wanted to detect the OS that the code is currently being compiled under, or the OS being targeted, but these macros can only be used for the latter. (I assume that's what you meant anyway, as I can't think of a use case for the former, but I figured I'd check anyway...)

If detecting the version being targeted is what you meant, then I agree that could be useful, as the state of the macros is kind of messy*, and even if you define NTDDI_VERSION, in some cases it won't correctly define _WIN32_WINNT and/or WINVER, despite MSDN saying that it should. But I digress...

* http://blogs.msdn.com/b/oldnewthing/archive/2007/04/11/2079137.aspx
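[Editor's note: a sketch of how those targeting macros are typically used; 0x0600 and 0x06000000 are the documented values for Windows Vista.]

    /* Request Vista-level APIs before any SDK header is seen. */
    #define _WIN32_WINNT  0x0600      /* Vista */
    #define NTDDI_VERSION 0x06000000  /* NTDDI_VISTA */
    #include <windows.h>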
There is no "Cygwin" compiler or "MINGW" compiler, it is GCC. Should be fixed
I think that it is useful, even if it's in the wrong category. I also think it could be improved. Specifically, I would like to be able to differentiate between MinGW and MinGW-w64.
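[Editor's note: one plausible way to tell the two apart, as a sketch rather than the library's approach: both toolchains define __MINGW32__, but to my knowledge only MinGW-w64 defines __MINGW64_VERSION_MAJOR, and only after a system header has pulled in _mingw.h.]

    #include <stdlib.h>  /* pulls in _mingw.h on both toolchains */
    #if defined(__MINGW64_VERSION_MAJOR)
    /* MinGW-w64 (32- or 64-bit target) */
    #elif defined(__MINGW32__)
    /* classic MinGW.org */
    #endif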

On 02/20/2012 01:56 PM, Joshua Boyce wrote:
I think that it is useful, even if it's in the wrong category. I also think it could be improved. Specifically, I would like to be able to differentiate between MinGW and MinGW-w64.
It doesn't really make sense. Just use BOOST_CXX_GCC && BOOST_OS_WINDOWS.

On 2/20/2012 2:06 AM, Artyom Beilis wrote:
This is my review:
Thank you for taking the time to do a review. Much appreciated!
- What is your evaluation of the design?
I personally do not like it for two reasons:
When you include a header like <boost/predef.h> you get 103 files included... 103 stats, fopens, fcloses, reads, parses and so on.
More than that, consider build systems that have to stat 103 files for each cpp file compiled.
That thought did cross my mind when deciding on the header structure. But, given how many compilers support some form of precompiled inclusion, I decided that it was OK to go with the modular arrangement, especially for the much easier maintainability of the library.

One option I can think of is to provide a single concatenated header that is generated from the current modular headers. This, of course, is a bit more work on the library side, and it is considerably harder for users to humanly understand one big concatenated header file. Hence this is something that I would seriously consider adjusting if the number of headers becomes a real, measurable problem. Which also suggests that I should add some header-parsing performance tests.
I know that this library was created as "header only", but Boost has enough header-only configurations and not a single normal compile-time configuration.
I do mention in the future tasks for the library that having some form of pre-compile external configuration as a way to add configuration for external libraries would be a good idea. But the scope, and hence goals, of the current set of definitions is limited to preprocessor defined symbols.
I assume that a better configuration can be created using compilation rather than explicit defines tests; it would also be much more extensible for future directions.
For all the definitions that this library currently provides, I can't see how a compiled test could do any better, since the compiled test would do exactly what the library currently does: the definitions the library is based on are equivalent to what you could discover outside of the preprocessor.
I think it would be great if a single file like <boost/conf_predef.h> would include all definitions. Something that can be generated at build time and would include all needed defines, making
the "include" much faster and easier to read.
I disagree that this would make it easier to read. If one has to generate multiple "conf_predef.h" files, because one is dealing with multiple configurations, then it becomes less readable, as one is trying to keep in one's head, and manage with the build system, multiple versions of the same file. I've been there with auto-conf libraries, and written build system management for this, and it's a management pain to juggle the multi-conf design (from a user point of view).
Also, I'd merge multiple files like boost/predef/os/* into a single one to improve compilation times.
I mentioned above how I would consider doing that. Of course the drawback is that one then ends up with a large single file that one has to search around in to find what one may be looking for. Hence making documentation paramount :-)
Generating a header at build time would also allow extending the macros to new definitions that can't be done today, for example:
- BOOST_POINTER_SIZE 4 or 8 (or even 2)
- BOOST_WCHAR_T_SIZE 2 or 4
There is a fine line between the current goals of this library and the goals of the Boost Config library. Currently this library is not about defining "features", as that is what Boost Config does. I'm not objecting to having the above definitions.. just need to point out that distinction..
And so on.
The "so on" would be subject to the goals ;-)
Basically two things I'd like to see improved:
1. Do not create 103 files but several maybe bigger files.
If I'm going to do that, I'd create just one file. After all if file system performance is the reason for doing it then it makes more sense to eliminate as much of it as possible.
2. Add compile time configuration generation for stuff that can't be generated using headers only.
Yes, as mentioned in the "todo" section.
- What is your evaluation of the implementation?
I think some stuff can be more fine-grained.
For example, the OS version for Windows... There are macros like _WIN32_WINNT (AFAIR) that give you quite a nice OS version.
It certainly would be nice to deal with all the complexity of the Windows version definitions. I'll add that to some future release.
Another thing is, for example, the libstdc++ version. I'd rather expect to have an SO version like 6.x - similar to libstdc++.so.6.x. It can be configured by mapping year -> version.
I shied away from doing that, as it then becomes a continual maintenance task. It especially means that users will be put in a situation where the predef library lags behind in having mapped version definitions. Of course, ideally libstdc++ would supply such a mapped version number directly and I could use that. But I failed to find such a definition.
Or use the compiler version - the library is tied to a specific compiler.
That is not sufficient. You can have a compiler that can target multiple configurations, and hence this version may not have any relevance to the library one is actually compiling against. Unless you meant something else?
It is not really intuitive because I do not remember in what year
gcc-4.5 was released but I do know when for example unique_ptr was introduced.
Actually, you'd have to look up more than just the year. You would need the exact date to be sure you have the correct condition, as any particular feature may have been added late in a year with multiple releases, as has been the case for all gcc releases <http://gcc.gnu.org/releases.html>.
In this case I think it can be derived from the gcc version and the fact that the libstdc++ library is used.
Other things, like for example LANGUAGE_STDCPP... We already have __cplusplus, which does almost exactly the same as this macro. Why do we need it and what added value does it give us?
I think the same can be argued for all the definitions that this library provides. It gives us the ability to check for the version, or just presence, with a *single* *consistent* *readable* interface.
Architecture ------------
I noticed several problems:
1. ARM has several architectures; ARM and ARMEL should be distinguished.
Sorry if I don't know.. But what exactly is ARMEL in relation to ARM? In particular, how does it relate to the various ARM chips produced <http://en.wikipedia.org/wiki/ARM_architecture>? And how would I detect ARMEL?
2. There are MIPS and MIPSEL - little-endian MIPS (Debian has both, for example).
Same question as above <http://en.wikipedia.org/wiki/MIPS_architecture>. Hm.. Or is "EL" some tag for specifying the target endianness? Because if it is, adding an orthogonal endian definition is something I'd like to do in the near future. Although, like the "word" size, it is not something as easily done as just enumerating architectures.
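[Editor's note: a hedged sketch of what such an orthogonal endian predef might key on, using macros that newer GCC and Clang predefine; other compilers would need their own tests, and the BOOST_ENDIAN_* names here are illustrative only.]

    #if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
    #  define BOOST_ENDIAN_BIG 1
    #elif defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
    #  define BOOST_ENDIAN_LITTLE 1
    #endif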
I'd suggest checking all architectures carefully.
That's always the case for all the predefs ;-)
Compiler: ---------
There is no "Cygwin" compiler or "MINGW" compiler, it is GCC. Should be fixed
Right. My mistake there. Makes more sense to test for OS_CYGWIN and OS_WINDOWS.
OS: ---
- I'd rather see Darwin than MACOS - MACOS is not good.
It should be MACOSCLASSIC and MACOS or even better:
MACOS and DARWIN
Mac OS 9 and Mac OS 10 share almost nothing; on the other hand, Darwin and Mac OS X are almost the same.
Well, given recent developments, it might make more sense to just use MACOS and OSX. But yes, I'll consider better naming for those. Although given that the current definitions only cover 9.0 vs 10.0, it might not yield much of a gain in clarity.
Testing... ---------
1. Unfortunately I see none of this.
There is testing for at least the functionality of the version macro and the decomposition utility macros, so I object to the "none" characterization.

Testing for this library was something I asked about previously on this list, and there were not any really attainable solutions. The solutions boiled down to either simulating the target configuration or human verification. Unfortunately the former is essentially a tautological test, as the test specifies both the definition and the check. Human verification has the benefit that it is distributed, and in this use case doesn't need to be repeated.
I understand that it is not simple, but
Indeed :-\
you may probably use some tools like `uname` and compare their output, and maybe use, for example, the bjam toolset and bjam os and compare them with the detected compiler.
No, I can't. Any external mechanism would almost certainly test the host configuration, not the target configuration. And in the commonplace cross-compilation world we now live in, this is not something that can be dismissed. Hence the only likely external test would involve running the compilers themselves, and those would require prior knowledge of what you are targeting. And, consequently, likely running into the tautological test problem.
From what I can see all tests are manual. It is not good.
Not all tests.
2. I'd like to read about how you tested this project... I don't see any of it in the documentation.
I could certainly add such documentation. Although it would be continually expanding.
Have you set up at least several VMs, using for example qemu, with different architectures? Have you tested this on several OSs?
The current testing only involved the automated testing I mention above for the version and decomposition macros. Plus testing in the platforms that I have immediate access to: Windows, OSX, Linux (OpenSUSE), and iOS.
- What is your evaluation of the documentation?
It needs to be improved. For example, what does the BOOST_LIBSTD_CXX version mean?
I do point to the library's web site <http://libcxx.llvm.org/>. Is that not enough to answer what it is? And the macro does say "If available version number as major, minor, and patch". Although now that I reread the definition, it's actually only version and patch that are available.
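[Editor's note: for reference, libc++ announces itself with _LIBCPP_VERSION once one of its headers has been included, which is presumably what the detection keys on; a sketch.]

    #include <ciso646>  /* cheap header; defines the library's macros */
    #if defined(_LIBCPP_VERSION)
    /* libc++ in use; _LIBCPP_VERSION encodes its version */
    #endif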
Is BOOST_OS_WINDOWS defined when Cygwin is used?
I'll add a note about that when I find out whether gcc on Cygwin defines the Windows macro.
Only by looking at the source could I figure this out. I think each macro should state its exact meaning for users who are interested.
Hm, exact meaning would boil down to including the source in the documentation. But it would be worth it to list which predefined macros each macro uses in its implementation. That should cover most cases without making the documentation just parrot the source.
1. Almost all tests do not even compile on Cygwin...
Could you elaborate? What cygwin version are you trying?
2. Under MinGW gcc-4.5.3 I see the following output from info_as_cpp:
For example:
** Detected ** .... BOOST_LIBSTD_GNU = 410400028 (41,4,28) | GNU ...
How should I treat it?
The docs say "Version number available as year (from 1970), month, and day." I.e. the above says libstdc++ was released on 2011/04/28.
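[Editor's note: a worked decomposition of that value under the 2/2/5 decimal split, to make the encoding concrete.]

    /* 410400028 under the 2/2/5 split:                               */
    /*   410400028 / 10000000       == 41 -> 1970 + 41 == 2011 (year) */
    /*   (410400028 / 100000) % 100 == 4  -> April (month)            */
    /*   410400028 % 100000         == 28 -> 28th (day)               */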
- Do you think the library should be accepted as a Boost library?
Unfortunately, No.
Sorry to hear that..
But I strongly recommend improving the library and submitting it for a further review.
Problems:
1. Too many macros are not as fine-grained as they should be, or they generate unexpected or surprising definitions.
Do you have more examples than the ones you mentioned above?
2. Documentation must be improved because for now it is very unclear.
All documentation can always be improved ;-) Did my answers above address this for you?
3. Many problems with different definitions - like there being no such thing as a Cygwin compiler - should be fixed.
4. 103 header files would slow down the build process.
5. No real test suite exists.
Is there more I can expand on to address those? -- Rene Rivera

When you include a header like <boost/predef.h>
you get 103 files included... 103 stats, fopens, fcloses, reads, parses and so on.
More than that, consider build systems that have to stat 103 files for each cpp file compiled.
One option I can think of is to provide a single concatenated header that is generated from the current modular headers. This, of course is a bit more work on the library side, and is considerably harder for users to humanly understand one big concatenated header file.
But easier for the compiler... :-)
Hence this is something that I would seriously consider adjusting if the number of headers becomes a real measurable problem. Which also suggests that I should add some header parsing performance tests.
The problem is not header-parsing performance only. Every build system checks dependencies; that means it stats() all the files, which makes the process slower. So even if the process of parsing and stat-ing takes only 0.1s, it becomes critical across many files.
For just all the definitions that this library currently does I can't see how a compiled test could do any better.
The point is that 99% of the stuff is already available from Boost.Config, so in its current version it becomes yet another Boost.Config.
the "include" much faster and easier to read.
I disagree that this would make it easier to read. If one has to generate multiple "conf_predef.h" files, because one is dealing with multiple configurations,
See, when you deal with multiple configurations you still need multiple libraries, and even headers, that are not compatible. Almost all big libraries around use "configuration-defined headers"; I think Boost is the only "big" library that does not generate some "config.h". But I understand that this is out of the current scope, and this is not what prevented me from voting yes.
And so on.
The "so on" would be subject to the goals ;-)
"And so on" covers every configuration part that can't be done using a header-only solution... And that is huge.
1. Do not create 103 files but several maybe bigger files.
If I'm going to do that, I'd create just one file. After all if file system performance is the reason for doing it then it makes more sense to eliminate as much of it as possible.
Ok...
Or use the compiler version - the library is tied to a specific compiler.
That is not sufficient. You can have a compiler that can target multiple configurations, and hence this version may not have any relevance to the library one is actually compiling against. Unless you meant something else?
Generally gcc uses its own library unless it uses some third-party software like STLport. So the gcc version, plus a test of whether we are using libstdc++, gives us the exact version we need. It has nothing to do with multiple configurations. For example, libstdc++ is released with a compiler, and documentation of its version relates to the compiler.
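[Editor's note: a sketch of that fallback; the macro name is illustrative, while __GLIBCXX__ is defined by any libstdc++ header and _STLPORT_VERSION by STLport.]

    #include <cstddef>  /* any libstdc++ header defines __GLIBCXX__ */
    #if defined(__GLIBCXX__) && defined(__GNUC__) && !defined(_STLPORT_VERSION)
    /* libstdc++ shipped with this compiler: reuse the compiler's version. */
    #  define LIBSTD_GNU_VERSION \
         (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
    #endif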
It is not really intuitive because I do not remember in what year
gcc-4.5 was released but I do know when for example unique_ptr was introduced.
Actually, you'd have to look up more than just the year. You would need the exact date to be sure you have the correct condition, as any particular feature may have been added late in a year with multiple releases, as has been the case for all gcc releases <http://gcc.gnu.org/releases.html>.
More than that, there can be a situation where year(gcc-4.6.0) < year(gcc-4.5.7, a maintenance release), while from a library point of view version(gcc-4.6.0) > version(gcc-4.5.7). So it just becomes inconsistent from a features point of view.
Architecture ------------
I noticed several problems:
1. ARM has several architectures; ARM and ARMEL should be distinguished.
Sorry if I don't know.. But what exactly is ARMEL in relation to ARM? In particular, how does it relate to the various ARM chips produced <http://en.wikipedia.org/wiki/ARM_architecture>? And how would I detect ARMEL?
I don't know how to distinguish them... But in general, Debian for example distinguishes between the two architectures; technically, from a CPU point of view they are the same, but they use totally different ABIs, so from a user perspective they are different, incompatible architectures.
2. There are MIPS and MIPSEL - little-endian MIPS (Debian has both, for example).
Same question as above <http://en.wikipedia.org/wiki/MIPS_architecture>. Hm.. Or is "EL" some tag for specifying the target endianness? Because if it is, adding an orthogonal endian definition is something I'd like to do in the near future. Although, like the "word" size, it is not something as easily done as just enumerating architectures.
Same as in the case of ARM: from a user point of view they are incompatible architectures.
- I'd rather see Darwin than MACOS - MACOS is not good.
Well, given recent developments, it might make more sense to just use MACOS and OSX. But yes, I'll consider better naming for those. Although given that the current definitions only cover 9.0 vs 10.0, it might not yield much of a gain in clarity.
That is why MACOS - for classic Mac OS - and DARWIN for Mac OS 10, 11, etc. BTW, for example, I use a VM with Darwin 8 (== Mac OS X 10.4); it is not Mac OS, but it is an Apple operating system.
Testing... ---------
1. Unfortunately I see none of this.
There is testing for at least the functionality of the version macro and the decomposition utility macros, so I object to the "none" characterization.
These are very basic and cosmetic tests that do not actually test the core functionality.
Testing for this library was something I asked about previously on this list, and there were not any really attainable solutions. The solutions boiled down to either simulating the target configuration or human verification. Unfortunately the former is essentially a tautological test, as the test specifies both the definition and the check. Human verification has the benefit that it is distributed, and in this use case doesn't need to be repeated.
Small examples:

1. You can use Boost.Build, which already defines almost 80% of those parameters for the target OS, and check against it.
2. You can do particular checks when target == host and use uname - at least for native builds it would be checked, and for non-native builds you can assume that the definitions would not change during cross-compilation.

There are more ideas. I understand that they would not solve all use cases and would leave some cases unsolved, but at least it would give some automated testing for the core system.
I understand that it is not simple, but
Indeed :-\
This library should not be simple ;-)
The current testing only involved the automated testing I mention above for the version and decomposition macros. Plus testing in the platforms that I have immediate access to: Windows, OSX, Linux (OpenSUSE), and iOS.
I would strongly recommend taking a look at OS Zoo and qemu. They give you at least 4-5 architectures (x86, x86_64, mips(el), arm, sparc); you can also install OpenSolaris and FreeBSD in a VM, and of course Darwin 8. So if all your tests are manual, at least make sure they work as expected.
1. Almost all tests do not even compile on Cygwin...
Could you elaborate? What cygwin version are you trying?
1.7. I'll give you the error when I get to a PC with Cygwin.
2. Under MinGW gcc-4.5.3 I see the following output from info_as_cpp:
For example:
** Detected ** .... BOOST_LIBSTD_GNU = 410400028 (41,4,28) | GNU ...
How should I treat it?
The docs say "Version number available as year (from 1970), month, and day." I.e. the above says libstdc++ was released on 2011/04/28.
OK, it was not clear to me. I'd expect in such a case something like (2011,4,28), which is much clearer. But OK, I understand it now.
- Do you think the library should be accepted as a Boost library?
Unfortunately, No.
Sorry to hear that..
But I strongly recommend improving the library and submitting it for a further review.
Problems:
1. Too many macros are not as fine-grained as they should be, or they generate unexpected or surprising definitions.
Do you have more examples than the ones you mentioned above?
At least when I looked at the things I'm familiar with:

gcc - OK
linux - OK
libstdc++ - bad versioning
cxx_cygwin - no such thing
os_windows - 1 and not a version

I found that at least 50% of them gave unexpected results. The others I just could not check; that is why I asked on what platforms you have manually tested the library. So I can't say how many more problems there are, but there are too many for the small test case that I looked at.
2. Documentation must be improved because for now it is very unclear.
All documentation can always be improved ;-) Did my answers above address this for you?
The problem is that 80% of the macros have such brief documentation that I can't really understand how to use them, what version to check, what the ranges are, examples, etc. The particular examples were only instances of the general problem.
3. Many problems with different definitions - like there being no such thing as a Cygwin compiler - should be fixed.
4. 103 header files would slow down the build process.
5. No real test suite exists.
Is there more I can expand on to address those?
I mostly think that much deeper testing is required. I mean, every macro that can be tested using available tools should be tested, and the results validated with users. I don't expect that you'll be able to test, for example, OpenVMS or z/390, but there is no reason that basic things should behave unexpectedly.

I understand that this would increase the amount of work by 200-300%, if not more, but my current feeling is that the library does not seem to be ready. I'm sorry - it is a very good direction, but I feel that more work is required, and that is why I don't think it is ready for Boost.

Artyom Beilis
--------------
CppCMS - C++ Web Framework: http://cppcms.com/
CppDB - C++ SQL Connectivity: http://cppcms.com/sql/cppdb/

5. No real test suite exists
Is there more I can expand on to address those?
I mostly think that much deeper testing is required. I mean, every macro that can be tested using available tools should be tested, and the results validated with users.
I don't expect that you'll be able to test, for example, OpenVMS or z/390, but there is no reason that basic things should behave unexpectedly.
I'll explain now in a little more depth what I think is currently wrong. I tested info_as_cpp on my own small server + compiler farm I use to check CppCMS. So far:

Ok
--
Linux x86 g++
Solaris g++
Solaris g++/STLport

Not Ok
------
Linux x86_64 g++ (IA64 detected)
Linux x86_64 g++/STLport (libstdc++ and IA64 detected)
Linux x86_64 clang (gcc and IA64 detected)
Darwin x86 g++ (Unix is not detected)
Solaris x64 sunCC (does not build)
FreeBSD x86 g++ (does not build)

Let's start:
------------

Linux x86_64 (Ubuntu 11.10) g++-4.6.1
-------------------------------------

Ok
BOOST_ARCHITECTURE_AMD64 = 1 (0,0,1) | American Micro Devices AMD 64

Not OK!
BOOST_ARCHITECTURE_IA64 = 1 (0,0,1) | Intel IA-64

I do not run Itanium!

Linux x86_64 (Ubuntu 11.10) clang-2.9
-------------------------------------
(IA64 as well)

Ok
BOOST_CXX_CLANG = 20900000 (2,9,0) | Clang

Not Ok
BOOST_CXX_GNUC = 40200001 (4,2,1) | Gnu GCC C/C++

I don't use gcc!

Linux x86_64 (Ubuntu 11.10) g++-4.6.1/STLport
---------------------------------------------
(IA64 as well)

Not Ok
BOOST_LIBSTD_GNU = 410900003 (41,9,3) | GNU

Ok
BOOST_LIBSTD_STLPORT = 50200001 (5,2,1) | STLport

I don't think BOOST_LIBSTD_GNU should be defined, as I use STLport.

Darwin 8.0.1 x86 g++-4.0.0
--------------------------

Ok
BOOST_OS_MACOS = 100000000 (10,0,0) | Mac OS

Not Ok!
BOOST_OS_UNIX = 0 | Unix Environment

Darwin/Mac OS X is Unix.

Open Solaris x86
----------------

gcc-3.4 and sunCC/STLport are OK, but:

    sunCC -I ../../.. info_as_cpp.cpp
    boost/predef/library/std/roguewave.h, line 37: Error: The function "BOOST_PREDEF_MAKE_FF_FF_FF" must have a prototype.

But I'm OK with that, as Boost does not work with sunCC without STLport.

FreeBSD 8.0 x86
---------------

    g++ -I ../../.. info_as_cpp.cpp
    In file included from ../../../boost/predef/os.h:14,
                     from ../../../boost/predef.h:15,
                     from info_as_cpp.cpp:62:
    ../../../boost/predef/os/bsd.h:152: error: 'BOOST_OS_DRAGONFLY_BSD' was not declared in this scope
    ../../../boost/predef/os/bsd.h:154: error: 'BOOST_OS_BSDI_BSD' was not declared in this scope
    ../../../boost/predef/os/bsd.h:155: error: 'BOOST_OS_NET_BSD' was not declared in this scope
    ../../../boost/predef/os/bsd.h:156: error: 'BOOST_OS_OPEN_BSD' was not declared in this scope

---------------------------
So that is my real problem.
---------------------------

Artyom Beilis

On 02/21/2012 11:48 AM, Artyom Beilis wrote:
Linux x86_64 g++/stlport (libstdc++ and IA64 detected) Linux x86_64 clang (gcc and ia64 detected)
The structure of the library should already enforce mutually exclusive values and prevent this from happening. The fact that it doesn't is a sign of poor design and means that the library essentially needs to be rewritten.
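[Editor's note: the kind of exclusion being asked for, sketched for the Clang/GCC pair with illustrative names — Clang defines __GNUC__ for compatibility, so the GCC test has to rule Clang out explicitly, or Clang has to be detected first.]

    #if defined(__clang__)
    #  define DETECTED_CLANG 1
    #elif defined(__GNUC__)
    #  define DETECTED_GNUC 1  /* real GCC only; Clang excluded above */
    #endif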

----- Original Message -----
From: Artyom Beilis <artyomtnk@yahoo.com>
To: boost@lists.boost.org
Sent: Tuesday, February 21, 2012 12:48 PM
Subject: Re: [boost] [PREDEF] Review for the Boost.Predef library by Rene Rivera
5. No real test suite exists
Is there more I can expand on to address those?
I mostly think that much deeper testing is required. I mean, every macro that can be tested using available tools should be tested, and the results validated with users.
I don't expect that you'll be able to test, for example, OpenVMS or z/390, but there is no reason that basic things should behave unexpectedly.
I'll explain now in a little more depth what I think is currently wrong:
I tested info_as_cpp on my own small server + compiler farm I use to check CppCMS. So far:
Ok: Linux x86 g++; Solaris g++; Solaris g++/STLport
Not Ok: Linux x86_64 g++ (IA64 detected); Linux x86_64 g++/STLport (libstdc++ and IA64 detected); Linux x86_64 clang (gcc and IA64 detected); Darwin x86 g++ (Unix is not detected); Solaris x64 sunCC (does not build); FreeBSD x86 g++ (does not build)
Let's start: ------------
[snip]
Hello Rene,

I'd be glad to hear a response on these points. Thank you.

Artyom Beilis
--------------
CppCMS - C++ Web Framework: http://cppcms.com/
CppDB - C++ SQL Connectivity: http://cppcms.com/sql/cppdb/

On 02/21/2012 11:09 AM, Artyom Beilis wrote:
When you include a header like <boost/predef.h>
you get 103 files included... 103 stats, fopens, fcloses, reads, parses and so on.
More than that, consider build systems that have to stat 103 files for each cpp file compiled.
One option I can think of is to provide a single concatenated header that is generated from the current modular headers. This, of course, is a bit more work on the library side, and it is considerably harder for users to humanly understand one big concatenated header file.
But easier for the compiler... :-)
I am strongly opposed to this! First of all, it is the job of the compiler to compile. In theory, it doesn't matter how many files it needs to open to get the job done. However, it matters for us humans. Is this just overly pessimistic, premature optimization?
Hence this is something that I would seriously consider adjusting if the number of headers becomes a real, measurable problem. Which also suggests that I should add some header-parsing performance tests.
The problem is not header-parsing performance only. Every build system checks dependencies; that means it stats() all the files, which makes the process slower.
So even if the process of parsing and stat-ing takes only 0.1s, it becomes critical across many files.
What you are suggesting effectively sounds like throwing away all the structure and file organization that makes sense for us mere humans, just to compile a little faster. Please back it up with some proof. Additionally, I would like to add that a fine-grained header file organization also leads to fewer includes; one might not always need everything defined in the library in one TU. I claim that having those big headers is what slows the process down. So, how big is the impact? Did you do some real measurements?

On 02/22/2012 09:28 AM, Thomas Heller wrote:
What you are suggesting effectively sounds like throwing away all the structure and file organization that makes sense for us mere humans, just to compile a little faster. Please back it up with some proof. Additionally, I would like to add that a fine-grained header file organization also leads to fewer includes; one might not always need everything defined in the library in one TU. I claim that having those big headers is what slows the process down.
Detecting the compiler is a global process, and requires knowledge about all the other compilers. In particular, the order in which compilers are detected matters, so that Clang is not detected as GCC. So it makes sense to only have one header for all compilers; all other approaches are likely to be dangerous or broken. Having one header per compiler doesn't make sense. It should never be allowed to include just one compiler header.

On 2/23/2012 4:52 AM, Mathias Gaunard wrote:
On 02/22/2012 09:28 AM, Thomas Heller wrote:
What you are suggesting effectively sounds like throwing away all the structure and file organization that makes sense for us mere humans, just to compile a little faster. Please back it up with some proof. Additionally, I would like to add that a fine-grained header file organization also leads to fewer includes; one might not always need everything defined in the library in one TU. I claim that having those big headers is what slows the process down.
Detecting the compiler is a global process, and requires knowledge about all the other compilers.
In particular, the order in which compilers are detected matters, so that Clang is not detected as GCC.
So it makes sense to only have one header for all compilers; all other approaches are likely to be dangerous or broken.
Having one header per compiler doesn't make sense. It should never be allowed to include just one compiler header.
The current Boost.Config has one header per compiler, so obviously it does make sense. Another header attempts to figure out the compiler being used, and then the particular compiler's header is included. What is so arcane about that? The words 'always' and 'never' should always be banned from computer programming discussions. They are words which are never correct to use.

On 02/24/2012 03:33 AM, Edward Diener wrote:
The current Boost.Config has one header per compiler, so obviously it does make sense. Another header attempts to figure out the compiler being used, and then the particular compiler's header is included. What is so arcane about that?
Those are not "real" headers. You're not allowed to ever include one of those directly. Including any of those directly would open a whole can of worms. They don't even have include guards or anything of the sort. It could be argued that the extension should be changed from .hpp to .inl.

On 2/24/2012 7:49 AM, Mathias Gaunard wrote:
On 02/24/2012 03:33 AM, Edward Diener wrote:
The current Boost.Config has one header per compiler, so obviously it does make sense. Another header attempts to figure out the compiler being used, and then the particular compiler's header is included. What is so arcane about that?
Those are not "real" headers. You're not allowed to ever include one of those directly.
Including any of those directly would open a whole can of worms. They don't even have include guards or anything of the sort.
It could be argued that the extension should be changed from .hpp to .inl.
Or rather .ipp, as that's what other Boost libraries use for internal headers. Or inside a detail directory. I should also point out that the same really applies to my library: the non-top-level headers could be rearranged as implementation details. And, as I mentioned before, they could be moved so that concatenated headers are used. -- Rene Rivera

On 2/23/2012 8:33 PM, Edward Diener wrote:
On 2/23/2012 4:52 AM, Mathias Gaunard wrote:
On 02/22/2012 09:28 AM, Thomas Heller wrote:
What you are suggesting effectively sounds like throwing away all the structure and file organization that makes sense for us mere humans, just to compile a little faster. Please back it up with some proof. Additionally, I would like to add that a fine-grained header file organization also leads to fewer includes; one might not always need everything defined in the library in one TU. I claim that having those big headers is what slows the process down.
Detecting the compiler is a global process, and requires knowledge about all the other compilers.
In particular, the order in which compilers are detected matters, so that Clang is not detected as GCC.
So it makes sense to only have one header for all compilers; all other approaches are likely to be dangerous or broken.
Having one header per compiler doesn't make sense. It should never be allowed to include just one compiler header.
The current Boost.Config has one header per compiler, so obviously it does make sense. Another header attempts to figure out the compiler being used, and then the particular compiler's header is included. What is so arcane about that?
I think the objection to the way Predef does it is that in order to have all the negative definitions (#define XYZ 0), all the headers for a particular category need to be included. -- Rene Rivera

I have a couple of comments about Boost.Predef:

Undefined vs null
=================

You define macros for the pertinent platform (OS, CPU, compiler, standard) and set their value to the appropriate version number. If no version number could be determined, 0.0.1 is used instead. But you also define macros for all non-pertinent platforms and set their value to 0.0.0.

I know that you are doing this to allow the compiler (rather than the preprocessor) to use the macros. However, this design decision makes the by far most common use case (#ifdef MACRO) more difficult in order to support a rare use case (if (MACRO)). This is not how developers are used to working with platform macros, and it will be an eternal source of confusion.

I encourage you to reconsider this design decision, and instead only define macros that are relevant for the platform. This is more in line with the expectations of your future users.

Version layout
==============

You encode the version numbers as decimal numbers in the range from 0 to 1000000000. If you encode them as hexadecimal numbers instead, you can utilize the full range from 0 to 2^32.

Extension macros
================

Some macros, such as __GNUC__, _MSC_VER, and _WIN32, serve a dual purpose. They are partly used to identify a platform, and partly to enable certain extensions in header files. Other compilers will therefore define these macros as well when they want access to these extensions. For instance, clang defines __GNUC__ (and related macros).

If you wish BOOST_CXX_GNUC only to be defined for GCC, then you need to exclude all other compilers that may set the __GNUC__ macro. For example:

    #if defined(__GNUC__) && !defined(__clang__)

As your predef/compiler/gcc.h file has to know the clang macro too, you might as well put all compiler macros into a single file (as discussed earlier in the thread.)

C++ version
===========

Your choice of using 0.0.YYYY as the version layout for C++ standards is quite unusual. I understand that you are doing it for technical reasons (the patch number is the only entry that can contain four-digit numbers), but I still think that a more intuitive solution is needed.

One solution could be to use an encoding similar to your BOOST_PREDEF_MAKE_YYYY_MM_DD macro. Another solution could be to define an explicit macro for each supported standard. E.g.

    #if __cplusplus - 0 >= 199711L
    #  define BOOST_LANGUAGE_CXX98 1
    #endif

Indentation of preprocessor directives
======================================

Old preprocessors will only recognize directives if there is a hash (#) in the first column of the line. The most portable way of indenting directives is therefore to put the hash sign in the first column and then put whitespace between the hash sign and the rest of the directive. E.g. "# if" instead of "  #if".

Else if
=======

You use the #elif directive in some files. Notice that older Borland preprocessors will fail on that directive. The portable way to do else-ifs is the more tedious alternative:

    #if MACRO
    #else
    #  if MACRO2
    #  endif
    #endif

Apropos else-if, in compiler/greenhills.h you are using the incorrect "#else if defined" construct.

Credit
======

As the main author of the predef.sourceforge.net pages, it is pretty clear to me that you have extracted much information from those pages, so I think that a reference to those pages may be in order.

On 2/26/2012 9:06 AM, Bjorn Reese wrote:
I have a couple of comments about Boost.Predef: [...]
I'll get to these, and the other posts tomorrow night. But wanted to mention that..
Credit
======
As the main author of the predef.sourceforge.net pages, it is pretty clear to me that you have extracted much information from those pages, so I think that a reference to those pages may be in order.
Yes, it is my intent to credit everyone. But I was waiting until after the review, as I wanted to avoid having to do this part of the docs multiple times, since I need to account for all the people contributing to the review.

On 2/26/2012 9:06 AM, Bjorn Reese wrote:
I have a couple of comments about Boost.Predef:
Undefined vs null
=================
You define macros for the pertinent platform (OS, CPU, compiler, standard) and set their value to the appropriate version number. If no version number could be determined, 0.0.1 is used instead.
But you also define macros for all non-pertinent platforms and set their value to 0.0.0.
Right.
I know that you are doing this to allow the compiler (rather than the preprocessor) to use the macros.
Actually, that's not the core reason..
However, this design decision makes by far the most common use case (#ifdef MACRO) more difficult in order to support a rare use case (if (MACRO)). This is not how developers are used to working with platform macros, and it will be an eternal source of confusion.
I guess that depends on the developers. The reason the macros are always defined, to zero when not detected, is to reduce the number of checks when writing conditionals. At least in Boost, the common use case goes something like this:

  #if defined(_some_compiler_) && (_some_compiler_ < 123)
  //...
  #endif

Or the longer equivalent:

  #ifdef _some_compiler_
  # if (_some_compiler_ < 123)
    //...
  # endif
  #endif

This is because many times we are interested in a particular version number, not just whether it's one platform vs. another.
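With the always-defined macros the defined() test goes away, though a nonzero guard is still wanted when an undetected (zero) value must not match. A minimal sketch of that usage; the names BOOST_CXX_GCC and BOOST_VERSION_NUMBER are assumed here from the sandbox naming discussed in this thread, and the version is made up:

  #if BOOST_CXX_GCC && (BOOST_CXX_GCC < BOOST_VERSION_NUMBER(4, 6, 0))
  // BOOST_CXX_GCC is 0 on other compilers, so the leading check keeps
  // the comparison from matching there.
  //...
  #endif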
I encourage you to reconsider this design decision, and instead only define macros that are relevant for the platform. This is more in line with the expectations of your future users.
Version layout
==============
You encode the version number as decimal numbers in the range from 0 to 1000000000. If you encode them as hexadecimal numbers instead, you can utilize the full range from 0 to 2^32 - 1.
It's a good thing it's easy to change when one has a single version definition macro :-) At the time of choosing the 2/2/5 split it wasn't a problem, as I wasn't including some of the predefs that could make use of, say, a bigger major number, i.e. the ones that are dates. If we switched to using hex and kept at least the current allowed range, it would have to be at least FF/7F/1FFFF, i.e. a decimal range of [0,255]-[0,127]-[0,131071]. That gives slightly larger ranges in each number, but not enough to get a year into the major number. But it might be worth it just for the patch number range increase.
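For concreteness, a side-by-side sketch of the two packings; both macros are illustrative, not the library's actual definitions:

  /* Decimal 2/2/5 packing: ranges [0,99].[0,99].[0,99999]. */
  #define VERSION_DEC(major, minor, patch) \
      (((major) % 100) * 10000000 + ((minor) % 100) * 100000 + ((patch) % 100000))

  /* Hex 8/7/17-bit packing: ranges [0,255].[0,127].[0,131071]. */
  #define VERSION_HEX(major, minor, patch) \
      ((((major) & 0xFF) << 24) | (((minor) & 0x7F) << 17) | ((patch) & 0x1FFFF))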
Extension macros
================
Some macros, such as __GNUC__, _MSC_VER, and _WIN32, serve a dual purpose. They are partly used to identify a platform, and partly to enable certain extensions in header files. Other compilers will therefore define these macros as well when they want access to these extensions. For instance, clang defines __GNUC__ (and related macros).
Right.
If you wish BOOST_CXX_GNUC only to be defined for GCC,
Which is the distinction between *actual* and *emulated* I mentioned in the other post tonight.
then you need to exclude all other compilers that may set the __GNUC__ macro. For example:
#if defined(__GNUC__) && !defined(__clang__)
As your predef/compiler/gcc.h file has to know the clang macro too, you might as well put all compiler macros into a single file (as discussed earlier in the thread.)
That doesn't follow from the argument. It is possible to instead have the clang.h file both define the Clang compiler, and undef all the other *emulation* predefs. Or, more usefully, also define separate emulation predefs. It's even possible to arrange it so that this works regardless of inclusion order of the gcc.h and clang.h headers. This has the benefit of keeping the knowledge about the real compiler in the real compiler's header only, making it likely to be maintained accurately.
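A sketch of how that order independence could be arranged; the file layout and macro names are illustrative, not the library's actual headers:

  /* clang.h (sketch): claims Clang and retracts any emulated GCC predef,
     no matter which header was included first. */
  #if defined(__clang__)
  # define BOOST_CXX_CLANG 1   /* a real version number in practice */
  # undef  BOOST_CXX_GNUC
  # define BOOST_CXX_GNUC 0
  #endif

  /* gcc.h (sketch): only fills in BOOST_CXX_GNUC if nothing else,
     such as clang.h, has already settled it. */
  #if defined(__GNUC__) && !defined(BOOST_CXX_GNUC)
  # define BOOST_CXX_GNUC 1    /* a real version number in practice */
  #endif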
C++ version
===========
Your choice of using 0.0.YYYY as the version layout for C++ standards is quite unusual. I understand that you are doing it for technical reasons (the patch number is the only entry that can contain four-digit numbers), but I still think that a more intuitive solution is needed.
One solution could be to use an encoding similar to your BOOST_PREDEF_MAKE_YYYY_MM_DD macro.
Another solution could be to define an explicit macro for each supported standard. E.g.
#if __cplusplus - 0 >= 199711L
# define BOOST_LANGUAGE_CXX98 1
#endif
Yes, I'm not that happy with the choice of 0.0.YYYY either. And I'm leaning towards standardizing all the predefs that contain even partial dates on the YYYY.MM.DD - 1970.0.0 format.
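A sketch of what that epoch-offset encoding could look like for the YYYYMM value in __cplusplus; the macro name is made up, and the target layout is the 2/2/5 decimal split:

  /* Pack YYYYMM as (YYYY - 1970).MM.0,
     e.g. 199711L -> 27.11.0 and 201103L -> 41.3.0. */
  #define PREDEF_MAKE_YYYYMM(V) \
      ((((V) / 100 - 1970) % 100) * 10000000 + ((V) % 100) * 100000)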
Indentation of preprocessor directives
======================================
Old preprocessors will only recognize directives if there is a hash (#) in the first column of the line. The most portable way of indenting directives is therefore to put the hash sign in the first column and then put whitespace between the hash sign and the rest of the directive. E.g. "# if" instead of "  #if".
Even though I've been programming for a long time, I don't remember ever having to deal with those. But if they are still in sufficient use, and worth supporting, I'm willing to adjust.
Else if
=======
You use the #elif directive in some files. Notice that older Borland preprocessors will fail with that directive. The portable way to do else-ifs is the more tedious alternative:
#if MACRO
#else
# if MACRO2
# endif
#endif
At least I've run into that one a few times :-) And I'm also willing to adjust accordingly to increase portability. But wouldn't this only be needed for the parts that are considered by the Borland PP, i.e. the common parts and the Borland header itself? (Assuming the others bail after the first #if.)
Apropos else-if, in compiler/greenhills.h you are using the incorrect "#else if defined" construct.
Making a note of that, thanks. OK, ran out of time for tonight :-( Other replies after the C/C++UG meeting tomorrow night.

----- Original Message -----
From: Rene Rivera <grafikrobot@gmail.com>
I guess that depends on the developers. The reason the macros are always defined, to zero when not detected, is to reduce the number of checks when writing conditionals. At least in Boost, the common use case goes something like this:
#if defined(_some_compiler_) && (_some_compiler_ < 123)
//...
#endif
And in your case it would be:

  #if boost_some_compiler < boost_version(1,2,3)

This would give you true if boost_some_compiler == 0, so you would have to write:

  #if boost_some_compiler > 0 && boost_some_compiler < boost_version(1,2,3)

I don't see too much savings.

Artyom

Rene Rivera wrote:
On 2/26/2012 9:06 AM, Bjorn Reese wrote:
But you also define macros for all non-pertinent platforms and set their value to 0.0.0.
Right. [snip]
However, this design decision makes by far the most common use case (#ifdef MACRO) more difficult in order to support a rare use case (if (MACRO)). This is not how developers are used to working with platform macros, and it will be an eternal source of confusion.
I guess that depends on the developers. The reason the macros are always defined, to zero when not detected, is to reduce the number of checks when writing conditionals. At least in Boost, the common use case goes something like this:
#if defined(_some_compiler_) && (_some_compiler_ < 123)
//...
#endif

[snip]

This is because many times we are interested in a particular version number, not just whether it's one platform vs. another.
However, if _some_compiler_ is always set, to 0.0.0 if to nothing else, then the first test is useless. Besides, I frequently write conditional code of the following form:

  #if defined _some_compiler_
  //...
  #endif

That is, the version is not relevant to me, either because I can assume a limited version range, or because I've rejected irrelevant versions elsewhere. If _some_compiler_ is always defined, even if to 0.0.0, then such tests are not possible. Did I miss something?

_____
Rob Stewart robert.stewart@sig.com
Software Engineer using std::disclaimer;
Dev Tools & Components
Susquehanna International Group, LLP http://www.sig.com

On 2012-02-28 08:09, Rene Rivera wrote:
I guess that depends on the developers. The reason the macros are [...]
Not really. There are many projects that use the pre-defined compiler macros, and all that I have seen leave the non-pertinent platform macros undefined. In most cases the projects only need to check for the presence/absence of a platform macro, and only in more special cases do they need to check the version number. Boost itself may be an exception, but if Boost.Predef is going to be available to everybody, then it should be optimized for the common case.
#if defined(_some_compiler_) && (_some_compiler_ < 123)
//...
#endif
If you want a compact syntax then use:

  #if (_some_compiler_ - 0 < 123)
  //...
  #endif
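For readers unfamiliar with the "- 0" idiom, roughly what it buys; the expansions in the comments are illustrative:

  /* If _some_compiler_ is defined but empty, "(_some_compiler_ < 123)"
     expands to "( < 123)", a syntax error, whereas
     "(_some_compiler_ - 0 < 123)" expands to "(- 0 < 123)", which is valid.
     If _some_compiler_ is not defined at all, #if evaluates it as 0 and
     the test becomes "(0 - 0 < 123)". */
  #if (_some_compiler_ - 0 < 123)
  //...
  #endif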
Which is the distinction between *actual* and *emulated* I mentioned in the other post tonight.
Yes. However, in that reply you mentioned the possibility of adding predefs for the emulated compiler. I do not think that is needed, as the emulated stuff is only enabled to gain access to extensions in header files that assume that only gcc supports those extensions. This use of the macros is really a bad design. A better design has been adopted by POSIX, where there are two sets of macros:

_POSIX_VERSION : Tells you the actual version being used.
_POSIX_SOURCE : You tell the compiler to use a specific version.

See: http://pubs.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_02.html

But I am digressing...
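In practice that split looks roughly like this; a sketch, where _POSIX_C_SOURCE is the versioned request macro and the value shown assumes POSIX.1-2001:

  /* You ask for a POSIX revision before including any system header. */
  #define _POSIX_C_SOURCE 200112L
  #include <unistd.h>

  /* The implementation then reports what it actually provides. */
  #if defined(_POSIX_VERSION) && (_POSIX_VERSION >= 200112L)
  /* ... POSIX.1-2001 interfaces can be relied on here ... */
  #endif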
That doesn't follow from the argument. It is possible to instead have the clang.h file both define the Clang compiler, and undef all the other *emulation* predefs. Or, more usefully, also define separate emulation predefs [...]
The point that I was trying to raise was that with separate files you would have to use other macros besides the platform you want to check for in a single file (e.g. either gcc.h would have to use __clang__ or clang.h would have to use __GNUC__). So since you have a dependency between these platforms, the idea of keeping everything neatly separated in individual files is diluted. This was only a minor point though. File access and speed of inclusion, as mentioned by several people, is still the major issue with the separate files.

On 02/21/2012 08:17 AM, Rene Rivera wrote:
More than that, consider build systems that would have to stat 103 files for each cpp file compiled.
That thought did cross my mind when deciding on the header structure. But, given how many compilers support some form of precompiled inclusion, I decided that it was OK to go with the modular arrangement, especially for the much easier maintainability of the library. One option I can think of is to provide a single concatenated header that is generated from the current modular headers. This, of course, is a bit more work on the library side, and one big concatenated header file is considerably harder for users to understand. Hence this is something that I would seriously consider adjusting if the number of headers becomes a real, measurable problem. That also suggests I should add some header-parsing performance tests.
Bryce Lelbach wrote a library similar to yours: <https://github.com/brycelelbach/detect> It only contains 4 short files + 1 file to include them all, and is perfectly maintainable. If you draw the line explicitly between mutually-exclusive macros and others that are not, then this structure works very well. To me it doesn't make sense for the library to have more than 10 files; that is a sign of a big design and scalability problem.

FWIW, some musings based on a quick look at the library* and other people's comments.

- I understand the rationale that led to the library structure, but it does seem a little too much, especially when all headers are included all the time. It occurs to me that it might be better to have a single NULL-definition header which defines all definitions of a given type (e.g. compiler) as 0 or equivalent; then each platform includes a single header of each type which would redefine any supported macros. This would also mean a single location for all macros would exist in code, which would be handy for quick checks (though I realise this is not a compelling argument!).

- The above assumes a NULL-defined style macro rather than an undefined macro style. As has been mentioned, I think perhaps undefined macros are the better choice for these very low-level definitions. I'd suggest it's likely that if you have to use these for areas of the code, then the code being blocked off is likely not legal in most other cases (and so the alternative C++ compiler "if" usage would not be possible anyway, which removes the only reason I can see to have things as they are).

- Why does the library use division and modulo as a means of extracting values from version numbers? To me this is inherently confusing, easy to misread, and error prone. Using shifts and hex masks would make much more sense, at least in some cases (see the sketch after this message), but I appreciate I may be in the minority here.

- The version macros seem to be missing some obvious, um, versions. If they are intended to be used by users and to extend the library, it would be good to see all "obvious" versions supported out of the box (e.g. "NN", "NN_NN_NN_NN"; and I note that Windows version numbers would require a "NNNN_NN_NN").

- C++ language version is all well and good, but it doesn't help much when trying to work out if a compiler supports a specific feature. It may be beyond the (initial?) scope of the library, but it'd be nice to see some definitions for language features (I realise some are available elsewhere in Boost, but they would seem fit for Predef IMO). So things like variadic templates or partial template specialisation, etc.

- I'd add my vote to shorter names (e.g. ARCH).

- Finally, I very much prefer "CPP" to "CXX". With .cpp all but standard as the file extension, having macros with CXX rather than CPP seems odd (but I understand why it might be better, and this is purely a personal preference).

Just my thoughts :oP

Iain

*Although I've looked at the library a bit, I have not used it. I'm very interested in it though.
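As a sketch of the contrast suggested in the third point above; both packings are illustrative, not the library's actual layout:

  /* Decimal 2/2/5 packing: extraction needs division and modulo. */
  #define VER_MAJOR_DEC(v) ((v) / 10000000)
  #define VER_MINOR_DEC(v) ((v) / 100000 % 100)
  #define VER_PATCH_DEC(v) ((v) % 100000)

  /* Hex 8/7/17-bit packing: extraction is a shift and a mask. */
  #define VER_MAJOR_HEX(v) (((v) >> 24) & 0xFF)
  #define VER_MINOR_HEX(v) (((v) >> 17) & 0x7F)
  #define VER_PATCH_HEX(v) ((v) & 0x1FFFF)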

- The above assumes a NULL-defined style macro rather than an undefined macro style. [snip] (and so the alternative C++ compiler "if" usage would not be possible anyway, which removes the only reason I can see to have things as they are).

Ahem - this point can be safely ignored :o)
I actually knew this wasn't the "only" reason to have things as they are even before it was spelled out (in another post) - honest! I blame a sleep-addled brain. However, the choice between undefined and null-defined is perhaps best left to the user. So a pair like BOOST_CXX_GCC and an associated BOOST_CXX_GCC_VERSION might be worth doing: the first is defined only when the compiler is GCC (or emulates GCC); the second is always defined, but null when not GCC. Best of both worlds, no? (A sketch of the pair follows below.)

Iain
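A sketch of that pairing; the names follow the suggestion above, and the GCC version packing is illustrative:

  #if defined(__GNUC__)
  /* Presence macro: exists only when the compiler is (or emulates) GCC. */
  # define BOOST_CXX_GCC
  /* Version macro: always defined; packed as major.minor.patch. */
  # define BOOST_CXX_GCC_VERSION \
       (__GNUC__ * 10000000 + __GNUC_MINOR__ * 100000 + __GNUC_PATCHLEVEL__)
  #else
  # define BOOST_CXX_GCC_VERSION 0
  #endif

  /* Usage: presence via #ifdef, version via plain comparison. */
  #ifdef BOOST_CXX_GCC
  # if BOOST_CXX_GCC_VERSION < 40600000 /* older than 4.6.0 */
  /* ... */
  # endif
  #endif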

On 27/02/2012 17:02, Iain Denniston wrote:
- The version macros seem to be missing some obvious, um, versions. If they are intended to be used by users and to extend the library, it would be good to see all "obvious" versions supported out of the box (e.g. "NN", "NN_NN_NN_NN"; and I note that Windows version numbers would require a "NNNN_NN_NN").
Oh for crying out loud - seems I needed more caffeine/sleep before I posted this. Sigh. NN_NN_NN_NN should be NN_NN_NN_00, and that is actually what Windows version numbers need.

Also want to add that these macros are not really very clearly named, though it's not clear what would be a better naming scheme. My best effort so far is something like:

BOOST_PREDEF_XYYY_TO_X_0_YYY
BOOST_PREDEF_XXYYZZ00_TO_XX_YY_ZZ

(A sketch of what the second might expand to follows below.) If the expectation is that Predef/these macros are to be taken up by others and extended to further things, then I think these names need to be better. If they are expected to remain internal to the library then I'm just being picky (and am happy to be ignored) :P

Iain
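For illustration, the second of those names might expand to something like this; the name and the 2/2/5 target layout are hypothetical:

  /* Repack a raw XXYYZZ00 value as XX.YY.ZZ in a 2/2/5 decimal layout,
     e.g. 12345600 -> 12.34.56. */
  #define BOOST_PREDEF_XXYYZZ00_TO_XX_YY_ZZ(V) \
      ((((V) / 1000000) % 100) * 10000000 + \
       (((V) / 10000) % 100) * 100000 + \
       (((V) / 100) % 100))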
participants (15)
- Artyom Beilis
- Bjorn Reese
- Dave Abrahams
- Edward Diener
- Gottlob Frege
- Iain Denniston
- Joel Falcou
- Joshua Boyce
- Klaim - Joël Lamotte
- Lars Viklund
- Mathias Gaunard
- Nathan Ridge
- Rene Rivera
- Stewart, Robert
- Thomas Heller