
When you include a header like <boost/predef.h>
you get 103 files included... 103 stat, fopen, fclose, read, and parse operations, and so on.
More than that, consider build systems that have to stat those 103 files for every .cpp file compiled.
One option I can think of is to provide a single concatenated header that is generated from the current modular headers. This is, of course, a bit more work on the library side, and one big concatenated header file is considerably harder for users to read and understand.
But easier on the compiler... :-)
Hence this is something that I would seriously consider adjusting if the number of headers becomes a real measurable problem. Which also suggests that I should add some header parsing performance tests.
The problem is not header-parsing performance only. Every build system checks dependencies, which means it stat()s all the files; this makes the process slower. So even if parsing and stat-ing take only 0.1s, it becomes critical when many files are compiled.
For the set of definitions this library currently provides, I can't see how a compiled test could do any better.
The point is that 99% of this is already available from Boost.Config, so in its current version this becomes yet another Boost.Config.
the "include" much faster and easier to read.
I disagree that this would make it easier to read. If one has to generate multiple "conf_predef.h" files, because one is dealing with multiple configurations, the result is harder to follow, not easier.
See, when you deal with multiple configurations you still need multiple libraries, and even headers, that are not compatible. Look at almost all the big libraries around that use "configuration-defined headers"; I think Boost is the only "big" library that does not generate some "config.h". But I understand that this is out of the current scope, and this is not what prevented me from voting Yes.
And so on.
The "so on" would be subject to the goals ;-)
"So on" covers every configuration aspect that can't be handled by a header-only solution... And that is a lot.
1. Do not create 103 files, but several, possibly larger, files.
If I'm going to do that, I'd create just one file. After all, if file-system performance is the reason for doing it, then it makes more sense to eliminate as much of it as possible.
Ok...
Or use the compiler version - the library is bound to a specific compiler.
That is not sufficient. You can have a compiler that can target multiple configurations, and hence its version may have no relevance to the library one is actually compiling against. Unless you meant something else?
Generally gcc uses its own library unless some 3rd-party software like STLport is used. So the gcc version, plus a test for whether we are using libstdc++, gives us the exact version we need. It has nothing to do with multiple configurations. For example, libstdc++ is released together with the compiler, and its documentation refers to versions by compiler version.
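For illustration, a minimal sketch of that idea (the MY_LIBSTDCXX_VERSION name is made up for this example; __GLIBCXX__ / __GLIBCPP__ and the __GNUC__ family are the real gcc/libstdc++ predefines):

    #include <cstddef>  // any libstdc++ header defines __GLIBCXX__ on GNU toolchains

    #if defined(__GLIBCXX__) || defined(__GLIBCPP__)
      // libstdc++ is in use; on a stock gcc install its version tracks the
      // compiler's own version macros, so reuse those.
      #define MY_LIBSTDCXX_VERSION \
          (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
    #endif

    #if defined(MY_LIBSTDCXX_VERSION) && MY_LIBSTDCXX_VERSION >= 40500
      // e.g. features that first appeared with gcc-4.5's libstdc++
    #endif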
It is not really intuitive, because I do not remember in what year gcc-4.5 was released, but I do know when, for example, unique_ptr was introduced.
Actually you'd have to look up more than just the year. You would need the exact date to be sure you have the correct condition, as any particular feature may have been added late in a year with multiple releases, which has been the case for every gcc release <http://gcc.gnu.org/releases.html>.
More than that, there can be a situation where year(gcc-4.6.0) < year(gcc-4.5.7, a maintenance release), while from the library's point of view version(gcc-4.6.0) > version(gcc-4.5.7). So date-based versioning simply becomes inconsistent from a feature point of view.
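A concrete illustration, using dates from the gcc releases page linked above (worth double-checking there):

    // gcc-4.6.0 released 2011-03-25; gcc-4.5.3 released 2011-04-28
    // by date:    2011-03-25 < 2011-04-28  =>  4.6.0 looks "older" than 4.5.3
    // by version: 4.6.0 > 4.5.3            =>  4.6.0 is actually the newer branch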
Architecture
------------
I noticed several problems:
1. ARM has several architectures; ARM and ARMEL should be distinguished.
Sorry if I don't know... But what exactly is ARMEL in relation to ARM? In particular, how does it relate to the various ARM chips produced <http://en.wikipedia.org/wiki/ARM_architecture>? And how would I detect ARMEL?
I don't know how to distinguish them... But in general, Debian, for example, distinguishes between the two architectures; technically, from the CPU's point of view, they are the same, but they use totally different ABIs, so from the user's perspective they are different, incompatible architectures.
2. There are MIPS and MIPSEL - little-endian MIPS (Debian has both, for example).
Same question as above <http://en.wikipedia.org/wiki/MIPS_architecture>. Hm... Or is "EL" a tag specifying the target endianness? Because if it is, adding an orthogonal endianness definition is something I'd like to do in the near future. Although, like the "word" size, it is not something as easily done as just enumerating architectures.
Same as in the case of ARM: from the user's point of view they are incompatible architectures.
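For what it's worth, on gcc the EL/EB variants can usually be told apart from predefined macros; a rough sketch (the MY_ENDIAN_* names are hypothetical, and __BYTE_ORDER__ only exists on newer gcc):

    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
      #define MY_ENDIAN_LITTLE 1
    #elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
      #define MY_ENDIAN_BIG 1
    #elif defined(__ARMEL__) || defined(__MIPSEL__)
      // gcc defines __ARMEL__ / __MIPSEL__ on little-endian ARM / MIPS targets
      #define MY_ENDIAN_LITTLE 1
    #elif defined(__ARMEB__) || defined(__MIPSEB__)
      #define MY_ENDIAN_BIG 1
    #endif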
- I'd rather see DARWIN than MACOS - MACOS is not good.
Well, given recent developments it might make more sense to just use MACOS and OSX. But yes, I'll consider better naming for those. Although, given that the current definitions only cover 9.0 vs 10.0, it might not yield much of a gain in clarity.
That is why MACOS should be for classic Mac and DARWIN for Mac OS 10, 11, etc. BTW, I use a VM with Darwin 8 (== Mac OS 10.4); it is not Mac OS, but it is an Apple operating system.
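As a side note, the two families are easy to tell apart in the preprocessor; a minimal sketch (these are the commonly known Apple/classic-Mac predefines, not anything from the library under review):

    #if defined(__APPLE__) && defined(__MACH__)
      // Darwin-based: Mac OS X 10.x and plain Darwin (e.g. Darwin 8 == Mac OS 10.4)
    #elif defined(macintosh) || defined(Macintosh)
      // Classic Mac OS (9 and earlier), as defined by the old Mac compilers
    #endif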
Testing...
----------
1. Unfortunately I see none of this.
There is testing for at least the functionality of the version macro and the decomposition utility macros. So I object to the "none" characterization.
These are very basic and cosmetic tests that do not actually test the core functionality.
Testing for this library was something I asked about previously on this list, and there were no really attainable solutions. The solutions boiled down to either simulating the target configuration or human verification. Unfortunately the former is essentially a tautological test, as the test specifies both the definition and the expected result. Human verification has the benefit that it is distributed and, in this use case, doesn't need to be repeated.
Small examples:
1. You can use Boost.Build, which already defines almost 80% of those parameters for the target OS, and check against it.
2. You can do particular checks when target == host and use uname - at least for native builds it would be checked, and for non-native builds you can assume the definitions do not change during cross compilation (see the sketch below).
There are more ideas. I understand that they would not solve all use cases and would leave some cases unsolved, but at least it would give some automated testing for the core of the system.
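To make idea 2 concrete, a minimal sketch of such a native-build check (the test logic here is made up; only boost/predef.h is from the library, and the BOOST_OS_LINUX spelling is an assumption about its macro naming):

    #include <boost/predef.h>
    #include <sys/utsname.h>
    #include <cassert>
    #include <cstring>

    int main()
    {
        utsname u;                 // uname(2) describes the host system
        assert(uname(&u) == 0);
    #if BOOST_OS_LINUX
        // For a native build, the library's claim must match the kernel's.
        assert(std::strcmp(u.sysname, "Linux") == 0);
    #endif
        return 0;
    }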
I understand that it is not simple, but...
Indeed :-\
This library should not be simple ;-)
The current testing only involves the automated testing I mentioned above for the version and decomposition macros, plus testing on the platforms that I have immediate access to: Windows, OSX, Linux (openSUSE), and iOS.
I would strongly recommend taking a look at OS Zoo and qemu. That gives you at least 4-5 architectures (x86, x86_64, mips(el), arm, sparc); you can also install OpenSolaris and FreeBSD in a VM, and of course Darwin 8. So if all your tests are manual, at least make sure they work as expected.
1. Almost all tests do not even compile on Cygwin...
Could you elaborate? What cygwin version are you trying?
1.7. I'll give you the error when I get to a PC with Cygwin.
2. Under mingw gcc-4.5.3 I see the following output from info_as_cpp.
For example:
** Detected ** .... BOOST_LIBSTD_GNU = 410400028 (41,4,28) | GNU ...
How should I treat it?
The docs say "Version number available as year (from 1970), month, and day." I.e. the above says libstdc++ released on 2011/04/28.
Ok, that was not clear to me; I'd expect something like (2011,4,28) in such a case, which is much clearer. But ok, I understand this now.
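For the record, the decomposition works out like this, assuming the library's usual major*10000000 + minor*100000 + patch packing of version numbers:

    // BOOST_LIBSTD_GNU = 410400028
    //   major = 410400028 / 10000000       == 41   ->  1970 + 41 = 2011
    //   minor = 410400028 / 100000 % 100   == 4
    //   patch = 410400028 % 100000         == 28
    // i.e. the libstdc++ snapshot dated 2011/04/28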
- Do you think the library should be accepted as a Boost library?
Unfortunately, No.
Sorry to hear that...
But I strongly recommend improving the library and submitting it for another review.
Problems:
1. Too many macros are not as fine-grained as they should be, or they generate unexpected or surprising definitions.
Do you have more examples than the ones you mentioned above?
At least when I looked at the things I'm familiar with:
- gcc - ok
- linux - ok
- libstdc++ - bad versioning
- cxx_cygwin - no such thing
- os_windows - gives 1 and not a version
I found that at least 50% of them gave unexpected results. The others I just could not check; that is why I asked on what platforms you have manually tested the library. So I can't say how many more problems there are, but there are too many for the small test case that I looked at.
2. Documentation must be improved because for now it is very unclear.
All documentation can always be improved ;-) Did my answers above address this for you?
The problem is that 80% of the macros have such brief documentation that I can't really understand how to use them: what version to check, what the value ranges are, examples, etc. The particular cases above were only examples of a general problem.
3. Many problems with specific definitions (e.g. there is no such thing as a "Cygwin compiler") should be fixed.
4. 103 header files, which would slow down the build process.
5. No real test suite exists.
Is there more I can expand on to address those?
I mostly think that much deeper testing is required. I mean, every macro that can be tested with the available tools should be tested, and the results validated with users. I don't expect that you'll be able to test, for example, OpenVMS or z/390, but there is no reason for basic things to behave unexpectedly. I understand that this would increase the amount of work by 200-300% if not more, but my current feeling is that the library does not seem to be ready. I'm sorry - it is a very good direction, but I feel that more work is required; that is why I don't think it is ready for Boost.

Artyom Beilis
--------------
CppCMS - C++ Web Framework: http://cppcms.com/
CppDB - C++ SQL Connectivity: http://cppcms.com/sql/cppdb/