[1.35.0] Intel compiler - does anyone care?

The great bulk of remaining 1.35.0 release test issues are with the Intel compiler. See http://beta.boost.org/development/tests/release/developer/issues.html

Does anyone really care about this compiler? Should we drop it from the high priority list?

If there are folks who do care, perhaps they could suggest a plan to eliminate the failures, or at least shed light on why so many tests are failing.

--Beman

Beman Dawes:
If there are folks who do care, perhaps they could suggest a plan to eliminate the failures, or at least shed light on why so many tests are failing.
Most failures seem caused by the standard library which for some reason fails on code like

#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }

I don't have the compiler here to verify that the above fails, but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.

Peter Dimov wrote:
Beman Dawes:
If there are folks who do care, perhaps they could suggest a plan to eliminate the failures, or at least shed light on why so many tests are failing.
Most failures seem caused by the standard library which for some reason fails on code like
#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }
I don't have the compiler here to verify that the above fails, but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.
FWIW, the above code works fine with ICC 9.1 on Windows.

Regards,
Stefan

Stefan Seefeld:
Peter Dimov wrote:
Beman Dawes:
If there are folks who do care, perhaps they could suggest a plan to eliminate the failures, or at least shed light on why so many tests are failing.
Most failures seem caused by the standard library which for some reason fails on code like
#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }
I don't have the compiler here to verify that the above fails, but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.
FWIW, the above code works fine with ICC 9.1 on Windows.
The failures are with 10.0, though, which seems to use Dinkumware 4.05 according to config_info.

Most failures seem caused by the standard library which for some reason fails on code like
#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }
I don't have the compiler here to verify that the above fails, but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.
FWIW, the above code works fine with ICC 9.1 on Windows.
Fails on ICC 10.0.25 on Windows 32bit with:

\test.cpp(6): error: more than one instance of overloaded function "std::copy" matches the argument list:
  function template "_OutElem *__cdecl std::copy(_InIt, _InIt, _OutElem (&)[_Size])"
  function template "std::_Enable_if<<expression>, _OutIt>::_Result __cdecl std::copy(_InIt, _InIt, _OutIt)"
  argument types are: (char [10], char *, char [10])
    std::copy( a, a + 10, b );

[Compiles OK as a MSVC 2005 project but fails as above when converted to an Intel project]

I certainly care because ICC generates the best vectorised code in many of our apps. We use it mainly with Linux 64 bit and *very* occasionally Windows 32 bit.

Last time I tried to help out with regression testing (admittedly ad hoc, as I can't dedicate a resource) it wasn't really seen as necessary since there was already reasonable ICC coverage. Has this changed?

Paul

Paul Baxter wrote:
Most failures seem caused by the standard library which for some reason fails on code like
#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }
I don't have the compiler here to verify that the above fails, but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.
FWIW, the above code works fine with ICC 9.1 on Windows.
I've seen this with ICC 9.1 as well; maybe a more recent update introduced this problem? At least I have seen exactly the same error with std::copy and ICC 9.1.<some recent update for ICC after VS2005 SP1>. Which version did you try it with?

Cheers,
Anteru

Paul Baxter wrote:
Most failures seem caused by the standard library which for some reason fails on code like
#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }
I don't have the compiler here to verify that the above fails, but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.
FWIW, the above code works fine with ICC 9.1 on Windows.
Fails on ICC 10.0.25 on Windows 32bit with:
\test.cpp(6): error: more than one instance of overloaded function "std::copy" matches the argument list:
function template "_OutElem *__cdecl std::copy(_InIt, _InIt, _OutElem (&)[_Size])"
function template "std::_Enable_if<<expression>, _OutIt>::_Result __cdecl std::copy(_InIt, _InIt, _OutIt)"
argument types are: (char [10], char *, char [10])
  std::copy( a, a + 10, b );
Thanks for the confirmation.
[Compiles OK as a MSVC 2005 project but fails as above when converted to an Intel project]
I certainly care because ICC generates the best vectorised code in many of our apps. Use it mainly with Linux 64 bit and *very* occasionally Windows 32 bit.
Last time I tried to help out with regression testing (admittedly ad-hoc as I can't dedicate a resource) it wasn't really seen as necessary since there was already reasonable ICC coverage. Has this changed?
The problem with ICC isn't the test coverage. Rather, no one seems to take any interest in the results, post fixes or workarounds for Boost code, or submit bug reports to Intel or Dinkumware. --Beman

Peter Dimov wrote:
Beman Dawes:
If there are folks who do care, perhaps they could suggest a plan to eliminate the failures, or at least shed light on why so many tests are failing.
Most failures seem caused by the standard library which for some reason fails on code like
#include <algorithm>
int main() { char a[ 10 ], b[ 10 ]; std::copy( a, a + 10, b ); }
I don't have the compiler here to verify that the above fails,
It fails for me, as well as several others. Would you (and any other Boost developers who want to comment) be more inclined to test regularly with the Intel compiler and report bugs to them if we could get you a license?
but if it does, we probably need to submit it to Intel and Dinkumware and see what they have to say about it.
I've emailed a contact at Intel. If the past is any indication, they will fix the problem, but probably later rather than sooner. Thanks for reducing this to a simple test case! --Beman

On Jan 4, 2008, at 9:35 AM, Beman Dawes wrote:
Would you (and any other Boost developers who want to comment) be more inclined to test regularly with the Intel compiler and report bugs to them if we could get you a license?
Intel offers free licenses for non-commercial use of their C++ compiler (at least the Linux version). Details at: http://www.intel.com/cd/software/products/asmo-na/eng/219771.htm

A FAQ defining non-commercial use is at: http://www.intel.com/cd/software/products/asmo-na/eng/219692.htm

IANAL, but my reading is that this wouldn't cover people at Boost Consulting, as they (hopefully) get compensated for services related to Boost, but most everybody else should be OK. But it would be worthwhile to have a chat with the Intel folks, and see if they're willing to make an exception for them as well. A few free licenses to the right people would benefit Intel, both in terms of bug reports and workarounds for making Boost compile cleanly under the Intel C++ compiler.

Regards,
Maurizio

Maurizio Vitale wrote:
On Jan 4, 2008, at 9:35 AM, Beman Dawes wrote:
Would you (and any other Boost developers who want to comment) be more inclined to test regularly with the Intel compiler and report bugs to them if we could get you a license?
Intel offers free licenses for non-commercial use of their C++ compiler (at least the Linux version).
Yep, but not their Windows compiler.
But it would be worthwhile to have a chat with the Intel folks, and see if they're willing to make an exception for them as well. A few free licenses to the right people would benefit Intel, both in terms of bug reports and workarounds for making Boost compile cleanly under the Intel C++ compiler.
They've already been kind enough to give us two or three licenses, but those are about to expire, and I was wondering if it would be worthwhile to ask for a few more when I apply for the renewals. --Beman

Beman Dawes wrote:
The great bulk of remaining 1.35.0 release test issues are with the Intel compiler. See http://beta.boost.org/development/tests/release/developer/issues.html
Does anyone really care about this compiler? Should we drop it from the high priority list?
If there are folks who do care, perhaps they could suggest a plan to eliminate the failures, or at least shed light on why so many tests are failing.
Intel-win seems to have a significant bug around partial function template ordering. A possible workaround is:

#define _SECURE_SCL 0

Regards,
-- Shunsuke Sogame

Intel-win seems to have a significant bug around partial function template ordering. A possible workaround is:
#define _SECURE_SCL 0
Works fine in v10.0.25 win32 with this added at the top of the file.
Will try with a newer Windows/Linux version tomorrow if possible, to see if this bug has gone away in v10.1.
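
(For concreteness, here is the thread's repro with the workaround applied the way Paul describes, i.e. added at the top of the file. This is a sketch rather than code posted in the thread; the define has to precede the first standard library include so it is seen before the Dinkumware headers pick their default.)

// Repro from earlier in the thread with the workaround at the top of the file.
// The define must come before any standard header, otherwise the Dinkumware
// headers default _SECURE_SCL themselves.
#define _SECURE_SCL 0
#include <algorithm>

int main()
{
    char a[10], b[10];
    std::copy(a, a + 10, b);
    return 0;
}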

Paul Baxter wrote:
Intel-win seems to have a significant bug around partial function template ordering. A possible workaround is:
#define _SECURE_SCL 0
Works fine in v10.0.25 win32 with this added at the top of the file
Will try with a newer Windows/Linux version tomorrow if possible, to see if this bug has gone away in v10.1
Jennifer Jiang at Intel has tried Peter's test program with their latest nightly build, and it is still failing. She has forwarded the test to their front-end engineers.

Since we probably want to apply a fix that works with current compilers in the field, we probably want to do something like this:

#if defined(BOOST_INTEL) && defined(BOOST_DINKUMWARE_STDLIB)
# define _SECURE_SCL 0
#endif

This should be applied to test programs, not to header files. If it were applied to header files (or added to the bjam toolset for Intel), it would affect user code. Some users might actually want the Microsoft-mandated checking for their own code.

Comments?

--Beman
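
(A minimal sketch of how such a guard might sit at the top of a test's translation unit. It uses the compiler's predefined macros __INTEL_COMPILER and _MSC_VER rather than BOOST_INTEL / BOOST_DINKUMWARE_STDLIB, since the define generally has to be seen before the first standard library header is included and the Boost.Config macros are only available after one has been; treat it as an adaptation of the snippet above, not the exact proposal.)

// Hypothetical placement at the top of a single test source file; the
// __INTEL_COMPILER / _MSC_VER / !defined(_SECURE_SCL) conditions are an
// adaptation, not part of the snippet posted above.
#if defined(__INTEL_COMPILER) && defined(_MSC_VER) && !defined(_SECURE_SCL)
# define _SECURE_SCL 0   // only when the user has not chosen a value already
#endif

#include <algorithm>

int main()
{
    char a[10], b[10];
    std::copy(a, a + 10, b);
    return 0;
}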

Beman Dawes:
Since we probably want to apply a fix that works with current compilers in the field, we probably want to do something like this:
#if defined(BOOST_INTEL) && defined(BOOST_DINKUMWARE_STDLIB)
# define _SECURE_SCL 0
#endif
This should be applied to test programs, not to header files. If it were applied to header files (or added to the bjam toolset for Intel), it would affect user code. Some users might actually want the Microsoft-mandated checking for their own code.
Comments?
I don't agree. In principle, the tests should be a faithful representation of reality; if a certain snippet of user code fails, so should the corresponding test.

We can declare that we only support _SECURE_SCL=0, and run the tests with that defined, but we should *not* hack the tests to pass under _SECURE_SCL=1 with the full knowledge that user code will fail under the same conditions.

(Our other option is to patch the libraries for EDG == 3 && _SECURE_SCL.)

This should be applied to test programs, not to header files. If it were applied to header files (or added to the bjam toolset for Intel), it would affect user code. Some users might actually want the Microsoft-mandated checking for their own code.
Comments?
I don't agree. In principle, the tests should be a faithful representation of reality; if a certain snippet of user code fails, so should the corresponding test.
We can declare that we only support _SECURE_SCL=0, and run the tests with that defined, but we should *not* hack the tests to pass under _SECURE_SCL=1 with the full knowledge that user code will fail under the same conditions.
Previous suggestions ( http://garrys-brain.blogspot.com/2006/10/boost-library-and-visual-studio-200... ) have included adding this definition (and others) as part of the bjam compiler options for Intel's compiler. If a user wants to build his program with different compiler options, they'd need to look at the ramifications.

What's wrong with having those as the recommended compiler options?

Paul

Paul Baxter wrote:
This should be applied to test programs, not to header files. If it were applied to header files (or added to the bjam toolset for Intel), it would affect user code. Some users might actually want the Microsoft-mandated checking for their own code.
Comments?
I don't agree. In principle, the tests should be a faithful representation of reality; if a certain snippet of user code fails, so should the corresponding test.
We can declare that we only support _SECURE_SCL=0, and run the tests with that defined, but we should *not* hack the tests to pass under _SECURE_SCL=1 with the full knowledge that user code will fail under the same conditions.
Previous suggestions ( http://garrys-brain.blogspot.com/2006/10/boost-library-and-visual-studio-200... ) have included adding this definition (and others) as part of the bjam compiler options for Intel's compiler. If a user wants to build his program with different compiler options, they'd need to look at the ramifications.
What's wrong with having that as the recommended compiler options?
I guess that's OK. The only people who would be inconvenienced are those who *want* the _SECURE_SCL checks. --Beman

Peter Dimov wrote:
Beman Dawes:
Since we probably want to apply a fix that works with current compilers in the field, we probably want to do something like this:
#if defined(BOOST_INTEL) && defined(BOOST_DINKUMWARE_STDLIB)
# define _SECURE_SCL 0
#endif
This should be applied to test programs, not to header files. If it were applied to header files (or added to the bjam toolset for Intel), it would affect user code. Some users might actually want the Microsoft-mandated checking for their own code.
Comments?
I don't agree. In principle, the tests should be a faithful representation of reality; if a certain snippet of user code fails, so should the corresponding test.
We can declare that we only support _SECURE_SCL=0, and run the tests with that defined, but we should *not* hack the tests to pass under _SECURE_SCL=1 with the full knowledge that user code will fail under the same conditions.
I'm on the fence, but am willing to try that approach. It is simpler than anything that involves modifying each library.
(Our other option is to patch the libraries for EDG == 3 && _SECURE_SCL.)
Patch the standard library? That doesn't sound attractive. Patch the Boost libraries? Bill Plauger suggested that as a possibility in a private email:
3) Avoid the problem by making the first two arguments the same type, as in std::copy(a + 0, a + 10, b), or by making the third argument a non-array type, as in std::copy(a, a + 10, &b[0]).
That seems pretty messy. Let's go with the <define>_SECURE_SCL=0 approach, at least for now. I'll change the toolset, or get someone else to do it if I can't figure out how. Thanks, --Beman
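
(To make the two call forms concrete, here is the original repro rewritten with the variants Plauger suggests. This is only an illustration of the quoted suggestion; whether it actually silences the ICC 10.0 ambiguity is not verified here, since the affected compiler is not at hand.)

// The repro with Plauger's suggested call forms.
#include <algorithm>

int main()
{
    char a[10], b[10];
    std::copy(a + 0, a + 10, b);   // first two arguments are now the same type (char*)
    std::copy(a, a + 10, &b[0]);   // third argument is a plain pointer, not an array
    return 0;
}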

Beman Dawes wrote:
Let's go with the <define>_SECURE_SCL=0 approach, at least for now. I'll change the toolset, or get someone else to do it if I can't figure out how.
Thanks,
--Beman
Hi,

I have been following this with interest, since I've faced problems trying to build CVS trunk as well as 1.34.1 with Intel 10.1.013 on Win32 with the VC71 STL. While my STL is thankfully pre-_SECURE_SCL, turning it off would appear to be the best course, since it has performance implications as well. The only catch is that it is not possible to mix libraries with mismatched values for this flag, so the user has to explicitly set it in their programs as well (because the default is on). Thus it would probably be better to include the #define in config.hpp if the Dinkumware libraries are used with a non-msvc toolset.

Whilst this is being looked at, can you please investigate why bjam does not build the *-iw-gd-mt-1_3[45].libs (i.e. using the debug multi-threaded runtime) for any of the targets with Intel 10? It also does not pass on the compiler options given with cxxflags (like /QxO, /Ow, etc.). I gave up on bjam yesterday and built regex, system, etc. in the IDE.

Ta,
Amit

Beman Dawes wrote:
That seems pretty messy.
Let's go with the <define>_SECURE_SCL=0 approach, at least for now. I'll change the toolset, or get someone else to do it if I can't figure out how.
I have serious concerns about this approach. Intel C++ is NOT a standalone product and has to be used together with MSVC. If _SECURE_SCL=0 is ONLY specified for code compiled with Intel, it will almost certainly result in crashes when linked with code compiled without _SECURE_SCL=0 defined (we had this painful experience). Due to various reasons (compiler stability, compile time, and the inability to compile MFC/ATL, etc.), we only use Intel to compile our most performance-critical code and use MSVC for the rest. I suspect this usage is pretty common. We also patched autolink to always link to MSVC-compiled Boost libraries.

My opinion is to either patch the code (which is what we did) or leave it alone and let the user deal with it. I do not think it is such a bad idea to run the Intel regression tests with _SECURE_SCL defined. The important thing is that the macro needs to be defined consistently.

Regards,
Sean
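
(One way to make Sean's "defined consistently" point mechanical: a hypothetical guard, not something from Boost or from the thread, that a project could put in a common header so that a translation unit compiled with checked iterators enabled fails to build against binaries assumed to have been built with _SECURE_SCL=0.)

// Hypothetical consistency check; it assumes the prebuilt libraries were
// built with _SECURE_SCL=0. Once any Dinkumware standard header has been
// included, _SECURE_SCL has its effective value, so a mismatch becomes a
// compile error instead of a crash at run time.
#include <algorithm>

#if defined(_SECURE_SCL) && _SECURE_SCL != 0
# error "Rebuild with _SECURE_SCL=0 to match the prebuilt Boost libraries"
#endif

int main()
{
    return 0;
}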

Sean Huang wrote:
We also patched autolink to always link to MSVC-compiled Boost libraries. My opinion is to either patch the code (which is what we did) or leave it alone and let the user deal with it. I do not think it is such a bad idea to run the Intel regression tests with _SECURE_SCL defined. The important thing is that the macro needs to be defined consistently.
Well, the problem is that a new user will expect that he can just link against a Boost library built with MSVC, for example, and freely switch between the Intel and Microsoft compilers -- which is clearly not possible if the Intel-generated libraries require the use of _SECURE_SCL=0. Even with clear documentation, lots of people will run into this, as -- for better or worse -- _SECURE_SCL is 1 by default.

The best course is to report this to Intel/Dinkumware and let them sort it out, as it affects basically every user who has bought the Intel compiler for MSVC -- which has been done already, IIRC.

Cheers,
Anteru
participants (9)
- Amit
- Anteru
- Beman Dawes
- Maurizio Vitale
- Paul Baxter
- Peter Dimov
- Sean Huang
- shunsuke
- Stefan Seefeld