[integer] Recent changes introduced failures on many platforms

The two tests integer_mask_test and integer_test have been broken by the recent commits to integer on nearly all platforms (probably all but Windows). The error message is:

In file included from ../libs/integer/test/integer_test.cpp:29:
../boost/integer.hpp:154:39: error: invalid suffix "ui64" on integer constant
../libs/integer/test/integer_test.cpp:115:39: error: invalid suffix "ui64" on integer constant
../libs/integer/test/integer_test.cpp:215:39: error: invalid suffix "ui64" on integer constant

Regards, Markus

On Jul 21, 2008, at 3:31 AM, Markus Schöpflin wrote:
The two tests integer_mask_test and integer_test have been broken by the recent commits to integer on nearly all platforms. (Probably all but Windows) The error message is:
Actually my non-Windows machine compiles the code just fine....
In file included from ../libs/integer/test/integer_test.cpp:29:
../boost/integer.hpp:154:39: error: invalid suffix "ui64" on integer constant
../libs/integer/test/integer_test.cpp:115:39: error: invalid suffix "ui64" on integer constant
../libs/integer/test/integer_test.cpp:215:39: error: invalid suffix "ui64" on integer constant
...But it seems that not every platform that doesn't use the __int64 types ignores the constant. I copied that code from some other file (I forgot which one); presumably, that author couldn't find a ULONG_MAX equivalent for the __int64 type family. If there is one, or a better alternative that never chokes on non-Windows systems, can you tell me? I wonder if my compiler is too permissive in ignoring the constant, or the others are too strict in interpreting it. Which compilers are affected? (I'm using Apple's special GCC 4.0.1, Mac OS X 10.4 PowerPC.) This is on the trunk, right? I never intended this to go into the current release branch.

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Daryle Walker <darylew <at> hotmail.com> writes:
..But it seems that not every platform that doesn't use the __int64 types ignores the constant. I copied that code from some other file (I forgot which one.); presumably, that author couldn't find a ULONG_MAX equivalent for the __int64 type family. If there is one, or a better alternative that never chokes on non-Windows systems, can you tell me?
At least Linux/GCC has a constant named "ULONG_LONG_MAX" defined in <limits.h>.

On Jul 22, 2008, at 4:25 AM, René Bürgel wrote:
Daryle Walker <darylew <at> hotmail.com> writes:
..But it seems that not every platform that doesn't use the __int64 types ignores the constant. I copied that code from some other file (I forgot which one.); presumably, that author couldn't find a ULONG_MAX equivalent for the __int64 type family. If there is one, or a better alternative that never chokes on non-Windows systems, can you tell me?
at least linux/gcc has a constant named "ULONG_LONG_MAX" defined in <limits.h>
It's past time that I (or someone else) unify all the particular types, constants, and macros pertaining to the double-long types. Furthermore, since the double-long and the __int64 types are mutually exclusive (AFAIK), the unifying items should shadow the __int64 family instead when that's defined in lieu of the double-long types.

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Daryle Walker wrote:
...But it seems that not every platform that doesn't use the __int64 types ignores the constant. I copied that code from some other file (I forgot which one.); presumably, that author couldn't find a ULONG_MAX equivalent for the __int64 type family. If there is one, or a better alternative that never chokes on non-Windows systems, can you tell me?
I wonder if my compiler is too permissive in ignoring the constant, or the others are too strict in interpreting it. Which compilers are affected? (I'm using Apple's special GCC 4.0.1, Mac OS X 10.4 PowerPC.) This is on the trunk, right? I never intended this to go into the current release branch.
#if defined(BOOST_HAS_LONG_LONG)
Then use unsigned long long, and uLL as the suffix.
#elif defined(BOOST_HAS_MS_INT64)
Then use __int64 and the ui64 suffix.
#endif

HTH, John.

Daryle Walker wrote:
On Jul 21, 2008, at 3:31 AM, Markus Schöpflin wrote:
[...]
I wonder if my compiler is too permissive in ignoring the constant, or the others are too strict in interpreting it. Which compilers are affected? (I'm using Apple's special GCC 4.0.1, Mac OS X 10.4 PowerPC.) This is on the trunk, right? I never intended this to go into the current release branch.
Just have a look at http://beta.boost.org/development/tests/trunk/developer/integer.html; I think most of the failures for integer_mask_test are somehow caused by this. And yes, I was talking about the trunk, not the release branch.

Thanks, Markus

On Jul 22, 2008, at 6:09 AM, Markus Schöpflin wrote:
Daryle Walker wrote:
I wonder if my compiler is too permissive in ignoring the constant, or the others are too strict in interpreting it. Which compilers are affected? (I'm using Apple's special GCC 4.0.1, Mac OS X 10.4 PowerPC.) This is on the trunk, right? I never intended this to go into the current release branch.
Just have a look at http://beta.boost.org/development/tests/trunk/developer/integer.html, I think most of the failures for integer_mask_test are somehow caused by this.
Some of the compilers had weird errors reading in the preprocessor marker '#' as regular code. I suspect that some compilers don't like preprocessor selection within a macro-function call, which is what I'm doing. I'll fix that first and we'll see what happens.

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Daryle Walker wrote: [...]
Some of the compilers had weird errors reading in the preprocessor marker '#' as regular code. I suspect that some compilers don't like preprocessor selection within a macro-function call, which is what I'm doing. I'll fix that first and we'll see what happens.
I noticed that you just checked in the change. On my platform (Tru64/CXX) I still get the following errors:

cxx: Error: ../../../boost/integer.hpp, line 158: extra text after expected end of number (extrachrnum)
#elif defined(BOOST_HAS_MS_INT64) && (0xFFFFFFFFFFFFFFFFui64 > ULONG_MAX)
---------------------------------------------------------^
cxx: Error: integer_test.cpp, line 115: extra text after expected end of number (extrachrnum)
#elif defined(BOOST_HAS_MS_INT64) && (0xFFFFFFFFFFFFFFFFui64 > ULONG_MAX)
---------------------------------------------------------^
cxx: Error: integer_test.cpp, line 215: extra text after expected end of number (extrachrnum)
#elif defined(BOOST_HAS_MS_INT64) && (0xFFFFFFFFFFFFFFFFui64 > ULONG_MAX)
---------------------------------------------------------^

Thanks, Markus

On Jul 23, 2008, at 3:26 AM, Markus Schöpflin wrote:
I noticed that you just checked in the change. On my platform (Tru64/CXX) I still get the following errors:
cxx: Error: ../../../boost/integer.hpp, line 158: extra text after expected end of number (extrachrnum)
#elif defined(BOOST_HAS_MS_INT64) && (0xFFFFFFFFFFFFFFFFui64 > ULONG_MAX)
---------------------------------------------------------^
cxx: Error: integer_test.cpp, line 115: extra text after expected end of number (extrachrnum)
#elif defined(BOOST_HAS_MS_INT64) && (0xFFFFFFFFFFFFFFFFui64 > ULONG_MAX)
---------------------------------------------------------^
cxx: Error: integer_test.cpp, line 215: extra text after expected end of number (extrachrnum)
#elif defined(BOOST_HAS_MS_INT64) && (0xFFFFFFFFFFFFFFFFui64 > ULONG_MAX)
---------------------------------------------------------^
I encapsulated the double-long and __int64 type families under a single interface in the hidden header "boost/detail/extended_integer.hpp" and updated my test code to match. If you haven't re-compiled already, check it out. The nice thing is, if I still haven't fixed the problem, there's now only _one_ place that has to be adjusted.

Wait, you're running on a non-Windows system, right? So the only problem was that your compiler was still trying to parse the "0xFFFFFFFFFFFFFFFFui64" for comparisons even though that line is otherwise ignored. If so, then the new sub-header should be sufficient, since that text is more isolated now. (Your code will see a more-compatible interpretation of "BOOST_UXINT_MAX" instead.)

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Daryle Walker wrote: [...]
I encapsulated the double-long and __int64 type families under a single interface in the hidden header "boost/detail/extended_integer.hpp" and updated my test code to match. If you haven't re-compiled already, check it out. The nice thing is, if I still haven't fixed the problem, there's now only _one_ place that has to be adjusted.
OK, the preprocessor problems are now gone.
Wait, you running on a non-Windows system, right? So the only problem was that your compiler was still trying to parse the "0xFFFFFFFFFFFFFFFFui64" for comparisons even though that line is ignored otherwise. If so, then the new sub-header should be sufficient since that text is more isolated now. (Your code will see a more-compatible interpretation of "BOOST_UXINT_MAX" instead.)
Yes, it's a non-Windows system. Now I have the following issues:

1) I need to increase the maximum number of pending instantiations; it seems that the default value of the compiler (64) is not enough. This is also true for the acc toolset, AFAICT.

2) I get a number of compile time errors along the lines of:

cxx: Error: integer_test.cpp, line 623: expression must have a constant value (exprnotconst)
BOOST_CHECK_EQUAL( static_cast<typename
----^
cxx: Error: ../../../boost/integer.hpp, line 375: class "boost::maximum_unsigned_integral<<error-constant>>" has no member "type" (notmember)
detected during instantiation of class "boost::uint_value_t<Value> [with Value=<error-constant>]" at line 623 of "integer_test.cpp"
typedef typename maximum_unsigned_integral<Value>::type least;
---------------------------------------------------------^

It looks like all EDG based compilers are flagging this error, see for example http://tinyurl.com/58s3zc.

3) There are quite a few warnings about truncation of values or sign changes, where I can't tell whether they are expected or not.

Regards, Markus

On Jul 28, 2008, at 4:36 AM, Markus Schöpflin wrote:
Now I have the following issues:
1) I need to increase the maximum number of pending instantiations, it seems that the default value of the compiler (64) is not enough. This is also true for the acc toolset, AFAICT.
That's the price of exhaustive testing. Have you tried setting CONTROL_FULL_COUNTS to 0? (That preprocessor flag is made to be overridable by your command-line parameters.) That'll reduce the number of bit-count cases to just 8 or so, but you might miss some important cases (like bit-count == 0).
2) I get a number of compile time errors along the lines of:
cxx: Error: integer_test.cpp, line 623: expression must have a constant value (exprnotconst)
BOOST_CHECK_EQUAL( static_cast<typename
----^
cxx: Error: ../../../boost/integer.hpp, line 375: class "boost::maximum_unsigned_integral<<error-constant>>" has no member "type" (notmember)
detected during instantiation of class "boost::uint_value_t<Value> [with Value=<error-constant>]" at line 623 of "integer_test.cpp"
typedef typename maximum_unsigned_integral<Value>::type least;
---------------------------------------------------------^
It looks like all EDG based compilers are flagging this error, see for example http://tinyurl.com/58s3zc.
I've tried redoing how the compile-time integral constants are made. Try out revision 47852.
3) There are quite a few warning about truncation of values or sign changes, where I can't tell whether they are expected or not.
I saw those in the same URL you gave. Those tests are supposed to set either the highest bit or the highest mantissa and/or sign bit, roll off the highest bit in a shift while adding the old bit value back in, then confirm that the value didn't change. (If I missed the highest bit, then it wouldn't roll off and the value would change.) Any compiler that warns about bits rolling off or transitioning between the mantissa and sign areas will complain.

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Daryle Walker wrote:
On Jul 28, 2008, at 4:36 AM, Markus Schöpflin wrote:
Now I have the following issues:
1) I need to increase the maximum number of pending instantiations, it seems that the default value of the compiler (64) is not enough. This is also true for the acc toolset, AFAICT.
That's the price of exhaustive testing. Have you tried setting CONTROL_FULL_COUNTS to 0? (That preprocessor flag is made to be overridable by your command-line parameters.) That'll reduce the number of bit-count cases to just 8 or so, but you might miss some important cases (like bit-count == 0).
As long as it's only needed for running the regression tests, I'll just go ahead and modify the compiler parameters in my local configuration file.
2) I get a number of compile time errors along the lines of:
cxx: Error: integer_test.cpp, line 623: expression must have a constant value (exprnotconst)
BOOST_CHECK_EQUAL( static_cast<typename
----^
cxx: Error: ../../../boost/integer.hpp, line 375: class "boost::maximum_unsigned_integral<<error-constant>>" has no member "type" (notmember)
detected during instantiation of class "boost::uint_value_t<Value> [with Value=<error-constant>]" at line 623 of "integer_test.cpp"
typedef typename maximum_unsigned_integral<Value>::type least;
---------------------------------------------------------^
It looks like all EDG based compilers are flagging this error, see for example http://tinyurl.com/58s3zc.
I've tried redoing how the compile-time integral constants are made. Try out revision 47852.
OK, it works now. All integer tests are passing for me.
3) There are quite a few warning about truncation of values or sign changes, where I can't tell whether they are expected or not.
I saw those in the same URL you gave. Those tests are supposed to set either the highest bit or the highest mantissa and/or sign bit, roll off the highest bit in a shift while adding the old bit value back in, then confirm that the value didn't change. (If I missed the highest bit, then it wouldn't roll off and the value will change.) Any compiler that warns about bits rolling off or transitioning between the mantissa and sign areas will complain.
OK, so these are to be expected and can be ignored. Thanks for your work, Markus

Markus Schöpflin wrote:
As long as it's only needed for running the regression tests, I'll just go ahead and modify the compiler parameters in my local configuration file.
Since setting CONTROL_FULL_COUNTS to zero fixes the compilation error on HP-UX/aC++ also, wouldn't it make sense to set the macro in the source -- integer_test.cpp -- for all affected compilers?

Thanks, Boris

Gubenko, Boris schrieb:
Markus Schöpflin wrote:
As long as it's only needed for running the regression tests, I'll just go ahead and modify the compiler parameters in my local configuration file.
Since setting CONTROL_FULL_COUNTS to zero fixes the compilation error on HP-UX/aC++ also, wouldn't it make sense to set the macro in the source -- integer_test.cpp -- for all affected compilers?
As Daryle said that setting this macro might result in missing some important test cases, I opted for increasing the number of pending instantiations in my user configuration. If there is a supported Boost.Build feature (is there?) to specify a maximum number of pending instantiations, we could even add this to the Jamfile for the test in question. Or is there a specific reason that you don't want to set this compiler parameter for the regression tests?

Regards, Markus

Markus Schöpflin schrieb: [...]
If there is a supported Boost.Build feature (is there?) to specify a maximum number of pending instantiations, we could even add this to the Jamfile for the test in question.
There is such a feature; it was added 6 days ago:

feature.feature c++-template-depth
    :
        [ numbers.range 128 1024 : 128 ]
        [ numbers.range 20 1000 : 10 ]
        # Maximum template instantiation depth guaranteed
        # for ANSI/ISO C++ conforming programs.
        17
    :
        incidental propagated ;

As a side note, this makes me wonder what the default value for this feature actually is. 128? And could the first range perhaps be modified to 64 1024 : 64? But anyway, I think that adding c++-template-depth=65 (Is 65 enough? 64 is not, 128 is for sure.) to the integer test is the best solution for this, provided that we add support for this feature to the acc and hp_cxx toolsets, which seems fairly easy. What do you think?

Markus

Markus Schöpflin wrote:
Or is there a specific reason that you don't want to set this compiler parameter for the regression tests?
No specific reason. You said that you're going to define the macro in a (cxx-specific) configuration file, and I just thought that defining it in one place for different compilers is better. Adding a feature to the Jamfile for the test in question is also a good solution.

Thanks, Boris

Gubenko, Boris wrote:
Markus Schöpflin wrote:
Or is there a specific reason that you don't want to set this compiler parameter for the regression tests?
No specific reason. You said that you're going to define the macro in a (cxx-specific) configuration file and I just thought that defining it in one place for different compilers is better. Adding a feature to "Jamfile for the test in questions" is also a good solution.
Ah, this has been a misunderstanding then. I was talking about adding <cxxflags>"-pending_instantiations 128" to my configuration file, not the macro definition. Markus

On Jul 29, 2008, at 3:41 PM, Gubenko, Boris wrote:
Markus Schöpflin wrote:
As long as it's only needed for running the regression tests, I'll just go ahead and modify the compiler parameters in my local configuration file.
Since setting CONTROL_FULL_COUNTS to zero fixes compilation error on HP-UX/aC++ also, would not it make sense to set the macro in the source -- integer_test.cpp -- for all affected compilers?
The CONTROL_FULL_COUNTS setting is there so I wouldn't go insane waiting while compile-checking any little tweaks I add, not for general use. (The compile times went from 2 to 20 minutes when I switched from sampled to comprehensive testing.) I always run the comprehensive tests before I commit a change. (I found at least one last-second error that way.)

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Markus Schöpflin wrote: [...]
1) I need to increase the maximum number of pending instantiations, it seems that the default value of the compiler (64) is not enough. This is also true for the acc toolset, AFAICT.
[...] As Boost.Build has recently gained support for specifying the maximum recursion depth in a compiler independent manner, I suggest the following patch for the integer/test Jamfile. OK to commit?

Markus

Index: Jamfile.v2
===================================================================
--- Jamfile.v2	(revision 47904)
+++ Jamfile.v2	(working copy)
@@ -7,7 +7,7 @@
 test-suite integer
     : [ run cstdint_test.cpp ]
       [ run integer_test.cpp
-            /boost/test//boost_unit_test_framework ]
+            /boost/test//boost_unit_test_framework : : : <c++-template-depth>70 ]
      [ run integer_traits_test.cpp
            /boost/test//boost_test_exec_monitor/<link>static ]
      [ run integer_mask_test.cpp

On Jul 31, 2008, at 3:58 AM, Markus Schöpflin wrote:
Markus Schöpflin wrote:
[...]
1) I need to increase the maximum number of pending instantiations, it seems that the default value of the compiler (64) is not enough. This is also true for the acc toolset, AFAICT.
[...]
As Boost.Build has recently gained support for specifying the maximum recursion depth in a compiler independent manner, I suggest the following patch for the integer/test Jamfile.
OK to commit?
Markus

Index: Jamfile.v2
===================================================================
--- Jamfile.v2	(revision 47904)
+++ Jamfile.v2	(working copy)
@@ -7,7 +7,7 @@
 test-suite integer
     : [ run cstdint_test.cpp ]
       [ run integer_test.cpp
-            /boost/test//boost_unit_test_framework ]
+            /boost/test//boost_unit_test_framework : : : <c++-template-depth>70 ]
      [ run integer_traits_test.cpp
            /boost/test//boost_test_exec_monitor/<link>static ]
      [ run integer_mask_test.cpp
What happens on systems, like mine, that already have sufficient recursive depth? Will specifying a maximum lower than the default actually lower the setting? If so, then this addition could be dangerous.

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

Daryle Walker wrote:
What happens on systems, like mine, that already have sufficient recursive depth? Will specifying a maximum lower than the default actually lower the setting? If so, then this addition could be dangerous.
If your toolset supports setting the recursion depth (gcc, qcc, acc, and hp_cxx at the moment), it will be set to the value specified. So yes, it might lower the default setting. But why should this be dangerous? The recursion depth needed to compile a program is independent of the toolset, isn't it? So if for a given compiler a value lower than the default value is used, there should be no harm. Markus

Daryle, we still should come to a decision regarding this issue. Can't we just make the change and check if there are any new test failures caused by it?

Markus Schöpflin wrote:
Daryle Walker wrote:
What happens on systems, like mine, that already have sufficient recursive depth? Will specifying a maximum lower than the default actually lower the setting? If so, then this addition could be dangerous.
If your toolset supports setting the recursion depth (gcc, qcc, acc, and hp_cxx at the moment), it will be set to the value specified. So yes, it might lower the default setting.
But why should this be dangerous? The recursion depth needed to compile a program is independent of the toolset, isn't it? So if for a given compiler a value lower than the default value is used, there should be no harm.
Markus

On Aug 4, 2008, at 3:27 AM, Markus Schöpflin wrote:
we still should come to a decision regarding this issue. Can't we just make the change and check if there are any new test failures caused by it?
Markus Schöpflin wrote:
Daryle Walker wrote:
What happens on systems, like mine, that already have sufficient recursive depth? Will specifying a maximum lower than the default actually lower the setting? If so, then this addition could be dangerous.
If your toolset supports setting the recursion depth (gcc, qcc, acc, and hp_cxx at the moment), it will be set to the value specified. So yes, it might lower the default setting. But why should this be dangerous? The recursion depth needed to compile a program is independent of the toolset, isn't it? So if for a given compiler a value lower than the default value is used, there should be no harm.
Is the depth actually independent of the toolset? Also, the number of full test cases is dependent on how large uintmax_t is; what happens when computers get bigger and/or use a different value outside of the 8/16/32/64-bit mindset? Is the problem affecting every test computer with these compilers, or just yours? If we use a default value for a parameter, it can increase as the creator updates the product; if we fix the value, then the burden of vigilance falls to us.

-- Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com

AMDG

Daryle Walker wrote:
Is the depth actually independent of the toolset? Also, the number of full test cases is dependent on how large uintmax_t is; what happens when computers get bigger and/or use a different value outside of the 8/16/32/64-bit mindset? Is the problem affecting every test computer with these compilers, or just yours? If we use a default value for a parameter, it can increase as the creator updates the product; if we fix the value, then the burden of vigilance falls to us.
If the default value is too small with any toolset, then we need to keep it updated. There is no guarantee that compiler vendors will keep their default values in sync with the requirements of Boost tests, either. Does it cause any harm to make the value larger than needed? It doesn't matter if the value is smaller than the default as long as it is big enough for the test case to pass.

In Christ,
Steven Watanabe

Daryle Walker wrote:
On Aug 4, 2008, at 3:27 AM, Markus Schöpflin wrote:
we still should come to a decision regarding this issue. Can't we just make the change and check if there are any new test failures caused by it?
Markus Schöpflin wrote:
Daryle Walker wrote:
What happens on systems, like mine, that already have sufficient recursive depth? Will specifying a maximum lower than the default actually lower the setting? If so, then this addition could be dangerous.
If your toolset supports setting the recursion depth (gcc, qcc, acc, and hp_cxx at the moment), it will be set to the value specified. So yes, it might lower the default setting. But why should this be dangerous? The recursion depth needed to compile a program is independent of the toolset, isn't it? So if for a given compiler a value lower than the default value is used, there should be no harm.
Is the depth actually independent of the toolset?
I was going to say that the recursion depth is a function of the code and not of the compiler, but I did some tests and I got surprising results:

GCC 4.2.3 on a 32-bit Linux system needs a recursion depth of 76 to compile the test successfully. (The -ftemplate-depth-NN flag was introduced with GCC 2.8.0 and defaulted to 17, which is the value required by the C++ standard, but the default has been increased since then. To which value I cannot say, as the default is undocumented. It has been hard-coded in Boost to 128 for quite a few years now.)

HP CXX 7.1 on a 64-bit Tru64 system needs a recursion depth of 66 to compile the test successfully. (The default value here is 64.)

I still believe that the recursion depth is a function of the code, and not the compiler, but there seems to be a difference in how the recursion depth is calculated.
Also, the number of full test cases is dependent on how large uintmax_t is; what happens when computers get bigger and/or use a different value outside of the 8/16/32/64-bit mindset?
Well, you would have to increase the depth, of course.
Is the problem affecting every test computer with these compilers, or just yours?
It is affecting every C++ compiler that has a limit for the maximum recursion depth. (Keep in mind that the C++ standard only asks implementations to support a template recursion depth of 17.)
If we use a default value for a parameter, it can increase as the creator updates the product; if we fix the value, then the burden of vigilance falls to us.
Well, Boost has lived with hard-coding an arbitrary value for the template recursion depth for g++ for years. A standard-conforming compiler sticking to the minimum instantiation depth of 17 wouldn't be able to compile the test at all. Therefore I think it's better to make it explicit in the Jamfile that this parameter needs tuning, if possible. I'm not set on any particular value of the parameter; if the hard-coded 128 has worked for GCC for years, why not use <c++-template-depth>128 then?

Markus

Quoting Markus Schöpflin <markus.schoepflin@comsoft.de>:
I still believe that the recursion depth is a function of the code, and not the compiler, but there seems to be a difference in how the recursion depth is calculated.
It is a function of code, but bear in mind the code might be different for different compilers. Compiler workarounds in Boost code and different implementations of the standard library are two possible reasons for different template instantiation depths.

Pete

AMDG

Peter Bartlett wrote:
Quoting Markus Schöpflin <markus.schoepflin@comsoft.de>:
I still believe that the recursion depth is a function of the code, and not the compiler, but there seems to be a difference in how the recursion depth is calculated.
It is a function of code, but bear in mind the code might be different for different compilers. Compiler workarounds in Boost code and different implementations of the the standard library are two possible reasons for different template instantiation depths.
Compilers don't have to instantiate templates in the same order even if the code is identical. This could change the maximum depth because of memoization. In Christ, Steven Watanabe
participants (8)
-
Daryle Walker
-
Gubenko, Boris
-
John Maddock
-
Markus Schöpflin
-
Markus Schöpflin
-
Peter Bartlett
-
René Bürgel
-
Steven Watanabe