[log] MSVC 9 build hangs
Hi,

Many test results for Boost.Log on teeks99-03b-win2008-64on64 [1] show that the build aborts due to a timeout. From the test setup description (2 GiB of RAM, 2 CPUs, 4 build jobs) it seems that the problem could be insufficient system resources. Four jobs building Boost.Log can easily exceed 2 GiB of RAM, especially considering that some of the memory is reserved by the OS and other programs. If there is no swap, the build may simply be aborting, and if there is, it is probably running dead slow. Two cores might also be too limiting.

Could the tester take a look at what is happening on that setup? If it's not a resource problem, then I would appreciate any information about it.

[1] http://www.boost.org/development/tests/trunk/developer/log.html
It has 6 GB of RAM, not 2; I need to update the info page on it. As to the 4 jobs, are you referring to the -j4 option?

Tom
Hmm, then lack of memory should not be an issue. Is the CPU count accurate? Also, is the timeout applied to the compilation of the whole library, to some particular stage, or to all of Boost?
> As to the 4 jobs, are you referring to the -j4 option?
Yes.
I can see the tests are green now. Thanks for fixing it. Could you tell me what the problem was?
I didn't do anything.
So this is not a stable problem. Too bad.

Now the msvc-11.0 tests have failed due to a timeout. Could you take a look at why that is happening?
I've got it at a spot where I can run stuff manually now, but I'm not sure how to manually kick off that test. Can you tell me what command would re-create the test manually?
I think you should be able to run the tests by simply invoking b2 in the test directory, i.e.:

cd libs\log\test
b2 --toolset=msvc-11.0 -j4 address-model=64

if the command line on the test stand info page [1] is accurate.

This command won't check for the timeout though, and I'm not sure how to measure time from the command line on Windows.

[1] http://www.boost.org/development/tests/trunk/teeks99-03d-win2008-64on64.html
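For a rough wall-clock measurement from a plain Windows command prompt, printing %time% before and after the run should be enough (just a sketch; PowerShell's Measure-Command would give the same information more conveniently):

rem crude timing: note the two timestamps and subtract
cd libs\log\test
echo %time%
b2 --toolset=msvc-11.0 -j4 address-model=64
echo %time%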
Any news on this?
I thought it got better, have you seen it again?
Yes, the problem still persists. Latest trunk tests for msvc-8.0 failed and release tests for msvc-8.0 and msvc-10.0 failed as well with the same diagnostics.
Ok, so I just looked at this some more and discovered something that I had previously overlooked.

It isn't the tests that are timing out, it is the *linker*. I'm assuming this is due to the ridiculously large library sizes for Log on Windows. If we can't substantially trim down the size of this library, then we should probably look for a way to increase the test timeout value (hopefully just for this linker step and nothing else).

Tom
I did some refactoring to reduce library sizes several weeks ago. The static library sizes went down by ~50%, AFAIR. Ironically, I don't remember seeing such test failures before the refactoring. I'm not sure I will be able to shrink it any further without disabling portions of the library or building some customized version just for the tests, and I'd like to avoid that, since the tests would then run against a non-default build, which is not what is shipped to users.

I have tried building the library and tests on my local machine many times and I've never seen linking take 5 minutes, although I have a different setup. Did you notice any other abnormal behavior, such as memory growth, swapping, or compiler/linker crashes? Does linking ever complete?
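If it helps, one crude way to watch the linker while the tests run, and to compare the sizes of the binaries it produces, would be something like the following from another command prompt (stock Windows tools only; the path below assumes b2's default bin.v2 build directory under the Boost root):

rem show any running MSVC linker instances and their memory usage
tasklist /fi "imagename eq link.exe"

rem list the Boost.Log binaries b2 has produced, with their sizes
dir /s bin.v2\libs\log\build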
> Did you notice any other abnormal behavior, such as memory growth, swapping, or compiler/linker crashes? Does linking ever complete?
It doesn't seem to always be failing, and I don't have a good way to watch it when the automated tests are run. Whenever I have manually built it on this machine the linking never failed.
Tom, the tests are continuing to fail and I don't know what can be done on Boost.Log side. Could you try to increase the timeout (to 10 minutes, for example)?
I'm not sure how; I think it needs to be added to the regression scripts or test modules.
I think you just need to add the --timeout=10 option to your run.py command line.
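For example, keeping the rest of the options as they already appear on the tester info page, the command line would look roughly like this (a sketch; use whatever runner name and bjam options you run now):

rem --timeout is presumably in minutes, matching the 10 minutes suggested above
python run.py --runner=teeks99-03b-win2008-64on64 --force-update --toolsets=msvc-11.0 --timeout=10 --bjam-options="-j4 address-model=64" --comment=..\info.html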
BTW, the same needs to be done for release branch testing.
I've added this to all the builds running on teeks99-03, so if this works we should see it start getting better over the next couple days.
So, apparently it is something other than the timeout (or the timeout is longer than 10 minutes), because they are still failing with --timeout=10: http://www.boost.org/development/tests/trunk/developer/output/teeks99-03d-wi...
Hang on, the timeout flag didn't show up in the run.py command line at the bottom. I just tried moving it further forward in the options; we'll see if it shows up now.
I found that a recent change in Boost.Concept causes a compilation error, which in turn makes several libraries fail to build, including Log, Graph, etc. I'm not sure whether it is what is being discussed here.

A small piece of code to demonstrate it (the two #include lines shown are assumed; the originals were truncated in the archive):

// Begin test_code.cpp
#include <boost/concept_check.hpp>              // assumed header
#include <boost/concept/detail/concept_def.hpp> // assumed header

namespace boost {

BOOST_concept(ReadableIterator, (Iterator))
    : boost::Assignable<Iterator>
    , boost::CopyConstructible<Iterator>
{
};

} // namespace boost

int main(int argc, char* argv[])
{
    return 0;
}
// End test_code.cpp

Supposing the Boost trunk is in c:\boost, open a VS2012/VS2013 RC command line and run:

cl /EHsc -I c:\boost test_code.cpp

You will get:

test_code.cpp(7) : error C2065: 'ReadableIterator' : undeclared identifier

A similar error happens while compiling Log, Graph, etc.

On Thu, Oct 3, 2013 at 6:11 AM, cg wrote:
> I found that a recent change in Boost.Concept causes a compilation error, which in turn makes several libraries fail to build, including Log, Graph, etc.
I'm not seeing this error in testing reports (the latest MSVC-11 builds are 86133 on Goonland and 86125 on teeks99). The problem we're facing is build timeouts, with no other errors. I'll try to build the latest trunk this evening, but my guess is that your problem is not related to timeouts.
Tried to build 86146 locally, I'm not seeing your error.
So are you sure the flag is just --timeout=10? I have that in my script, but it doesn't seem to be picked up. That is, when the command is put at the bottom of the HTML page for the test runner, it shows the command correctly, but without that part of it. Is it possible that it is getting thrown out by run.py somehow?

http://www.boost.org/development/tests/trunk/teeks99-03a-win2008-64on64.html

The actual command I'm running:

python run.py --runner=teeks99-03a-win2008-64on64 --force-update --toolsets=msvc-8.0 --bjam-options="-j2 address-model=64" --timeout=10 --comment=..\info.html --timeout=10 2>&1 | ..\wtee output.log

The one that shows up at the above link:

run.py --runner=teeks99-03a-win2008-64on64 --force-update --toolsets=msvc-8.0 \ "--bjam-options=\"-j4 address-model=64\"" --comment=..\\info.html

Any ideas what is going on here?

Tom
I don't know, I've never set up a testing machine for Boost before. The parameter does seem to be present for some other testers, for instance: http://www.boost.org/development/tests/trunk/VC8%20jc-bell-com.html
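One quick sanity check (just a guess, assuming the regression tools are the usual plain-Python scripts sitting next to run.py) would be to search them for the option and see whether the copy you have actually knows about --timeout and forwards it:

rem search the downloaded regression scripts for the timeout option
findstr /s /i /n "timeout" *.py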
Tom, is it possible that multiple builds are run simultaneously on the same machine, and that because each build tries to utilize all CPUs the whole system becomes over-utilized? Just an idea of a possible cause of such long builds.

Also, I committed a change yesterday which may save a few seconds of build time (at least it saved that much on my setup).
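If overlapping runs are a possibility, something as simple as this, run from another command prompt while the tests are going, would show how many compiler processes are active at once (stock Windows tools; nothing specific to the regression setup):

rem count the cl.exe instances currently running
tasklist /fi "imagename eq cl.exe" | find /c "cl.exe"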
participants (3):
- Andrey Semashev
- cg
- Tom Kent