Boost-wide reports: Time to join in!

Now that we have Boost-wide reports up and running [1], I'd like to encourage all regression runners whose results are not yet represented there to take a tiny bit of time to join in. The setup procedure is documented in http://tinyurl.com/4pnfd. It's very short and painless, too! It's crucial that we have an objective picture of the codebase when/before we release, and the Boost-wide reports are the only instrument able to provide us with it. So please, join in! [1] http://www.boost.org/regression-logs/developer -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy wrote:
Now that we have Boost-wide reports up and running [1], I'd like to encourage all regression runners whose results are not yet represented there to take a tiny bit of time to join in.
I'd like to add that some of the uploaded results are difficult to identify because the toolset name is too generic (e.g. "gcc" or "cw"). It'd be great if those names could be more informative, like "gcc-3.2.3-sunos5" or whatever. I may be wrong, but I think it only takes renaming the corresponding *-tools.jam file. Slightly off-topic: is there any estimate for the branching date? Any news about the new MPL? I've got the impression that the rate of fixes has decreased these days, so maybe it's about time to mark failures and pack. Joaquín M López Muñoz Telefónica, Investigación y Desarrollo

Joaquín Mª López Muñoz wrote:
I'd like to add that some of the uploaded results are difficult to identify because the toolset name is too generic (e.g. "gcc" or "cw"). It'd be great if those names could be more informative, like "gcc-3.2.3-sunos5" or whatever. I may be wrong, but I think it only takes renaming the corresponding *-tools.jam file.
This would be great, but we should find some way to achieve it without manually renaming any files.
Slightly off-topic: is there any estimate for the branching date? Any news about the new MPL? I've got the impression that the rate of fixes has decreased these days, so maybe it's about time to mark failures and pack.
I think we should put together a detailed todo list and process it step by step. At the moment it's hard to follow what will/can be fixed and which failures have been investigated at all. Stefan

Stefan Slapeta wrote:
Joaquín Mª López Muñoz wrote:
I'd like to add that some of the uploaded results are difficult to identify because the toolset name is too generic (e.g. "gcc" or "cw"). It'd be great if those names could be more informative, like "gcc-3.2.3-sunos5" or whatever. I may be wrong, but I think it only takes renaming the corresponding *-tools.jam file.
This would be great, but we should find some way to achieve it without manually renaming any files.
Just create a new toolset file, named accordingly, and place this in it: extends-toolset <your-base-toolset-here> ; -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

Joaquín Mª López Muñoz writes:
Aleksey Gurtovoy wrote:
Now that we have Boost-wide reports up and running [1], I'd like to encourage all regression runners whose results are not yet represented there to take a tiny bit of time to join in.
I'd like to add that some of the uploaded results are difficult to identify because the toolset name is too generic (e.g. "gcc" or "cw"). It'd be great if those names could be more informative, like "gcc-3.2.3-sunos5" or whatever.
It's crucial, actually, because explicit failures markup is currently based on toolset names alone, and marking a library as "unusable" with, for example, 'gcc' is *way* too broad a claim.
I may be wrong, but I think it only takes renaming the corresponding *-tools.jam file.
Or, better yet, inheriting from it as Rene has already shown. In fact, if you are running tests through 'regression.py', all it takes to automate this is to place something like the attached 'patch_boost' script ('patch_boost.bat' on Windows) in the driver scripts' directory.
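The attached 'patch_boost' script itself isn't reproduced in this archive, so here is a purely hypothetical Python sketch of what such a hook could do: drop a descriptively named toolset file, which merely extends a generic one, into the freshly unpacked tree. The target path (tools/build/v1) and the toolset names are illustrative assumptions, not taken from the actual attachment.

```python
# Hypothetical sketch of a 'patch_boost' hook; the real attached script
# is not reproduced here and may work quite differently.
# Assumption: regression.py runs this from the driver scripts' directory
# after unpacking the Boost tree.
import os

def patch_boost(boost_root, base_toolset="gcc", new_name="gcc-3.2.3-sunos5"):
    """Create a descriptively named toolset file extending a generic one."""
    toolset_dir = os.path.join(boost_root, "tools", "build", "v1")
    if not os.path.isdir(toolset_dir):
        os.makedirs(toolset_dir)
    path = os.path.join(toolset_dir, new_name + "-tools.jam")
    f = open(path, "w")
    # Inherit everything from the base toolset; only the name changes.
    f.write("extends-toolset %s ;\n" % base_toolset)
    f.close()
    return path
```

The point is only that the rename/inheritance step can run unattended before each test cycle, so no file in the tree needs manual editing.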
Slightly off-topic: is there any estimate for the branching date?
I'd like to target Monday evening, but before committing to it we *really* need to have an objective picture of the CVS state. In particular, that means:

1) Having all "supported" platforms in the Boost-wide reports.
2) Having reports with "beta" libraries and non-required toolsets *excluded*.

We hope to have the second one up and running sometime today.
Any news about the new MPL?
It's still in the works :(, more for lack of time than because of any significant problems.
I've got the impression that the rate of fixes has decreased these days,
I believe it's partly because it's hard to see the progress behind the new libraries and the compilers that nobody cares about, which are populating the field with yellow cells. It's generally discouraging to work on something and not see a visible improvement over a relatively short period of time.

Other contributing factors are long turnaround times (basically 24 hours), and the fact that many patches that could be committed instantly are submitted to the list and have to be applied by somebody with CVS access, consuming precious time on both sides (the patch submitter's and the developer's).

Note that the problem with long regression cycles is *not* that it takes too long to run the tests -- Boost-wide reports effectively solve this problem by enabling the testing to be highly distributed without losing a bit of the results' informativeness. Our average regression cycle is 24 hours because many of the regression runners cannot afford to run the tests continuously rather than once daily. I'm not sure what can be done about this besides finding more volunteers who have a machine to spare, and/or a greater number of volunteers to run the tests in an interleaved fashion (e.g. if five people who volunteer to test with gcc 3.2 on Linux can arrange to run the tests once daily but at different times, the gcc cycle will be shortened to roughly 5 hours). In either case, it's going to take time to build this up, and at the moment people who have local access to a particular compiler are in the most privileged position to fix things in an agile way.

As for the patches, I believe everybody will win if we grant a few people who have been actively contributing fixes write access -- for those who want it, of course.

But I want to re-iterate my original point -- all other factors notwithstanding, the process of fixing regressions has to be rewarding, and reports that make the progress visible and real play a significant role in it.
They also have to be representative, so, to our precious regression runners whose results are not in the Boost-wide reports yet -- please take the time to join in!
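The interleaving arithmetic above is easy to sketch: with n volunteers each running a toolset once daily at evenly staggered times, the effective cycle is roughly 24/n hours. A minimal illustration (the function names are mine, not part of any Boost tooling):

```python
# Effective regression-cycle length when n runners each test once daily,
# staggered evenly across the day (five gcc runners -> ~4.8h cycle).
def effective_cycle_hours(daily_runners):
    if daily_runners < 1:
        raise ValueError("need at least one runner")
    return 24.0 / daily_runners

def stagger_times(daily_runners):
    """Suggested start times (hours past midnight) for evenly spaced runs."""
    step = 24.0 / daily_runners
    return [round(i * step, 1) for i in range(daily_runners)]
```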
so maybe it's about time to mark failures and pack.
Regressions (red cells) aside, that's basically what is going to be done with the failures that are not resolved by the branch-for-release date. I hope by that time most of them will already be marked up and commented, though. -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy wrote: [...]
Any news about the new MPL?
It's still in the works :(, more for lack of time than because of any significant problems.
Oh, I know "amazing" is your middle name, but are you so confident that you won't introduce any regressions 4 days short of the branch? Well, let's hope for the best, if you finally decide to check in.

**Boost.MultiIndex current status**

On another topic, I'm leaving for vacation next Saturday, so I won't be able to participate during the final stage of the release process. I'll be watching the game from some sloppy Internet connection, but won't have CVS write access :( This means that somebody will have to take care of Boost.MultiIndex (any volunteer?). I hope it'll pose little problem, since IMHO the lib is in a pretty good state now. For your convenience, here's a summary of the current status:

* Inspection reports pass OK.
* License reports pass OK, except for two files: a .css and a redirecting index.html do not have (c) info. I think nobody else copyrights these dumb files, but in case (c) is required please feel free to include whatever legalese you please.
* I've checked in my bio and links to the multi_index docs from the main page and the libraries section.
* The lib is known to work with the following:
- Comeau 4.3.3 (VC++ 7 backend)
- GCC 3.2 and later (in principle on any platform)
- Visual Age 6.0 AIX
- Intel 7.1/8.0 Windows/Linux (+STLport)
- CW 8.3/9.2 Mac/Windows
- MSVC++ 6.5/7.0/7.1 (+STLport)
Any future failure in one of these toolsets should be regarded as a regression from its current status, probably due to changes in the libs multi_index depends on.
* The lib is known *not* to work with GCC 2.95.x, BCB 6.4 and Comeau 4.3.3 with the VC++ 6.0 backend. I'll have these toolsets marked as unusable before I leave.
* I've committed a last-minute workaround for MSVC++ 8.0 and am desperately waiting for the next RudbekAssociates test turnaround. If it comes in before I leave I'll act according to the results (hopefully positive.)
* For any other toolset, I make no claim wrt support or lack thereof.

I hope I haven't missed any important point.
Good luck with the release, may the force be with you etc. I'd like to thank everybody for your support during this fascinating experience of contributing a lib to Boost. Joaquín M López Muñoz Telefónica, Investigación y Desarrollo

Aleksey Gurtovoy wrote:
I believe it's partly because it's hard to see the progress behind the new libraries and the compilers that nobody cares about, which are populating the field with yellow cells. It's generally discouraging to work on something and not see a visible improvement over a relatively short period of time.
My impression was that there has been good progress during the last days. Some compilers already look very good, which is a very different situation from a week ago. (E.g. Intel 8 is nearing 100%; all the remaining defects are reported as compiler defects!)
Other contributing factors are long turnaround times (basically 24 hours), and the fact that many patches that could be committed instantly are submitted to the list and have to be applied by somebody with CVS access, consuming precious time on both sides (the patch submitter's and the developer's).
I can only second that; it's sometimes very annoying when you can't find anybody to commit a fix! And of course it's a waste of time.
Note that the problem with long regression cycles is *not* that it takes too long to run the tests -- Boost-wide reports effectively solve this problem by enabling the testing to be highly distributed without losing a bit of the results' informativeness. Our average regression cycle is 24 hours because many of the regression runners cannot afford to run the tests continuously rather than once daily.
[...]
My tests (Intel 8 and CW 9) now run twice a day. Although I thought this was a reasonable interval, it wouldn't be any problem for me to activate more machines and run tests more frequently. I could also add the VC 7.1 toolset (and perhaps VC 8).
As for the patches, I believe everybody will win if we grant a few people who have been actively contributing fixes write access -- for those who want it, of course.
If my help is needed, I want :) Stefan

Hi Stefan, Stefan Slapeta wrote: [...]
My tests (Intel 8 and CW 9) now run twice a day. Although I thought this was a reasonable interval, it wouldn't be any problem for me to activate more machines and run tests more frequently. I could also add the VC 7.1 toolset (and perhaps VC 8).
I'd be extremely grateful if you could run MSVC 8.0 tests to check a fix I just committed, even if you don't upload the results to the Boost-wide reports page (of course, if you do that too, so much the better.) Thanks Joaquín M López Muñoz Telefónica, Investigación y Desarrollo

At Thursday 2004-07-29 09:24, you wrote:
Hi Stefan
Stefan Slapeta wrote: [...]
My tests (Intel 8 and CW 9) now run twice a day. Although I thought this was a reasonable interval, it wouldn't be any problem for me to activate more machines and run tests more frequently. I could also add the VC 7.1 toolset (and perhaps VC 8).
I'd be extremely grateful if you could run MSVC 8.0 tests to check a fix I just committed, even if you don't upload the results to the Boost-wide reports page (of course, if you do that too, so much the better.)
I just started a rerun of my testing
Thanks
Joaquín M López Muñoz Telefónica, Investigación y Desarrollo
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Victor A. Wagner Jr.
I'd be extremely grateful if you could run MSVC 8.0 tests to check a fix I just committed, even if you don't upload the results to the Boost-wide reports page (of course, if you do that too, so much the better.)
I just started a rerun of my testing
Thanks, it wouldn't have been possible for me today anyway. BTW, is there now a working VC 8 configuration in CVS? Stefan

At Thursday 2004-07-29 13:45, you wrote:
-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Victor A. Wagner Jr.
I'd be extremely grateful if you could run MSVC 8.0 tests to check a fix I just committed, even if you don't upload the results to the Boost-wide reports page (of course, if you do that too, so much the better.)
I just started a rerun of my testing
Thanks, it wouldn't have been possible for me today anyway.
BTW, is there now a working VC 8 configuration in CVS?
seems to be, though I had to change some stuff so that it could find my copy (the paths were wrong)
Stefan
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

Aleksey Gurtovoy wrote:
Now that we have Boost-wide reports up and running [1], I'd like to encourage all regression runners whose results are not yet represented there to take a tiny bit of time to join in.
The setup procedure is documented in http://tinyurl.com/4pnfd. It's very short and painless, too!
I'm trying to run the tests on IBM but it looks to me like you have hardcoded the gcc toolset. The script tries to compile with gcc although I provide the arg --toolsets=vacpp, so finally I changed the hardcoded 'gcc' at the beginning of the script to 'vacpp'. BTW, it would be nice if the script could also be instructed to reuse the boost directory that is already present. I run regression tests every day on 3 platforms, so I download the sources once, copy them to all machines and launch the scripts. Downloading them for every platform would needlessly consume bandwidth. toon

Toon Knapen wrote:
Aleksey Gurtovoy wrote:
I'm trying to run the tests on IBM but it looks to me like you have hardcoded the gcc toolset. The script tries to compile with gcc although I provide the arg --toolsets=vacpp, so finally I changed the hardcoded 'gcc' at the beginning of the script to 'vacpp'.
I don't think it's hard-coded; I think that's just a default. The problem, I think, is that the code that sets the TOOLS variable still has Windows-isms, rather than using "bjam -sTOOLS=vacpp ..." Christopher -- Christopher Currie <codemonkey@gmail.com>

Christopher Currie writes:
Toon Knapen wrote:
Aleksey Gurtovoy wrote: I'm trying to run the tests on IBM but it looks to me like you have hardcoded the gcc toolset. The script tries to compile with gcc although I provide the arg --toolsets=vacpp, so finally I changed the hardcoded 'gcc' at the beginning of the script to 'vacpp'.
I don't think it's hard-coded; I think that's just a default. The problem I think is that the code that sets the TOOLS variable still has Windows-isms, rather than using "bjam -sTOOLS=vacpp ..."
Fixed in the CVS and the attached revision. -- Aleksey Gurtovoy MetaCommunications Engineering

Toon Knapen writes:
Aleksey Gurtovoy wrote:
Now that we have Boost-wide reports up and running [1], I'd like to encourage all regression runners whose results are not yet represented there to take a tiny bit of time to join in. The setup procedure is documented in http://tinyurl.com/4pnfd. It's very short and painless, too!
I'm trying to run the tests on IBM but it looks to me like you have hardcoded the gcc toolset.
*Bootstrap* toolsets (the ones used to build the 'bjam' and 'process_jam_log' executables) were hard-coded, for no good reason. The attached revision of the script allows you to specify both of these separately, or omit them altogether:

Options:
...
--bjam-toolset   bootstrap toolset for 'bjam' executable (optional)
--pjl-toolset    bootstrap toolset for 'process_jam_log' executable (optional)

If you omit them and there are no pre-built executables in the script's directory, then the first toolset extracted from the '--toolsets' option will be used; and if the latter is not present either, the script will fall back to some platform-dependent default.
The script tries to compile with gcc although I provide the arg --toolsets=vacpp, so finally I changed the hardcoded 'gcc' at the beginning of the script to 'vacpp'.
Now a simple

python regression.py --toolsets=vacpp --runner=<your runner id>

should work, given that you want to rebuild 'bjam'/'process_jam_log' on every cycle (if you don't, simply place the binaries in the script directory, as per the docs).
BTW, it would be nice if the script could also be instructed to reuse the boost directory that is already present. I run regression tests every day on 3 platforms, so I download the sources once, copy them to all machines and launch the scripts. Downloading them for every platform would needlessly consume bandwidth.
Understood. You can do it by invoking 'regression.py' with the following sequence of commands:

python regression.py cleanup bin
python regression.py setup
python regression.py test --toolsets=<your toolsets>
python regression.py collect-logs --runner=<your runner id>
python regression.py upload-logs --runner=<your runner id>

-- Aleksey Gurtovoy MetaCommunications Engineering
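For what it's worth, the step-by-step sequence above is easy to wrap in a small driver script, e.g. for cron. This is only a sketch assuming 'regression.py' sits in the current directory and accepts exactly the sub-commands listed in the message; the helper names are mine.

```python
# Sketch of driving the regression.py sub-commands listed above, in order.
import subprocess

def build_steps(toolsets, runner_id):
    """The regression.py sub-commands from the message, in order."""
    return [
        ["cleanup", "bin"],
        ["setup"],
        ["test", "--toolsets=%s" % toolsets],
        ["collect-logs", "--runner=%s" % runner_id],
        ["upload-logs", "--runner=%s" % runner_id],
    ]

def run_all(toolsets, runner_id, python="python"):
    # Stop at the first failing step; check_call raises on non-zero exit.
    for step in build_steps(toolsets, runner_id):
        subprocess.check_call([python, "regression.py"] + step)
```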

Aleksey Gurtovoy wrote:
Toon Knapen writes:
*Bootstrap* toolsets (the ones used to build the 'bjam' and 'process_jam_log' executables) were hard-coded, for no good reason. The attached revision of the script allows you to specify both of these separately, or omit them altogether:
Options:
...
--bjam-toolset   bootstrap toolset for 'bjam' executable (optional)
--pjl-toolset    bootstrap toolset for 'process_jam_log' executable (optional)
If you omit them and there are no pre-built executables in the script's directory, then the first toolset extracted from the '--toolsets' option will be used; and if the latter is not present either, the script will fall back to some platform-dependent default.
Thanks, this works fine now.
Understood. You can do it by invoking 'regression.py' with the following sequence of commands:
python regression.py cleanup bin
python regression.py setup
python regression.py test --toolsets=<your toolsets>
python regression.py collect-logs --runner=<your runner id>
python regression.py upload-logs --runner=<your runner id>
Now my problem is that the Python on my IBM/AIX machine does not support zipping. But I can probably zip and upload the necessary file myself using standard tools. I tried to decipher what should be done from the Python script and saw the 'from runner import upload_logs', but there is no 'runner' module, is there? I'm not very good at Python, so any hint would be welcome. toon

Toon Knapen writes:
Aleksey Gurtovoy wrote:
Understood. You can do it by invoking 'regression.py' with the following sequence of commands:

python regression.py cleanup bin
python regression.py setup
python regression.py test --toolsets=<your toolsets>
python regression.py collect-logs --runner=<your runner id>
python regression.py upload-logs --runner=<your runner id>
Now my problem is that the Python on my IBM/AIX machine does not support zipping.
Oh, that's a pity.
But I can probably zip and upload the necessary file myself using standard tools.
Sure, please see below.
I tried to decipher what should be done from the Python script and saw the 'from runner import upload_logs', but there is no 'runner' module, is there?
That statement imports 'upload_logs' function from "boost/tools/regression/xsl_reports/runner/collect_and_upload_logs.py" (http://cvs.sourceforge.net/viewcvs.py/boost/boost/tools/regression/xsl_repor...).
I'm not very good at Python, so any hint would be welcome.
Basically, all you need to do after the "collect-logs" step is to zip the resulting XML file ("<your runner id>.xml") located in the "./results" directory and upload it to ftp://fx.meta-comm.com/boost-regression/CVS-HEAD/. -- Aleksey Gurtovoy MetaCommunications Engineering
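A rough sketch of doing that step by hand in Python, for anyone in the same situation. The results directory, file naming and FTP location follow the message above; the anonymous-login assumption and the function names are mine.

```python
# Sketch: zip the collected XML and upload it, as described above.
import os, zipfile, ftplib

def zip_results(runner_id, results_dir="results"):
    """Zip '<runner id>.xml' from the results directory; return the zip path."""
    xml_path = os.path.join(results_dir, runner_id + ".xml")
    zip_path = xml_path + ".zip"
    archive = zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED)
    archive.write(xml_path, runner_id + ".xml")  # store under the bare name
    archive.close()
    return zip_path

def upload_results(zip_path, host="fx.meta-comm.com",
                   directory="boost-regression/CVS-HEAD"):
    """Upload the zip over FTP; anonymous login is an assumption here."""
    ftp = ftplib.FTP(host)
    ftp.login()
    ftp.cwd(directory)
    f = open(zip_path, "rb")
    ftp.storbinary("STOR " + os.path.basename(zip_path), f)
    f.close()
    ftp.quit()
```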

Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
Now my problem is that the python on my IBM/Aix machine does not support zipping.
Oh, that's a pity.
Seems to me that you should be able to fix the code so that if the zip library isn't available, it writes the material to a file and zips it using os.system('zip ... ') -- Dave Abrahams Boost Consulting http://www.boost-consulting.com

David Abrahams writes:
Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
Now my problem is that the python on my IBM/Aix machine does not support zipping.
Oh, that's a pity.
Seems to me that you should be able to fix the code so that if the zip library isn't available,
While we're on it, Toon, could you post the exact diagnostics you were getting?
it writes the material to a file and zips it using os.system('zip ... ')
Sure, only there is no commonly named 'zip' tool available on all platforms, is there? -- Aleksey Gurtovoy MetaCommunications Engineering
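A hedged sketch of Dave's suggestion, with Aleksey's caveat in mind: try the 'zipfile' module first, and only fall back to an external 'zip' tool, whose name and availability are platform-dependent (so the fallback may simply fail on some systems).

```python
# Sketch: prefer the zipfile module; shell out to 'zip' only when the
# module is unavailable, per the discussion above.
import os

def zip_file(src_path, zip_path):
    try:
        import zipfile
    except ImportError:
        # Fallback: hope a command-line 'zip' tool exists on this platform.
        rc = os.system('zip -j "%s" "%s"' % (zip_path, src_path))
        if rc != 0:
            raise RuntimeError("no zipfile module and 'zip' tool failed")
        return zip_path
    archive = zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED)
    archive.write(src_path, os.path.basename(src_path))
    archive.close()
    return zip_path
```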

Aleksey Gurtovoy wrote:
David Abrahams writes:
Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
Now my problem is that the python on my IBM/Aix machine does not support zipping.
Oh, that's a pity.
Seems to me that you should be able to fix the code so that if the zip library isn't available,
While we're on it, Toon, could you post the exact diagnostics you were getting?
The attachment regression.py.out contains the diagnostics. I also (tried to) patch regression.py so that it accepts a directory from which the boost installation can be copied (instead of downloading it). So now I define a proxy which is actually a path; if the proxy does not contain "http", the script will copytree the boost tree instead of downloading it. It is not a high-quality patch, and my Python still needs a lot of improving, but it does what I currently need.

On 7/28/04 8:06 AM, "Aleksey Gurtovoy" <agurtovoy@meta-comm.com> wrote:
Now that we have Boost-wide reports up and running [1], I'd like to encourage all regression runners whose results are not yet represented there to take a tiny bit of time to join in. [SNIP] [1] http://www.boost.org/regression-logs/developer
Does anyone know how to fix the problems with ios_state_unit_test given at <http://tinyurl.com/7yjj2> [2]? I want it to replace the older test already there, but I can't while there are problems. And all the "darwin" entries seem blank. Why?

[2] <http://www.meta-comm.com/engineering/boost-regression/developer/io.html> in long form

-- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

Aleksey Gurtovoy wrote: [...]

* I have Comeau 4.3.3 and two backends: MSVC 6 and MSVC 7.1. Which one should I use to run regression tests?
* I'd like to test MinGW 3.4.1 (release candidate) with Boost. Should I use the MINGW toolset?

B.

Bronek Kozicki writes:
Aleksey Gurtovoy wrote: [...]
* I have Comeau 4.3.3 and two backends: MSVC 6 and MSVC 7.1. Which one should I use to run regression tests?
MSVC 7.1, using "como-win32-4.3.3-vc7.1" toolset (just checked in).
* I'd like to test MINGW 3.4.1 (release candidate) with Boost. Should I use toolset MINGW?
Preferably "mingw-3.4.1" (again, just checked in). HTH, -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy wrote:
MSVC 7.1, using "como-win32-4.3.3-vc7.1" toolset (just checked in).
Great, running now (together with vc7.1 and "regular" mingw 3.3.1)
* I'd like to test MINGW 3.4.1 (release candidate) with Boost. Should I use toolset MINGW?

Preferably "mingw-3.4.1" (again, just checked in).
Will run it separately - I cannot run it together with the "regular" mingw. B.

Aleksey Gurtovoy wrote:
* I'd like to test MINGW 3.4.1 (release candidate) with Boost. Should I use toolset MINGW?
Preferably "mingw-3.4.1" (again, just checked in).
Hm. Something is wrong:

# Getting sources (Mon, 02 Aug 2004 06:28:52 +0000)...
# Downloading 'C:\MinGW341\Bronek\boost\boost.tar.bz2' for tag CVS-HEAD from www.boost-consulting.com...
# Looking for old unpacked archives...
# Unpacking boost tarball ("C:\MinGW341\Bronek\boost\boost.tar.bz2")...
# Unpacked into directory "C:\MinGW341\Bronek\boost\boost-04-08-01-2300"

C:\MinGW341\Bronek\boost\boost\tools\build\jam_src>build.bat mingw-3.4.1
###
### "Unknown toolset: mingw-3.4.1"
###
### You can specify the toolset as the argument, i.e.:
###     .\build.bat msvc
###
### Toolsets supported by this script are: borland, como, gcc, gcc-nocygwin, intel-win32, metrowerks, mingw, msvc, vc7
###

B.

Bronek Kozicki writes:
Aleksey Gurtovoy wrote:
* I'd like to test MINGW 3.4.1 (release candidate) with Boost. Should I use toolset MINGW?
Preferably "mingw-3.4.1" (again, just checked in).
Hm. Something is wrong :
# Getting sources (Mon, 02 Aug 2004 06:28:52 +0000)... # Downloading 'C:\MinGW341\Bronek\boost\boost.tar.bz2' for tag CVS-HEAD from www.boost-consulting.com... # Looking for old unpacked archives... # Unpacking boost tarball ("C:\MinGW341\Bronek\boost\boost.tar.bz2")... # Unpacked into directory "C:\MinGW341\Bronek\boost\boost-04-08-01-2300"
C:\MinGW341\Bronek\boost\boost\tools\build\jam_src>build.bat mingw-3.4.1
###
### "Unknown toolset: mingw-3.4.1"
###
### You can specify the toolset as the argument, i.e.:
###     .\build.bat msvc
###
### Toolsets supported by this script are: borland, como, gcc, gcc-nocygwin, intel-win32, metrowerks, mingw, msvc, vc7
###
Oh, sorry, I should have mentioned that due to recent changes, if you don't have a prebuilt bjam executable in the 'regression.py' directory and you are using a numbered toolset, you need to specify the bjam bootstrap toolset separately (since the bootstrap only supports a few), using the '--bjam-toolset' option:

python regression.py --runner=<your runner id> --toolsets=mingw-3.4.1 --bjam-toolset=vc7

I guess we could detect this situation and fall back to the default bjam toolset automatically. We'll look into this. -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy wrote:
Oh, sorry, I should have mentioned that due to recent changes, if you don't have prebuilt bjam executable in the 'regression.py' directory, and you are using a numbered toolset, then you need to specify the bjam
OK, running now. Concurrently with regression tests of other toolsets, that's pretty heavy crunching for my CPU :>> B.

oh yeah, I got one of those also... it doesn't like vc7.1 either (or didn't) At Sunday 2004-08-01 23:33, you wrote:
Aleksey Gurtovoy wrote:
* I'd like to test MINGW 3.4.1 (release candidate) with Boost. Should I use toolset MINGW?
Preferably "mingw-3.4.1" (again, just checked in).
Hm. Something is wrong :
# Getting sources (Mon, 02 Aug 2004 06:28:52 +0000)... # Downloading 'C:\MinGW341\Bronek\boost\boost.tar.bz2' for tag CVS-HEAD from www.boost-consulting.com... # Looking for old unpacked archives... # Unpacking boost tarball ("C:\MinGW341\Bronek\boost\boost.tar.bz2")... # Unpacked into directory "C:\MinGW341\Bronek\boost\boost-04-08-01-2300"
C:\MinGW341\Bronek\boost\boost\tools\build\jam_src>build.bat mingw-3.4.1
###
### "Unknown toolset: mingw-3.4.1"
###
### You can specify the toolset as the argument, i.e.:
###     .\build.bat msvc
###
### Toolsets supported by this script are: borland, como, gcc, gcc-nocygwin, intel-win32, metrowerks, mingw, msvc, vc7
###
B.
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"
participants (11)
-
Aleksey Gurtovoy
-
Bronek Kozicki
-
Christopher Currie
-
Daryle Walker
-
David Abrahams
-
Joaquín Mª López Muñoz
-
Rene Rivera
-
Stefan Slapeta
-
Stefan Slapeta
-
Toon Knapen
-
Victor A. Wagner Jr.