[math][pool][random][rational]: (somewhat) different implementations of the same algorithm.

1. The Euclid algorithm for the greatest common divisor (gcd) is currently implemented by at least four different Boost libraries. Is it a good idea to edit the libraries so that the redundant code is replaced by an #include?

2. I have a few implementations of different variants of the Stein algorithm for gcd. As tested using gcc 3.4.2 / MinGW with the -O3 optimization flag under Windows XP on an AMD Athlon 64 3500+, they are generally faster than the Euclid code in Boost; one of them is about 90% faster. Is it a good idea to put this code in Boost?
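For context, a minimal illustrative sketch of the binary (Stein) gcd algorithm being referred to; this is my own sketch, not the poster's code or Boost's implementation:

    #include <algorithm>

    // Binary (Stein) gcd: replaces the Euclid algorithm's division with
    // shifts and subtractions, which are cheap on most hardware.
    unsigned gcd_binary(unsigned u, unsigned v)
    {
        if (u == 0) return v;
        if (v == 0) return u;

        // Factor out the powers of two common to u and v.
        int shift = 0;
        while (((u | v) & 1) == 0) { u >>= 1; v >>= 1; ++shift; }

        while ((u & 1) == 0) u >>= 1;       // make u odd
        do {
            while ((v & 1) == 0) v >>= 1;   // make v odd
            if (u > v) std::swap(u, v);
            v -= u;                         // v becomes even, u stays odd
        } while (v != 0);

        return u << shift;                  // restore the common factor
    }

The speed advantage reported above is generally attributed to avoiding the modulo operation in the Euclid algorithm's inner loop, since integer division is comparatively expensive on processors like the Athlon 64.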

I'm trying asio on Windows XP SP2 + VS2003, and I found that there are always some connections established from 127.0.0.1:xxxx to 127.0.0.1:yyyy. Is it a bug?

I'm trying asio on Windows XP SP2 + VS2003, and I found that there are always some connections established from 127.0.0.1:xxxx to 127.0.0.1:yyyy. Is it a bug?
No, it's not a bug. There is a single socket used by the implementation's select_reactor. The socket is needed to allow a blocking select() call to be interrupted. The select_reactor is used on Windows to implement async_connect as well as the scheduling of deadline_timer wait operations. It's on my to-do list to implement a true asynchronous connect using ConnectEx (on Windows XP, Windows Server 2003 and later). When I find some time to do this, I will be able to eliminate this use of the select_reactor and the associated socket. Cheers, Chris
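A hedged sketch of the technique described here: a connected loopback pair lets another thread wake a blocking select() call. This is my own illustration (names and details are mine, not asio's actual select_interrupter code), and it is this kind of pair that shows up in netstat as 127.0.0.1:xxxx to 127.0.0.1:yyyy:

    // WSAStartup and all error checking are omitted for brevity.
    #include <winsock2.h>

    // Build a connected loopback socket pair.
    void make_loopback_pair(SOCKET& read_end, SOCKET& write_end)
    {
        SOCKET acceptor = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = 0;                          // any free port
        bind(acceptor, (sockaddr*)&addr, sizeof(addr));
        listen(acceptor, 1);

        int len = sizeof(addr);
        getsockname(acceptor, (sockaddr*)&addr, &len);

        write_end = socket(AF_INET, SOCK_STREAM, 0);
        connect(write_end, (sockaddr*)&addr, sizeof(addr));
        read_end = accept(acceptor, 0, 0);
        closesocket(acceptor);
    }

    // The reactor adds read_end to the fd_set it passes to select();
    // any other thread can then wake the blocked select() call:
    void interrupt_select(SOCKET write_end)
    {
        send(write_end, "x", 1, 0);
    }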

Christopher Kohlhoff wrote:
It's on my to-do list to implement a true asynchronous connect using ConnectEx (on Windows XP, Windows Server 2003 and later). When I find some time to do this, I will be able to eliminate this use of the select_reactor and the associated socket.
Probably a stupid question, but why not use WSAAsyncSelect/WSAEventSelect?

Hi Peter, Peter Dimov <pdimov@mmltd.net> wrote:
Christopher Kohlhoff wrote:
It's on my to-do list to implement a true asynchronous connect using ConnectEx (on Windows XP, Windows Server 2003 and later). When I find some time to do this, I will be able to eliminate this use of the select_reactor and the associated socket.
Probably a stupid question, but why not use WSAAsyncSelect/WSAEventSelect?
The WSAAsyncSelect function is part of the old Windows 3.1 socket interface and uses window messages for notification. I don't really want to go there :) I could use WSAEventSelect in conjunction with WaitForMultipleObjects. It does have a 64-handle limit, whereas select() doesn't (FD_SETSIZE defaults to 64 but can be redefined), but that may not be a problem in practice. However, the main reason I'm using the select_reactor on Windows over a WFMO reactor is that it reuses code. And since ConnectEx is available on Windows XP, 2003, and all new versions going forward, there doesn't seem to be much reason to develop a WFMO reactor at this time. Cheers, Chris
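As an aside, redefining FD_SETSIZE works as described only if it happens before <winsock2.h> is pulled in, since the macro controls the size of the socket array inside Winsock's fd_set:

    // FD_SETSIZE controls the size of the socket array inside Winsock's
    // fd_set, so it must be defined before <winsock2.h> is included.
    #define FD_SETSIZE 1024
    #include <winsock2.h>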

On 5/8/06, Peter Dimov <pdimov@mmltd.net> wrote:
Christopher Kohlhoff wrote:
It's on my to-do list to implement a true asynchronous connect using ConnectEx (on Windows XP, Windows Server 2003 and later).
Just wondering, what part of a normal non-blocking connect isn't truly async?

Hi Olaf, Olaf van der Spek <olafvdspek@gmail.com> wrote:
Just wondering, what part of a normal non-blocking connect isn't truly async?
The fact that it requires a thread waiting on a select() or WaitForMultipleObjects() call to know when the connect operation has completed. Cheers, Chris
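To make the dependency concrete, here is a rough sketch (my own, with error handling omitted) of what a select()-based connect looks like on Windows; the select() call is the blocking step that currently needs its own background thread:

    #include <winsock2.h>

    // Illustrative only; error handling and WSAStartup are omitted.
    bool connect_and_wait(SOCKET s, const sockaddr_in& peer)
    {
        // Put the socket into non-blocking mode.
        u_long nonblocking = 1;
        ioctlsocket(s, FIONBIO, &nonblocking);

        // connect() returns immediately with WSAEWOULDBLOCK...
        connect(s, (const sockaddr*)&peer, sizeof(peer));

        fd_set write_fds, except_fds;
        FD_ZERO(&write_fds);
        FD_ZERO(&except_fds);
        FD_SET(s, &write_fds);
        FD_SET(s, &except_fds);

        // ...and the result is only discovered by blocking here: the
        // socket becomes writable on success, or is flagged in the
        // exception set on failure (Winsock-specific behaviour).
        select(0, 0, &write_fds, &except_fds, 0);
        return FD_ISSET(s, &write_fds) != 0;
    }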

On 5/8/06, Christopher Kohlhoff <chris@kohlhoff.com> wrote:
Hi Olaf,
Olaf van der Spek <olafvdspek@gmail.com> wrote:
Just wondering, what part of a normal non-blocking connect isn't truly async?
The fact that it requires a thread waiting on a select() or WaitForMultipleObjects() call to know when the connect operation has completed.
Yes, but doesn't that apply to recv/send too? I thought that was what the demuxer/service was for.

Olaf van der Spek <olafvdspek@gmail.com> wrote:
Yes, but doesn't that apply to recv/send too? I thought that was what the demuxer/service was for.
On Windows, send and recv use overlapped I/O. The io_service::run() function waits on GetQueuedCompletionStatus() so that it gets the results of these asynchronous operations directly. What I mean is that asynchronous connect operations currently require an additional background thread to wait on select(). Using ConnectEx will remove this need. Cheers, Chris
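For reference, a hedged sketch of the ConnectEx approach being described; the helper name and structure are illustrative, not asio's eventual implementation. The key point is that the connect is issued as an overlapped operation, so its completion is delivered through GetQueuedCompletionStatus() like send/recv:

    #include <winsock2.h>
    #include <mswsock.h>

    // Illustrative only; error handling and WSAStartup are omitted. The
    // socket is assumed to be already associated with an I/O completion
    // port via CreateIoCompletionPort.
    bool start_connect_ex(SOCKET s, const sockaddr_in& peer, OVERLAPPED* ov)
    {
        // ConnectEx requires the socket to be bound before use.
        sockaddr_in local = {};
        local.sin_family = AF_INET;     // INADDR_ANY, port 0
        bind(s, (sockaddr*)&local, sizeof(local));

        // The ConnectEx function pointer is looked up at run time.
        LPFN_CONNECTEX connect_ex = 0;
        GUID guid = WSAID_CONNECTEX;
        DWORD bytes = 0;
        WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                 &guid, sizeof(guid), &connect_ex, sizeof(connect_ex),
                 &bytes, 0, 0);

        // Issue the connect as an overlapped operation. FALSE with
        // WSA_IO_PENDING means it was started and will complete through
        // GetQueuedCompletionStatus(), just like overlapped send/recv.
        BOOL ok = connect_ex(s, (const sockaddr*)&peer, sizeof(peer),
                             0, 0, 0, ov);
        return ok || WSAGetLastError() == WSA_IO_PENDING;
    }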

On 5/8/06, Christopher Kohlhoff <chris@kohlhoff.com> wrote:
Olaf van der Spek <olafvdspek@gmail.com> wrote:
Yes, but doesn't that apply to recv/send too? I thought that was what the demuxer/servide was for.
On Windows, send and recv use overlapped I/O. The io_service::run() function waits on GetQueuedCompletionStatus() so that it gets the results of these asynchronous operations directly. What I mean is that asynchronous connect operations currently require an additional background thread to wait on select(). Using ConnectEx will remove this need.
Ah. I'm only familiar with the epoll function on Linux and assumed on Windows it'd work in a similar way.
participants (5)
- Christopher Kohlhoff
- Olaf van der Spek
- Peter Dimov
- Yuriy Koblents-Mishke
- 李慧霸