[boost-user] [asio] How can I limit my TCP server's connection count?

Here's the code.

Server:

    class tcp_server {
    public:
        tcp_server(boost::asio::io_service& io_service)
            : acceptor_(io_service), limit(0)
        {
            tcp::endpoint endpoint(tcp::v4(), 10000);
            acceptor_.open(endpoint.protocol());
            acceptor_.bind(endpoint);
            acceptor_.listen(1);
            start_accept();
        }

    private:
        void start_accept()
        {
            while (1) {
                if (limit < 1)
                    break;
            }
            tcp::socket* socket = new tcp::socket(acceptor_.io_service());
            acceptor_.async_accept(*socket,
                boost::bind(&tcp_server::handle_accept, this, socket,
                    boost::asio::placeholders::error));
        }

        void handle_accept(tcp::socket* s, const boost::system::error_code& error)
        {
            if (!error) {
                ++limit;
                start_accept();
            }
        }

        tcp::acceptor acceptor_;
        int limit;
    };

Client:

    int main(int argc, char* argv[])
    {
        int i = 0;
        try {
            boost::asio::io_service io_service;
            tcp::resolver resolver(io_service);
            tcp::resolver::query query("127.0.0.1", "10000");
            tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
            tcp::endpoint endpoint = *endpoint_iterator;
            tcp::socket socket(io_service);
            socket.connect(endpoint);
            while (1) {}
        } catch (std::exception& e) {
            std::cerr << e.what() << std::endl;
        }
        return 0;
    }

I thought I could only launch one client, but in fact I can start two clients, and the third one just prints what() to cerr. What happened there? Is there a better way to track and limit the "living connections" of the server?

--
View this message in context: http://boost.2283326.n4.nabble.com/boost-user-asio-How-can-i-limit-tcpserver...
Sent from the Boost - Users mailing list archive at Nabble.com.

On Tue, 15 Mar 2011 13:32:31 +0100, rhapsodyn wrote:
> I thought I could only launch one client, but in fact I can start two clients, and the third one just prints what() to cerr. What happened there? Is there a better way to track and limit the "living connections" of the server?
Umm, not sure, just a thought. The documentation for listen() says:

    void listen(int backlog = socket_base::max_connections);

    backlog: The maximum length of the queue of pending connections.

So one connection accepted, one pending, third rejected - seems all right? Maybe you need to call listen(0)?

--
Slava
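Slava's reading of backlog can be checked outside Asio with plain BSD sockets. The standalone sketch below (function and variable names are mine, not from the thread) binds a loopback listener with a given backlog, never calls accept(), and counts how many connects still complete. The exact count is OS-dependent - Linux typically lets backlog+1 connects through before dropping SYNs - so treat the number as illustrative:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>
    #include <cassert>

    // Bind a loopback listener with the given backlog, never accept(), then try
    // `attempts` blocking connects (with a 1 s send-timeout so a full queue
    // cannot hang us) and return how many of them completed.
    int count_completed_connects(int backlog, int attempts)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = 0;                                   // let the OS pick a free port
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        socklen_t len = sizeof addr;
        getsockname(srv, reinterpret_cast<sockaddr*>(&addr), &len); // recover the port
        listen(srv, backlog);

        int completed = 0;
        for (int i = 0; i < attempts; ++i) {
            int c = socket(AF_INET, SOCK_STREAM, 0);
            timeval tv{1, 0};                                // 1 s connect timeout
            setsockopt(c, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);
            if (connect(c, reinterpret_cast<sockaddr*>(&addr), sizeof addr) == 0)
                ++completed;                                 // keep c open so the queue stays occupied
        }
        close(srv);
        return completed;
    }

With backlog = 1, as in the posted server, more than one connect typically completes even though nothing is ever accepted - which matches what the original poster observed: backlog bounds the pending queue, not the number of "living" connections.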

On 15/Mar - 05:32, rhapsodyn wrote:
> I thought I could only launch one client, but in fact I can start two clients, and the third one just prints what() to cerr. What happened there? Is there a better way to track and limit the "living connections" of the server?
I'm not 100% sure, but:
- the first one is accepted, as expected
- the second one is queued in the listen queue (queue size = 1)
- the third one is discarded, as the queue is full, and the system returns ECONNREFUSED

However, you only have one "active" client at any time. Moreover, the current implementation is really bad: your active wait completely blocks the asynchronous queue, defeating the purpose of the asynchronous implementation. If you don't want to accept any more connections, don't call start_accept() if (limit > threshold).

Regards,
--
Arnaud Degroote
PhD Student
RIA LAAS / CNRS

Yes, I considered the "don't call start_accept() if (limit > threshold)" idea. (In fact, only with acceptor.close() could I avoid the unexpected extra client.) But when some "active" connection becomes "inactive", how can I restart the accept operation? With another thread? That seems weird, because I thought there had to be an accept thread "always running" on the server.

On 15/Mar - 20:07, rhapsodyn wrote:
> Yes, I considered the "don't call start_accept() if (limit > threshold)" idea. But when some "active" connection becomes "inactive", how can I restart the accept operation? With another thread?
I don't know the design of your whole server, but you must handle client disconnection (and decrease limit in that case). If limit drops below the threshold, just accept a new connection.

--
Arnaud Degroote
PhD Student
RIA LAAS / CNRS
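The control flow Arnaud describes can be sketched with the Asio calls stubbed out (class and member names below are mine, not from the original code): accept again after each connection only while under the limit, and restart the accept chain from the disconnect handler.

    #include <functional>
    #include <utility>
    #include <cassert>

    // Boost-free sketch of the counting logic. post_accept_ stands in for
    // acceptor_.async_accept(...); on_accept maps onto handle_accept, and
    // on_disconnect onto whatever handler notices a session has ended.
    class ConnectionLimiter {
    public:
        ConnectionLimiter(int limit, std::function<void()> post_accept)
            : limit_(limit), live_(0), accepting_(false),
              post_accept_(std::move(post_accept)) {}

        // Call once at startup (tcp_server's constructor would do this).
        void start() { maybe_accept(); }

        // A client connected: count it, then accept again only if below the limit.
        void on_accept() {
            ++live_;
            accepting_ = false;
            maybe_accept();
        }

        // The missing piece from the post: when a session ends, a slot frees up,
        // so the accept chain restarts here if it had stopped.
        void on_disconnect() {
            --live_;
            maybe_accept();
        }

        int live() const { return live_; }
        bool accepting() const { return accepting_; }

    private:
        void maybe_accept() {
            if (!accepting_ && live_ < limit_) {
                accepting_ = true;
                post_accept_();   // would be acceptor_.async_accept(...) in Asio
            }
        }

        int limit_, live_;
        bool accepting_;
        std::function<void()> post_accept_;
    };

In the real server, on_disconnect would run from whichever handler detects the closed session (for example, a read completing with boost::asio::error::eof). Since both it and handle_accept run on the same io_service, no separate "always running" accept thread is needed.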

2011/3/16 rhapsodyn
> Yes, I considered the "don't call start_accept() if (limit > threshold)" idea. But when some "active" connection becomes "inactive", how can I restart the accept operation? With another thread?
Then call start_accept() again only when a client being served becomes inactive?
participants (4)
- Arnaud Degroote
- rhapsodyn
- TONGARI
- Viatcheslav.Sysoltsev@h-d-gmbh.de