*NIX: Boost ASIO and file descriptor limits
Hello list,

I have a question regarding the system limit on *NIX for the maximum number of file descriptors a process may have open. If you were using the Boost.Asio library to design a server application, how would you extend the server to handle more than the 1024 file descriptor limit? The common approach on *nix systems is to fork(2) the process, have the child accept the client connections, and have the parent process inform the child when the listening socket has clients waiting to be accepted. My question is how I would achieve a similar design. Any comments or ideas are welcome.

Kind regards,
Etienne Pretorius.
On Sat, 04 Oct 2008 16:56:38 +0200, Etienne Philip Pretorius
Hello list,
I have a question regarding the system limit on *NIX for the maximum number of file descriptors a process may have open.
If you were using the Boost.Asio library to design a server application, how would you extend the server to handle more than the 1024 file descriptor limit?
I think the 1024 file descriptor limit is only a problem if you use select(). Depending on your operating system, it should be possible to raise the per-process limit. If you then use an I/O service based on poll() or epoll(), I would assume this limit goes away. Boris
[...]
On Sat, 04 Oct 2008 17:16:04 +0200, Boris
[...]
I have a question regarding the system limit on *NIX for the maximum number of file descriptors a process may have open.
If you were using the Boost.Asio library to design a server application, how would you extend the server to handle more than the 1024 file descriptor limit?
I think the 1024 file descriptor limit is only a problem if you use select(). Depending on your operating system, it should be possible to raise the per-process limit. If you then use an I/O service based on poll() or epoll(), I would assume this limit goes away.
Etienne, this link might also help you: http://www.kegel.com/c10k.html#limits.filehandles Boris
this link might also help you: http://www.kegel.com/c10k.html#limits.filehandles
Boris
Thank you Boris,

I am not contesting that the per-process file descriptor limit can be changed - though you do need super-user access to do so. I know that Postfix uses a child-per-connection model, and I would like to implement something similar, if I could pass the listening socket into the child process in a secure fashion (Postfix uses fork() and exec() with settings passed via environment variables). There is also a per-user process limit, which is 1024 on my system. As the limit is 1024 file descriptors per process and 1024 running processes, I should easily be able to handle 65536 connections if I am multi-homed and bound to each interface address.

So I guess my question is: how do I span multiple sockets/file descriptors across multiple processes using the Boost libraries?

Etienne Pretorius.
On Sat, 04 Oct 2008 18:04:10 +0200, Etienne Philip Pretorius
[...] So I guess my question is: how do I span multiple sockets/file descriptors across multiple processes using the Boost libraries?
Boost.Process might help you: http://www.highscore.de/boost/process/ Boris
Boost.Process might help you: http://www.highscore.de/boost/process/
Awesome! That looks promising. Thank you Boris. Etienne Pretorius.
Hello Boris,
On Sat, Oct 4, 2008 at 12:23 PM, Boris
Boost.Process might help you: http://www.highscore.de/boost/process/
Boost.Process looks very promising; I am just wondering, are there plans to include it in Boost 1.37? Regards, Sebastian
On Mon, 06 Oct 2008 16:57:57 +0200, Sebastian Hauer
Hello Boris,
On Sat, Oct 4, 2008 at 12:23 PM, Boris
Boost.Process might help you: http://www.highscore.de/boost/process/
Boost.Process looks very promising; I am just wondering, are there plans to include it in Boost 1.37?
There was really no progress for two years. I picked it up this year as I needed something similar for my own purposes. The code has been thoroughly tested to make sure that it works reliably (as I need something reliable :). What I don't know is whether the current library is too small for a Boost library. For example, you can only access the current process and child processes, but no other running processes. I don't know where to draw the line and what else should be put in before it becomes an official Boost library.

At the moment I'm working on improving support for asynchronous I/O (somehow integrating Boost.Process with Boost.Asio). I got another mail from someone who is thinking about using Boost.Process to build something like Expect (see http://expect.nist.gov/). Maybe I'll get some ideas from him to better understand whether the current library is useful enough to become an official Boost library. And to be fair, there is also another draft of a process library: http://article.gmane.org/gmane.comp.lib.boost.devel/180310 Boris
Thank you Boris,

I have been using strace to track what is being called by the Boost.Asio libraries, and on my 2.6 kernel it is epoll:

getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0
uname({sys="Linux", node="etienne-laptop", ...}) = 0
futex(0xb7f46674, 0x81 /* FUTEX_??? */, 2147483647) = 0
futex(0xb7f4b700, 0x81 /* FUTEX_??? */, 2147483647) = 0
brk(0) = 0x805c000
brk(0x807d000) = 0x807d000
epoll_create(20000) = 3
pipe([4, 5]) = 0
fcntl64(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 4, {EPOLLIN|EPOLLERR, {u32=4, u64=4}}) = 0
socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 6
epoll_ctl(3, EPOLL_CTL_ADD, 6, {0, {u32=6, u64=6}}) = 0
setsockopt(6, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(6, {sa_family=AF_INET6, sin6_port=htons(1234), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
listen(6, 128) = 0
ioctl(6, FIONBIO, [1]) = 0
accept(6, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
epoll_ctl(3, EPOLL_CTL_MOD, 6, {EPOLLIN|EPOLLERR|EPOLLHUP, {u32=6, u64=6}}) = 0
mmap2(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb74ce000
mprotect(0xb74ce000, 4096, PROT_NONE) = 0
clone(child_stack=0xb7cce4b4, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0xb7ccebd8, {entry_number:6, base_addr:0xb7cceb90, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}, child_tidptr=0xb7ccebd8) = 15388
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0

The limit unfortunately still applies.
I am using a shell script to test this on the Ubuntu distro:

for ((i=0; i<1024; i++)); do echo $i; nc -n -w 1 127.0.0.1 1234 & done

In the code I output an error when the accept fails:

void server::accept(client* connection, const asio::error_code& ec)
{
    if (!ec) {
        client_.push_back(connection);
    } else {
        // TODO: say why the connection failed
        delete connection;
        std::cout << ec.message() << std::endl;
        ::exit(0);
    }
    accept();
} // accept

The returned error message is: Too many open files
participants (3)
- Boris
- Etienne Philip Pretorius
- Sebastian Hauer