Friday, February 12, 2016

Too many open files

If you are working with an application that runs on a Linux based OS and performs a lot of I/O operations, you may have encountered this error: "Too many open files (24)".

What is this error?
On Linux based OSs, resource limits are specified per user/process to ensure fair usage of resources and for security reasons. If a user or process tries to exceed a specified limit, the OS prevents it. "Too many open files" (errno 24, EMFILE) means the process has reached its limit on open file descriptors.
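A quick way to see the error in action is to lower the open-files limit in a subshell and then allocate more file descriptors than it allows. This is only an illustrative sketch (it needs bash 4.1+ for the {fd} redirection syntax, and the exact wording and repetition of the error message may differ on your system):

[user@localhost ~]$ ( ulimit -n 16; for i in $(seq 1 20); do exec {fd}</dev/null; done )
bash: /dev/null: Too many open files

Once the subshell exits, the original shell keeps its normal limit.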

How to see these limits
Using the ulimit command we can examine these parameters for the current shell session.

[user@localhost ~]$ ulimit -a
core file size           (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7281
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
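The -n flag shows only the open files value; -Sn and -Hn show the soft and hard limits separately. Within the current shell, a normal user can raise the soft limit up to the hard limit (the values below simply match the outputs above and are illustrative):

[user@localhost ~]$ ulimit -Sn
1024
[user@localhost ~]$ ulimit -Hn
4096
[user@localhost ~]$ ulimit -n 4096
[user@localhost ~]$ ulimit -n
4096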
To see these details for a particular process, we can look in the /proc directory under the relevant process ID.
[user@localhost ~]$ sudo cat /proc/989/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             7281                 7281                 processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       7281                 7281                 signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
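If the util-linux package is installed, the prlimit command reports the same information for a running process (989 is just the example PID from above, and the exact column layout may vary with the version):

[user@localhost ~]$ sudo prlimit --pid 989 --nofile
RESOURCE DESCRIPTION              SOFT HARD UNITS
NOFILE   max number of open files 1024 4096 files

prlimit can also change the limit of an already running process, for example with --nofile=8192:8192.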
To see the number of file descriptors currently in use by a process:

[user@localhost ~]$ sudo ls -al /proc/1181/fd | wc -l
17

Using the lsof command sometimes does not give an accurate count, because it lists every file associated with the process, including memory-mapped files such as .so libraries, as the comparison below shows.
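As an illustration, running the two approaches against the same process (the PID and counts below are only examples) shows lsof reporting far more entries, since it also lists the executable, current directory and memory-mapped libraries; dropping -al from the ls command also avoids counting the ., .. and total lines:

[user@localhost ~]$ sudo lsof -p 1181 | wc -l
85
[user@localhost ~]$ sudo ls /proc/1181/fd | wc -l
14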

How to increase these limits

At the OS level

Add an entry like the following to /etc/sysctl.conf and reload the kernel variables. (A maximum of 65535 is often quoted because it is the largest number representable by an unsigned 16-bit integer, but fs.file-max on modern kernels accepts far larger values.)
[user@localhost ~]$ cat /etc/sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
fs.file-max=20000
[user@localhost ~]$ sudo sysctl -p
[sudo] password for user:
[user@localhost ~]$ sysctl fs.file-max
fs.file-max = 20000
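To see how close the whole system is to this limit, /proc/sys/fs/file-nr reports the number of allocated file handles, the number of allocated but unused handles, and the maximum (the first two figures below are just examples):

[user@localhost ~]$ cat /proc/sys/fs/file-nr
1824    0       20000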
At the user level, add the new configuration to the /etc/security/limits.conf file using the following format (the soft limit must not exceed the hard limit):
user       soft    nofile   8000
user       hard    nofile   10000
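These entries are applied by pam_limits at login, so they only affect new sessions. After logging in again, the new values (assuming the example lines above) can be verified with ulimit:

[user@localhost ~]$ ulimit -Sn
8000
[user@localhost ~]$ ulimit -Hn
10000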

