Many times when working with JBoss, Tomcat, or any other application server, we see this issue:
[2011-03-17 23:03:02,461] [ERROR] [org.apache.tomcat.util.net.AprEndpoint] [Socket accept failed] org.apache.tomcat.jni.Error: Too many open files
at org.apache.tomcat.jni.Socket.accept(Native Method)
at org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1110)
at java.lang.Thread.run(Thread.java:619)
This exception occurs when there is an underlying problem at the TCP/socket layer of the operating system. Since everything is a file in Linux, accepting a connection means opening a new file and assigning it to the socket, and here that is no longer possible. The exception indicates that all the file handles available to the process have been used up, so no new handles can be allocated to the requesting code. This is often caused by a leak of file handles somewhere.
There are two situations in which an application hits the maximum number of file handles (both can be checked quickly, as shown below):
· the number of open files reaches the global (system-wide) limit
· the number of open files reaches the user/shell limit; these default limit values are set by PAM
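To see quickly which of the two limits you are running into, compare the system-wide maximum with the per-shell limit; both commands are covered in more detail below:
$ cat /proc/sys/fs/file-max
$ ulimit -n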
So how do we get out of this situation?
On Linux we can use the ulimit command. The man page for ulimit says:
You can specify limits for the resource usage of a process. When the process tries to exceed a limit, it may get a signal, or the system call by which it tried to do so may fail, depending on the resource. Each process initially inherits its limit values from its parent, but it can subsequently change them.
To find the current limits:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16448
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2048
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16448
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
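The value that matters for this error is open files (-n), here 2048. As a quick temporary measure it can be raised for the current shell (up to the hard limit, or as root) before starting the application server; the value 4096 below is just an example:
$ ulimit -n 4096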
To find how many files a given process (here PID 15132) has open:
$ lsof -p 15132 | wc -l
741
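If lsof is not available, counting the entries under /proc/<PID>/fd gives a similar (usually slightly smaller) number, since it lists only file descriptors while lsof also shows memory-mapped files and libraries:
$ ls /proc/15132/fd | wc -l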
If we need to raise the per-user limits, we can change them in /etc/security/limits.conf.
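For example, to raise the open-file limits for the user that runs the application server (the user name jboss below is only a placeholder, and the values are suggestions), lines like these can be added; the user has to log in again for them to take effect:
jboss    soft    nofile    4096
jboss    hard    nofile    10240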
If we need to change the system-wide file descriptor limit, edit /etc/sysctl.conf:
vi /etc/sysctl.conf
fs.file-max=10000
Then reload the changes with sysctl -p.
We can check the current system-wide limit on file handles using:
cat /proc/sys/fs/file-max
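A related counter, /proc/sys/fs/file-nr, shows how the system is doing against that maximum; its three columns are the number of allocated file handles, the number allocated but currently unused, and the maximum:
$ cat /proc/sys/fs/file-nr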
If we want to continuously check the file handles used by a process, we can use:
lsof -p <process ID> -r > lsof.out
We can then get more information by examining that file.
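The -r option makes lsof repeat its listing periodically (every 15 seconds by default); an explicit interval can also be given, for example every 5 seconds for the example PID used earlier:
$ lsof -p 15132 -r 5 > lsof.out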
And lastly, we need to make sure that our application itself does not have leaks that leave file handles unclosed, such as streams, sockets, or database connections that are opened but never closed.
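To get a rough idea of what kind of handles are piling up (regular files, sockets, pipes, and so on), the TYPE column of the lsof output can be summarised; this is only a diagnostic sketch, again using the example PID 15132:
$ lsof -p 15132 | awk 'NR>1 {print $5}' | sort | uniq -c | sort -rn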
More articles to come. Happy coding!