Here's an awesome shell one-liner to find which process uses the most open files, relative to its max-open-files soft limit.

$ for x in /proc/[0-9]*
  do fds=0
     max=`awk '/^Max open files/ {print $4}' $x/limits 2>/dev/null` &&
       for t in $x/fd/*; do fds=$((fds+1)); done &&
       echo $((fds*100/max)) ${x##*/}
  done | sort -rn | while read l
  do pid=${l##* }; echo "$l `readlink /proc/$pid/exe`"; break; done
57 16674 /usr/lib/dovecot/imap-login
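
The one-liner is rather dense; the same logic as a commented script may be easier to follow (a functionally equivalent sketch, assuming a POSIX shell):

#!/bin/sh
# For every process, print its fd usage as a percentage of the soft limit.
for x in /proc/[0-9]*
do fds=0
   # The soft limit is the 4th field of the "Max open files" line.
   max=`awk '/^Max open files/ {print $4}' $x/limits 2>/dev/null` &&
     # Count open fds by globbing; reading $x/fd generally requires root.
     for t in $x/fd/*; do fds=$((fds+1)); done &&
     echo $((fds*100/max)) ${x##*/}
done | sort -rn | while read l
do # Highest percentage comes first: resolve that pid's binary and stop.
   pid=${l##* }
   echo "$l `readlink /proc/$pid/exe`"
   break
done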

So, my imap-login (pid 16674) apparently uses 57% of its allowed max open files.

$ ls /proc/16674/fd | wc -l
19
$ cat /proc/16674/limits | grep ^Max\ open\ files
Max open files  33  33  files
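
And indeed: 19 out of 33, truncated to a whole percentage, is 57:

$ echo $((19*100/33))
57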

On localhost, this isn't so useful. But it can be useful in a default server-monitoring template (e.g. for Zabbix): if any process nears its open files limit, you'll notice. This way you won't need to identify in advance which individual processes/daemons may run out of file descriptors.
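
As a sketch of what the Zabbix side could look like (the item key proc.fd.max_pct, the helper path and the config file name are all made up here; note that reading other processes' /proc/PID/fd generally requires root, so the helper needs elevated privileges):

#!/bin/sh
# /usr/local/bin/fd-usage-max-pct (hypothetical helper): print the highest
# fd usage percentage across all processes as a single number.
for x in /proc/[0-9]*
do fds=0
   max=`awk '/^Max open files/ {print $4}' $x/limits 2>/dev/null` &&
     for t in $x/fd/*; do fds=$((fds+1)); done &&
     echo $((fds*100/max))
done | sort -rn | head -n1

# /etc/zabbix/zabbix_agentd.d/fd_usage.conf (illustrative path and item key):
UserParameter=proc.fd.max_pct,/usr/local/bin/fd-usage-max-pct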

Explanation of peculiarities in the one-liner above:

  • awk stderr is discarded: short-lived processes may be gone before we get to look at them; don't print an error for those;
  • counting with for t in $x/fd/* is faster than firing up wc -w (and it beats bash with array counts, because dash is faster on the whole); see the timing sketch after this list;
  • ${x##*/} is faster than firing up basename;
  • the while ... break at the bottom is faster than firing up head -n1;
  • the readlink at the end is run only once, on the winning entry, instead of for every process in the loop.
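
To get a feel for the fork/exec overhead these points refer to, a rough micro-benchmark sketch (iteration count arbitrary; absolute numbers will vary per machine):

$ time dash -c 'i=0; while [ $i -lt 1000 ]
  do n=0; for t in /proc/self/fd/*; do n=$((n+1)); done
     i=$((i+1))
  done'
$ time dash -c 'i=0; while [ $i -lt 1000 ]
  do ls /proc/self/fd | wc -l >/dev/null
     i=$((i+1))
  done'

The first loop spawns no external processes at all; the second forks ls(1) and wc(1) a thousand times each.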

zabbix monitoring linux