I’ve hit this error more times than I’d like to admit. It shows up fast. It shows up late. And it always shows up when people are watching.
If you’re in a rush and only want the stripped-down walkthrough, you can jump straight to my condensed reference on Freedom Penguin here.
Here’s my real-deal take on it, plus what worked for me, in plain words with copy-paste bits you can try.
The night it bit me
Friday. 11:23 pm. Ubuntu 22.04 box. Nginx in front, Node.js app, PostgreSQL in the back. Load spiked after a post went live. Then Nginx logs started yelling:
open() "/var/www/site/index.html" failed (24: Too many open files)
A minute later, my Node app threw this:
Error: EMFILE: too many open files, open '/tmp/some-cache-file'
And Chrome showed a sad white page. Fun, right?
I felt that small heart drop. But I’ve been here. So I did my usual quick scan.
What the error means (in simple words)
Linux gives each process a small “wallet” of file handles. Not just files—sockets, pipes, logs count too. Each open thing uses one slot. When you run out, boom: “Too many open files.”
Two limits matter:
- Per process limit (ulimit -n)
- System wide cap (fs.file-max)
If a service sits behind systemd, there’s one more knob: LimitNOFILE in the unit file.
That’s it. Small wallet, big problem. Baeldung’s in-depth guide on the “Too many open files” error walks through these limits with clear examples. For a deeper, distro-agnostic primer on how Linux juggles file descriptors behind the scenes, I recommend this concise guide on Freedom Penguin.
You can also dive into my extended walkthrough here if you’d rather skip ahead.
How I checked it (real commands I used)
I always start with these:
ulimit -Sn
ulimit -Hn
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr
On that night:
- Soft was 1024
- Hard was 1024 (yep)
- file-max was fine (around two million)
- file-nr showed a spike
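Quick aside: those ulimit numbers are for the shell, not for the services. A process under systemd gets its own cap, so it’s worth peeking at what the running thing actually holds (nginx here; the same trick works for anything):
# soft and hard open-file limits of one running nginx process
grep 'Max open files' /proc/$(pidof -s nginx)/limits
# the cap systemd is configured to hand the service
systemctl show nginx -p LimitNOFILE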
Then I looked at what was open for Nginx:
pidof nginx
lsof -p <PID> | wc -l
And for the Node app (it’s called “api” on my box):
pidof node
lsof -p <PID> | awk '{print $5}' | sort | uniq -c | sort -nr | head
I saw a lot of REG and IPv4. Many sockets. Many logs. One worker was worse than the others. That was my clue. If your stack also needs to poke at /dev/tty* devices, the same lsof drill applies—I documented the serial-port angle here.
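lsof is fine for one process, but to rank every process on the box by open descriptors you can count the entries under /proc/<PID>/fd directly. A rough sketch, run as root so every process is readable:
# top 10 processes by open file descriptors
for pid in /proc/[0-9]*; do
  echo "$(ls "$pid/fd" 2>/dev/null | wc -l) $(cat "$pid/comm" 2>/dev/null) $pid"
done | sort -nr | head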
Side note: on my dev laptop, I’ve also hit this with file watchers (Webpack, Vite, Jest). Those throw EMFILE too. Different root cause, same pain. When I’m poking through project folders to see what’s being watched, I usually toggle hidden files in Nautilus—if you need a refresher, here’s how to show hidden files on Ubuntu.
Quick band-aid (so users stop waiting)
I bumped the limit on the running processes right there. A plain ulimit -n in my shell wouldn’t have helped, since that only covers processes started from that shell, and systemctl restart launches services under systemd’s own limits. prlimit on the live PIDs does the trick. It’s not pretty, but it buys time.
# raise the open-files cap on every running nginx and node process (needs root)
for pid in $(pidof nginx) $(pidof node); do
  prlimit --pid "$pid" --nofile=65535:65535
done
Traffic cooled. Errors stopped. People could breathe.
But a live bump like that only lasts until the processes restart. I needed a real fix.
The real fix I keep now
Here’s what I set on servers. It’s boring, which is good.
- Raise per-user limits
I added lines to /etc/security/limits.conf:
www-data soft nofile 65535
www-data hard nofile 65535
Then I made sure PAM loads limit rules. On Ubuntu, /etc/pam.d/common-session should have:
session required pam_limits.so
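A quick way to check the PAM side took is to open a fresh session as that user and read the limit back. Worth saying out loud: this path only covers logins, cron, su and friends; systemd services ignore limits.conf, which is exactly why the next step matters.
# new PAM session as www-data; should print 65535 after the change
sudo su -s /bin/bash -c 'ulimit -Sn' www-data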
- Tell systemd about it (this matters a lot)
For each service unit, I set LimitNOFILE. Example for my Node app:
# /etc/systemd/system/api.service
[Unit]
Description=Node API
After=network.target

[Service]
ExecStart=/usr/bin/node /srv/api/server.js
User=www-data
Group=www-data
Environment=NODE_ENV=production
LimitNOFILE=65535
Restart=always

[Install]
WantedBy=multi-user.target
Then:
systemctl daemon-reload
systemctl restart api
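A quick double-check that the service really picked it up (api is just what my unit is called; swap in yours):
systemctl show api -p LimitNOFILE
grep 'Max open files' /proc/$(systemctl show api -p MainPID --value)/limits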
I do the same for Nginx:
# /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65535
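Same dance as before, since systemd only reads the drop-in after a reload and running workers keep their old cap until a restart:
systemctl daemon-reload
systemctl restart nginx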
- Match Nginx worker caps
In /etc/nginx/nginx.conf:
worker_processes auto;
worker_rlimit_nofile 65535;
events {
    worker_connections 8192;
    multi_accept on;
}
Then:
nginx -t
systemctl reload nginx
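Quick sanity math on those numbers: each connection holds at least one descriptor, and serving a static file or talking to an upstream holds another, so a common rule of thumb is to budget about two descriptors per connection per worker. Here that’s 8192 × 2 = 16384, which sits comfortably under the 65535 from worker_rlimit_nofile.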
- Make sure the kernel cap is not tiny
Mine was fine, but if you need it:
sysctl -w fs.file-max=2097152
Persist it:
echo 'fs.file-max=2097152' > /etc/sysctl.d/99-fd.conf
sysctl --system
- For dev tools that “watch” files
On my laptop, Vite and Jest hit EMFILE due to watchers, not sockets. This line fixed it for me:
sysctl -w fs.inotify.max_user_watches=524288
echo 'fs.inotify.max_user_watches=524288' > /etc/sysctl.d/99-inotify.conf
sysctl --system
After that, no more EMFILE when running npm run dev.
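If you want to see what you’re starting from before bumping anything:
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances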
Real cases I’ve hit (and what fixed each one)
- High traffic + Nginx logs rolling
  - Symptom: Nginx "open() failed (24)" on static files
  - Fix: LimitNOFILE=65535 + worker_rlimit_nofile; rotated logs less often
- Node.js API under burst load
  - Symptom: EMFILE on tmp cache files and sockets
  - Fix: LimitNOFILE=65535 on the service; reduced keep-alive length; closed some extra file streams I forgot
- Webpack/Jest on Linux laptop
  - Symptom: EMFILE: too many open files, watch
  - Fix: Raise inotify watches; close extra editors; stop Dropbox from watching the same tree
- Postgres backup script using many small files
  - Symptom: backup stalled, EMFILE on tar step
  - Fix: Run the job with a higher LimitNOFILE in its systemd unit (or a one-off systemd-run, sketched right after this list)
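For that last one, a permanent unit isn’t even required; a transient service does the job. A minimal sketch, with a made-up path for the backup script:
# one-off run with a raised cap; /usr/local/bin/pg-backup.sh is a placeholder
systemd-run --wait -p LimitNOFILE=65535 /usr/local/bin/pg-backup.sh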
If you’re hunting for a simpler overview before digging into per-service tweaks, How-To Geek offers a friendly summary of the fixes right here.
What I like, what I don’t
What I like:
- Linux gives clear knobs. You can see counts, you can set caps, you can tune it live.
- The error code is blunt. EMFILE means what it says.
What bugs me:
- The mix of places to set it (shell, PAM, systemd, app config) trips people up a lot.
- Logs can flood and eat handles, fast. It feels sneaky.
- Defaults are low on some distros. 1024 is cute, until traffic hits.
Would I call it easy? No. But once you set sane defaults, it stays calm.
Tiny cheat sheet I keep taped to my brain
- See current limits:
ulimit -Sn; ulimit -Hn
- See system cap:
cat /proc/sys/fs/file-max
- Count open files for a process:
lsof -p <PID> | wc -l
- Raise live (temp):
ulimit -n 65535
- Persist for a service:
LimitNOFILE=65535 in the systemd unit
- Kernel file cap:
sysctl -w fs.file-max=2097152
- Dev watchers:
fs.inotify.max_user_watches=524288
Not fancy. Just handy.
Final take
“Too many open files” feels scary the first time. It hits like a storm. But the fix is plain: raise the right limits, in the right place, and watch your handles. You know what? The hardest part was not the commands. It was staying calm while folks pinged me.
Now when I ship a new box, I set these limits on day one. Quiet servers are my love language.
Side note: once the logs finally go quiet, stepping away from the terminal for a bit helps too.