“Too many open files” on Linux — My hands-on take

I’ve hit this error more times than I’d like to admit. It shows up fast. It shows up late. And it always shows up when people are watching.

If you’re in a rush and only want the stripped-down walkthrough, you can jump straight to my condensed reference on Freedom Penguin here.

Here’s my hands-on review of the problem, plus what worked for me, in plain words with copy-paste bits you can try.


The night it bit me

Friday. 11:23 pm. Ubuntu 22.04 box. Nginx in front, Node.js app, PostgreSQL in the back. Load spiked after a post went live. Then Nginx logs started yelling:

open() "/var/www/site/index.html" failed (24: Too many open files)

A minute later, my Node app threw this:

Error: EMFILE: too many open files, open '/tmp/some-cache-file'

And Chrome showed a sad white page. Fun, right?

I felt that small heart drop. But I’ve been here. So I did my usual quick scan.


What the error means (in simple words)

Linux gives each process a small “wallet” of file handles. Not just files—sockets, pipes, logs count too. Each open thing uses one slot. When you run out, boom: “Too many open files.”

Two limits matter:

  • Per process limit (ulimit -n)
  • System wide cap (fs.file-max)

If a service sits behind systemd, there’s one more knob: LimitNOFILE in the unit file.
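
A quick way to see what limit a unit actually got (any service name works here):

systemctl show nginx -p LimitNOFILE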

That’s it. Small wallet, big problem. Baeldung’s in-depth guide on the “Too many open files” error walks through these limits with clear examples. For a deeper, distro-agnostic primer on how Linux juggles file descriptors behind the scenes, I recommend this concise guide on Freedom Penguin.

You can also dive into my extended walkthrough here if you’d rather skip ahead.


How I checked it (real commands I used)

I always start with these:

ulimit -Sn
ulimit -Hn
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr
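
For reference, file-nr prints three numbers: allocated handles, allocated-but-unused handles, and the system max (the same value as file-max).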

On that night:

  • Soft was 1024
  • Hard was 1024 (yep)
  • file-max was fine (around two million)
  • file-nr showed a spike

Then I looked at what was open for Nginx:

pidof nginx
lsof -p <PID> | wc -l

And for the Node app (it’s called “api” on my box):

pidof node
lsof -p <PID> | awk '{print $5}' | sort | uniq -c | sort -nr | head

I saw a lot of REG and IPv4. Many sockets. Many logs. One worker was worse than the others. That was my clue. If your stack also needs to poke at /dev/tty* devices, the same lsof drill applies—I documented the serial-port angle here.
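
One caveat: lsof also lists memory-mapped files and per-thread rows, so its count runs high. For the exact descriptor count I go straight to /proc:

sudo ls /proc/<PID>/fd | wc -l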

Side note: on my dev laptop, I’ve also hit this with file watchers (Webpack, Vite, Jest). Those throw EMFILE too. Different root cause, same pain. When I’m poking through project folders to see what’s being watched, I usually toggle hidden files in Nautilus—if you need a refresher, here’s how to show hidden files on Ubuntu.


Quick band-aid (so users stop waiting)

I bumped the limit in the shell and restarted Nginx and the app. It’s not pretty, but it buys time.

ulimit -n 65535
systemctl restart nginx
systemctl restart api

Traffic cooled. Errors stopped. People could breathe.

But this only lasts for that session. I needed a real fix.


The real fix I keep now

Here’s what I set on servers. It’s boring, which is good.

  1. Raise per-user limits

I added lines to /etc/security/limits.conf:

www-data soft nofile 65535
www-data hard nofile 65535

Then I made sure PAM loads limit rules. On Ubuntu, /etc/pam.d/common-session should have:

session required pam_limits.so
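
To confirm the limit actually lands for that user (www-data ships with a nologin shell, hence forcing bash):

sudo su - www-data -s /bin/bash -c 'ulimit -n'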

  2. Tell systemd about it (this matters a lot)

For each service unit, I set LimitNOFILE. Example for my Node app:

# /etc/systemd/system/api.service
[Service]
ExecStart=/usr/bin/node /srv/api/server.js
User=www-data
Group=www-data
Environment=NODE_ENV=production
LimitNOFILE=65535
Restart=always

Then:

systemctl daemon-reload
systemctl restart api
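
And I confirm the running process picked it up:

grep 'Max open files' /proc/$(systemctl show -p MainPID --value api)/limits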

I do the same for Nginx:

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65535
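
If you’d rather not create that drop-in by hand, sudo systemctl edit nginx opens it for you and reloads systemd when you save; a restart then applies it:

sudo systemctl edit nginx
sudo systemctl restart nginx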

  3. Match Nginx worker caps

In /etc/nginx/nginx.conf:

worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 8192;
    multi_accept on;
}

Then:

nginx -t
systemctl reload nginx
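
Quick sanity math on those numbers: each worker can hold up to worker_connections descriptors, and a proxied request can use two (client side plus upstream side), so I keep worker_rlimit_nofile at or above 2 × worker_connections. Here 2 × 8192 = 16384, comfortably under 65535.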

  4. Make sure the kernel cap is not tiny

Mine was fine, but if you need it:

sysctl -w fs.file-max=2097152

Persist it:

echo 'fs.file-max=2097152' > /etc/sysctl.d/99-fd.conf
sysctl --system

  5. For dev tools that “watch” files

On my laptop, Vite and Jest hit EMFILE due to watchers, not sockets. This line fixed it for me:

sudo sysctl -w fs.inotify.max_user_watches=524288
echo 'fs.inotify.max_user_watches=524288' | sudo tee /etc/sysctl.d/99-inotify.conf
sudo sysctl --system

After that, no more EMFILE when running npm run dev.


Real cases I’ve hit (and what fixed each one)

  • High traffic + Nginx logs rolling

    • Symptom: Nginx “open() failed (24)” on static files
    • Fix: LimitNOFILE=65535 + worker_rlimit_nofile; rotated logs less often
  • Node.js API under burst load

    • Symptom: EMFILE on tmp cache files and sockets
    • Fix: LimitNOFILE=65535 on the service; reduced keep-alive length; closed some extra file streams I forgot
  • Webpack/Jest on Linux laptop

    • Symptom: EMFILE: too many open files, watch
    • Fix: Raise inotify watches; close extra editors; stop Dropbox from watching the same tree
  • Postgres backup script using many small files

    • Symptom: backup stalled, EMFILE on tar step
    • Fix: Run with higher ulimit in the systemd unit for the job
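
For one-off jobs like that backup, systemd-run can raise the limit without a permanent unit. A minimal sketch (the script path is hypothetical):

sudo systemd-run -p LimitNOFILE=65535 --wait /usr/local/bin/pg-backup.sh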

If you’re hunting for a simpler overview before digging into per-service tweaks, How-To Geek offers a friendly summary of the fixes right here.


What I like, what I don’t

What I like:

  • Linux gives clear knobs. You can see counts, you can set caps, you can tune it live.
  • The error code is blunt. EMFILE means what it says.

What bugs me:

  • The mix of places to set it (shell, PAM, systemd, app config) trips people up a lot.
  • Logs can flood and eat handles, fast. It feels sneaky.
  • Defaults are low on some distros. 1024 is cute, until traffic hits.

Would I call it easy? No. But once you set sane defaults, it stays calm.


Tiny cheat sheet I keep taped to my brain

  • See current limits:
    • ulimit -Sn (soft) and ulimit -Hn (hard)
  • See system cap:
    • cat /proc/sys/fs/file-max
  • Count open files for a process:
    • lsof -p <PID> | wc -l
  • Raise live (temp):
    • ulimit -n 65535
  • Persist for a service:
    • LimitNOFILE=65535 in the systemd unit
  • Kernel file cap:
    • sysctl -w fs.file-max=2097152
  • Dev watchers:
    • fs.inotify.max_user_watches=524288

Not fancy. Just handy.


Final take

“Too many open files” feels scary the first time. It hits like a storm. But the fix is plain: raise the right limits, in the right place, and watch your handles. You know what? The hardest part was not the commands. It was staying calm while folks pinged me.

Now when I ship a new box, I set these limits on day one. Quiet servers are my love language.

Side note: once the logs finally go quiet, stepping away from the terminal can help.

How I Actually Check Folder Size on Linux (and What Got Me There Faster)

I’m Kayla. I spend way too much time cleaning computers. Mine. Friends. My dad’s ancient ThinkPad. You know what? Space runs out fast. Photos. Docker. Big game files. So I got picky about a simple thing: how to see folder size, fast, and without guesswork.

Here’s what I use, with real commands, and what went right (and wrong).

If you’d like an even deeper dive with some extra screenshots, I’ve posted a companion piece over on Freedom Penguin that walks through these tricks step by step.

The quick win: plain old du

When I just need the size of one folder, I go simple. This works on any Linux box I touch. For anyone who wants to master every flag and nuance, an in-depth guide to the du command lays out all its options far better than I can cram into a blog post.

Command I run:

du -sh ~/Downloads

What I saw last week:

7.2G    /home/kayla/Downloads

Short and sweet. That “-s” means summary. The “-h” means human numbers (G, M, K). Good for a quick gut check.

When I want a breakdown of the folders inside the current folder:

du -h --max-depth=1 .

Sample output from my Pictures folder:

1.3G    ./Screenshots
850M    ./iPhone
52M     ./Edits
12K     ./tmp
2.2G    .

And when I need them sorted smallest to largest:

du -h --max-depth=1 . | sort -h

I keep that one on a sticky note. It saves me every time.

One more that I run on root, to find the big stuff across the drive:

sudo du -xh --max-depth=1 / | sort -h

The “-x” flag keeps it to one disk. That skips weird mounts. My result last month:

1.1G    /opt
5.0G    /usr
8.1G    /var
15G     /home

And boom, I know where to look. If you want to see exactly how I refined this approach across different distros, I laid out the full process in my Freedom Penguin guide, How I Actually Check Folder Size on Linux (and What Got Me There Faster).

Tiny gripe: du can be slow on a huge disk. It has to walk every folder. I make tea while it runs. It’s fine.

Wait—doesn’t ls show size?

I tried. It doesn’t help here. This command:

ls -lh

It shows file sizes, not folder totals. A folder will look like “4.0K.” That’s not the real space used. So I don’t trust ls for this job.

My favorite for cleanup: ncdu (interactive and friendly)

When I need to clean fast, I use ncdu. It’s like du, but with a simple screen I can arrow through. It even deletes files if I say so. Careful fingers here. If you’re new to it, this comprehensive overview of ncdu walks through installation and every feature from top to bottom.

Install it from your package manager, then run:

sudo ncdu -x /

The “-x” keeps it from crossing to other disks. What I saw on my laptop in spring:

--- / -------------------------------------------------------------
  18.1 GiB [##########] /var
  15.2 GiB [######### ] /home
   5.0 GiB [###       ] /usr

I drilled into /var, and found Docker eating space:

/var/lib/docker  ->  14.7 GiB

I cleaned old images, and poof—14 gigs back. Felt so good. Downside? ncdu needs a terminal. I like that, but my dad does not.

Prettier text view: tree with sizes

Sometimes I want a tidy listing for notes or a report. I use tree with directory sizes.

Command:

tree -h --du -L 2 /var/log

Snip from my server:

/var/log
├── 320M [########] journal
├── 120M [###     ] nginx
└── 12M  [#       ] apt

It’s not as hands-on as ncdu, but it reads well.

When I need visuals: Disk Usage Analyzer (baobab)

On GNOME, I open “Disk Usage Analyzer.” It scans the drive and shows a big ring chart. My dad gets it at a glance. He clicked Downloads, and we found a 4.5G ISO he forgot about. He pressed Delete, and we cheered—quietly, but still.

Plus, it shows mounted disks too. That helps on messy setups.

Small catch: it can feel slow on old HDDs. But the view is worth it when you want a picture, not a list.

A modern touch: dust

I also use dust. It’s like du, but with a bar chart in the terminal. Nice and fast.

Example:

dust -d 1 ~

What it printed for me:

 15G  ████████████████████████████  /home/kayla
 12G  ████████████████████          Downloads
  2G  ████                          Pictures
600M  ██                            .cache

Great for a quick read. I still switch to ncdu when I want to delete things.

Little things that save me time

  • Permissions: du may say “Permission denied.” I add sudo, or skip locked folders.
  • Noisy system folders: I avoid scanning /proc, /sys, and /run. They’re special.
  • One disk only: I add “-x” so du or ncdu stays on one filesystem.
  • Want exact bytes? Use “-B1”:
    du -B1 -s ~/Videos
    
  • Curious why “apparent size” differs? Sparse files. Compression. If I need the raw sum of file sizes, I use:
    du -sh --apparent-size .
    
  • Ran into the dreaded “too many open files” limit while chewing through logs? I break down the fix step by step in my hands-on take.
  • Sometimes space hogs hide in dot folders—if you need a refresher on revealing them inside Nautilus or another file manager, check out my piece on showing hidden files on Ubuntu.

My go-to flow

  • Need a quick answer? du -sh FOLDER
  • Want a tidy list? du -h --max-depth=1 . | sort -h
  • Cleaning for real? ncdu -x /
  • Need a picture? Disk Usage Analyzer (baobab)
  • Want a quick glance with bars? dust

Honestly, I use all of them. Different days. Different messes. But if you made me pick only one, I’d keep ncdu. It’s saved me gigs, fast, without fuss. And yes, I still keep that little du + sort trick on a sticky note. It’s my tiny space saver.

Linux: How I Check My Processor Temperature (Real Use, Real Wins)

I’m Kayla. I tinker on Linux a lot. I also baby my CPUs, because hot chips crash work. I’ve tried a bunch of tools to check temps. Some were smooth. Some were weird. Here’s what I use, what I see on my machines, and a few small gotchas I wish someone told me sooner.


Why I even cared

It started in July. My ThinkPad fan sounded like a tiny jet. The laptop felt toasty near the F-keys. My desktop did fine, but builds ran long and hot. You know what? Heat sneaks up on you.

So I began checking temps often. Not in a fussy way—just quick peeks. I like simple.


The quick way: lm-sensors

On Ubuntu and Fedora, this is home base for me. For deeper documentation, the comprehensive lm_sensors: Hardware Monitoring for Linux wiki walks through every sensor class and kernel module in detail.

  • Install it:
    • Ubuntu/Debian: sudo apt install lm-sensors
    • Fedora: sudo dnf install lm_sensors
  • Detect chips:
    • sudo sensors-detect (hit Enter for the defaults)
  • Then check temps:
    • sensors

If you're hungry for an even deeper, distro-agnostic walkthrough, Freedom Penguin has an excellent primer that complements what I cover here.
That primer sits alongside my own annotated deep-dive with screenshots—you can find it right here.

On my AMD desktop (Ryzen 5600), here’s a real sample I see when idle with a case fan curve set:

k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +42.4°C
Tdie:         +37.8°C
Tccd1:        +35.5°C

nvme-pci-0100
Composite:    +39.0°C

nct6775-isa-0290
fan1:        780 RPM
fan2:        640 RPM

On my Intel work NUC, it looks like this under a light load:

coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +67.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:       +61.0°C
Core 1:       +62.0°C
Core 2:       +59.0°C
Core 3:       +60.0°C

  • Tdie or Core is close to the true die temp.
  • Tctl can read a bit higher on AMD. It’s by design. Don’t panic.

If I want a live ticker in the terminal, I do:

watch -n 1 sensors

One-second updates. Easy.


A small digression: when “sensors” shows nothing

It happens. On some boxes I had to load the right module:

  • Intel: sudo modprobe coretemp
  • AMD: sudo modprobe k10temp
  • Many desktops: Super I/O chips like nct6775 load on their own now, but I’ve had to add them to /etc/modules on older boards.

Then I run sensors-detect again. If a prompt feels scary, I leave the risky scans off. Safer that way.


GUI that doesn’t get in my way: Psensor

Sometimes I want a tray icon and a graph. Psensor is my pick. For an overview of features, screenshots and source code, the Psensor: A Graphical Hardware Temperature Monitor for Linux homepage is worth bookmarking.

  • Install:
    • Ubuntu: sudo apt install psensor
  • It shows CPU, GPU (if supported), and drive temps.
  • I set alerts. If any core hits 90°C, it pops a notice and plays a tiny sound.

My ThinkPad T14 (AMD 6850U) while in a Zoom call plus 12 Chrome tabs:

  • Baseline idle: 39–45°C
  • Typical call: 65–75°C
  • Short compile spike: 88–92°C (fan roars, then drops)

Psensor’s graph makes it obvious when a tab or app misbehaves. That’s how I caught a rogue Electron app once.

If you’re on GNOME, the “Freon” extension is nice for a quick panel readout. KDE folks: “Thermal Monitor” widget is clean.

If you're shopping for a Linux-friendly travel rig that also behaves well under Kali, these are the laptops I actually use.


Plain file reads, when I’m feeling nerdy

Sometimes I skip tools and read the kernel values:

cat /sys/class/thermal/thermal_zone0/temp

It prints in millidegrees. So 52000 means 52.0°C. It’s barebones, but script friendly.
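
Since it’s script friendly, here’s the sort of one-liner I use to label every zone at once (bash-only, since it leans on process substitution):

paste <(cat /sys/class/thermal/thermal_zone*/type) <(cat /sys/class/thermal/thermal_zone*/temp) | awk '{printf "%-16s %.1f°C\n", $1, $2/1000}'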

On embedded gear where I’m wired in over UART, poking around sensors from a serial console is the norm—if you’re wrangling serial ports on Linux, here’s what actually worked for me.


Case study 1: My ThinkPad T14 AMD (work travel buddy)

Stock Ubuntu 24.04. Power-profiles-daemon on “Balanced.”

  • Idle in a cafe: 42–48°C
  • VS Code + Docker build: 75–85°C
  • Long Rust build: peaks 92°C for a minute, then 80–85°C
  • Fan curve is quick to ramp; not quiet, but safe

Things that helped me:

  • Clean the vents. A simple puff of air drops a few degrees. It’s boring. Still works.
  • “Balanced” profile most days; “Power Saver” while writing.
  • thermald helps a bit on Intel. On AMD, firmware does most of the job.

Result: fewer spikes, and the fan doesn’t hunt up and down as much.


Case study 2: My Ryzen 5600 desktop (home build box)

Parts: stock cooler first, then a be quiet! Pure Rock later. Fractal case. Two intake fans. One exhaust.

  • With the stock cooler:
    • Idle: 38–44°C
    • Linux kernel build: 84–88°C peak; fans loud at 100%
  • With a mid-range tower cooler:
    • Idle: 33–38°C
    • Kernel build: 72–77°C peak; fan noise drops a lot

I tuned the fan curve in BIOS. I set a gentle ramp up to 70°C, then sharper after. That cut “whoosh” bursts during short tasks.


Bonus: Raspberry Pi 4 temps

On my Pi 4, this is my quick check:

vcgencmd measure_temp

No fan, just a low profile heatsink.

  • Idle: 45–50°C
  • Node build: 70–75°C
  • It throttles near 80–85°C. A tiny fan helps a lot. I use the Pimoroni one.
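
One more Pi check worth knowing: vcgencmd get_throttled. An output of throttled=0x0 means it has never throttled or under-volted; anything else deserves a look.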

When I need deeper detail (and when I don’t)

  • turbostat (Intel only) shows per-core speeds, temps, and C-states:
    • sudo turbostat --Summary --quiet --interval 1
    • Great for tuning, but I use it rarely.
  • powertop can hint at power hogs. It’s not a temp tool, but it helps lower heat.
  • GPUs are separate:
    • NVIDIA: nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
    • AMD GPU: sensors | grep -i edge often shows it, or use rocm-smi on supported cards.

Side note: excessive network retries can quietly warm a NIC and spike CPU soft-irq time; when I’m tracking that down I lean on the terminal classic mtr to reveal where packets are stalling. If you ever need to spoof your Wi-Fi card’s identity to dodge captive portals or MAC-based quotas, the workflow in my guide to changing a MAC address on Linux is painless and safe.


Alerts and sanity checks I actually keep

  • Psensor alert at 90°C for laptops, 85°C for desktops
  • A quick watch -n 1 sensors during big builds
  • If temps climb for no reason, I check:
    • Dust in vents
    • High background CPU (htop helps)
    • Old thermal paste (over 3 years? I refresh it)
    • Case airflow (one more intake fan fixed a lot for me)

For heat issues rooted in runaway processes, I sometimes discover the culprit is a service spewing file-handle leaks; when that crops up, the fixes I outlined in my hands-on take on the “too many open files” error come in handy.


A tiny warning

I Reboot Linux Servers A Lot. Here’s What Actually Works.

I’m Kayla. I live in on-call land. Coffee on the desk. Pager on the table. I’ve rebooted more Linux boxes than I care to count—Ubuntu, Debian, RHEL, even a tired CentOS 7 box that still hangs on in a closet rack. And yes, I’ve messed up a few. Honestly, that’s how I learned.

This isn’t fancy. It’s what I use, with real commands that have saved me at 2 a.m. Ready? Let me explain.
For deeper dives into system administration tricks, I often browse articles on Freedom Penguin, a site packed with practical Linux wisdom.
One of those dives is my own step-by-step write-up, I Reboot Linux Servers a Lot—Here’s What Actually Works, which captures these battle-tested notes in a tidy reference.

The One I Use Most: “sudo reboot”

If the box is healthy, this is my go-to:

  • sudo reboot

It’s simple. It tells systemd to shut stuff down cleanly. Services stop. Filesystems sync. The machine comes back.

(Need a deeper technical refresher on how the command works under the hood? Check out the clear walkthrough in Linux Reboot Command Explained (With Examples) | DigitalOcean.)

Real example: Last week I patched a Debian 12 VM. I ran apt, saw new kernel bits, and typed:

  • sudo reboot

The host was back in about 45 seconds. Nginx started. DB was fine. I took a breath.

Tiny tip: I peek before I leap.

  • uptime (to see load)
  • who (to see who’s on)
  • systemctl list-jobs (to see if something is busy)

When I Need a Heads-Up: “shutdown -r”

Sometimes I need to warn folks. Or schedule it a bit later. That’s where this helps:

  • sudo shutdown -r +5 "Rebooting in 5 minutes for kernel update. Save work."
  • sudo shutdown -c (if I change my mind)

Real story: I once planned a kernel jump on an app node at noon. I used the +5 timer, sent a message, and watched our chat blow up with “wait!” from a teammate finishing a batch job. I canceled. We tried again in 10 minutes. No drama.

You can also do it now:

  • sudo shutdown -r now

Same clean reboot, but with a broadcast message. Handy on multi-user boxes.

If you’re collecting alternative approaches—GUI buttons, SysRq combos, init commands, and more—the roundup in How to Reboot Your Linux System (6 Methods) | Beebom is a nice checklist to compare against your own habits.

systemctl reboot: Same, But I Like It For Scripts

It’s close to reboot, but I use it in automation or when I’m already in systemctl land:

  • sudo systemctl reboot

On my CentOS 7 archive box, it behaves the same. Clean and predictable.

Slight aside: I never use --force here unless I’m in trouble. More on that later.

SSH Might Drop. Use tmux or screen First.

I learned this the hard way. I kicked a reboot over SSH, lost the session, and had no logs saved.

Now I do:

  • tmux new -s maintenance
  • run updates, then reboot

If SSH blips, tmux keeps my scrollback. You know what? That little habit saved me during a bumpy network night when packets were playing hide and seek.

After the Reboot: My Quick Health Check

I don’t trust “it should be fine.” I check:

  • ping server.example.com (do I get replies?)
  • ssh server (does it accept keys fast?)
  • systemctl --failed (did anything fail to start?)
  • journalctl -b -1 -p err (errors from the last boot)
  • df -h (any filesystem oddities?)
  • free -h (memory looks normal?)
  • last -x | head (did it reboot when I think it did?)
  • app checks (curl the health endpoint, or hit the site)
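
When I’m in a hurry, I paste that sweep as one block. A rough sketch (the health URL is a placeholder for your app’s endpoint):

systemctl --failed --no-legend
journalctl -b -1 -p err --no-pager | tail -n 20
df -h && free -h
curl -fsS http://127.0.0.1/health && echo "app OK"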

Real example: On an Ubuntu 22.04 host, systemctl --failed showed a stray service that was stuck on a missing mount. I fixed fstab, rebooted again, and all green.

If you pop back in and suddenly see “too many open files” errors scattered through the logs, my notes on diagnosing that problem in Too Many Open Files on Linux—My Hands-On Take will walk you through a quick fix.

Logs look okay? Great. While you’re at it, double-check that the system clock is in the right zone—nothing confuses cron runs or monitoring graphs faster than drifting timestamps. If you need a refresher, here’s my field-tested guide to setting the time zone on Ubuntu.

When Things Are Stuck: The “Oh No” Tools

I don’t like these. But sometimes I need them.

  • sudo reboot -f
    This forces a reboot. It can skip some clean shutdown steps. I had to use it once when systemd froze during a storage hang. It worked, but I was nervous.

  • Magic SysRq (console access needed)
    If you’re on the console and the kernel is alive, this can save the box:
    While holding Alt+SysRq, press each key slowly: R E I S U B
    It tries to sync and reboot safely. I used it on a lab server with a crashing NIC driver. It felt old school, but it got me home.

Use force only when the clean way fails. Files can get sad if you yank the rug.

Don’t Reboot The Wrong Host (Yep, I Did)

I once typed sudo reboot on a shell where I meant to be on a staging box. Guess what? It was prod. My face went hot.

Now I:

  • put the hostname in my shell prompt
  • use SSH configs with clear names
  • run hostname and whoami before big moves

Simple stuff. But it saves you.

Reboot Timing: Drain First, Then Go

If it’s a web node behind a load balancer, I drain traffic first:

  • mark node out of the pool
  • wait for active sessions to drop
  • then reboot

For databases, I’m strict. I fail over first or stop services cleanly. No cowboy moves.

Handy Things I Reach For

  • wall "Rebooting in 10 minutes" (broadcast to users)
  • needrestart (on Debian/Ubuntu, helps see what needs restart)
  • cat /var/run/reboot-required (Ubuntu’s little flag)
  • Ansible: ansible all -a "reboot" -b (for fleets, staged by groups)
  • mosh (helps with shaky links, but I still pair with tmux)

Tiny digression: I love mosh on long flights. SSH drops mid-air. Mosh hangs in there.

My Simple Cheat Sheet

  • Clean, fast: sudo reboot
  • Schedule with message: sudo shutdown -r +10 "Rebooting for updates"
  • Cancel a scheduled one: sudo shutdown -c
  • Systemd flavor: sudo systemctl reboot
  • Force (last resort): sudo reboot -f
  • See failures after boot: systemctl --failed
  • See last boot errors: journalctl -b -1 -p err

Tape it to your monitor if you like. I did, for a while.

A Quick Walkthrough I Actually Did This Morning

Ubuntu 22.04 app VM, post-patch:

  1. tmux new -s patch
  2. sudo apt update && sudo apt full-upgrade -y
  3. if [ -f /var/run/reboot-required ]; then echo "needs reboot"; fi
  4. who; uptime; systemctl list-jobs
  5. sudo shutdown -r +3 "Rebooting in 3 minutes for kernel update"
  6. sip coffee; warn in chat; cancel if needed (sudo shutdown -c)
  7. let it reboot; wait 60 seconds
  8. ping; ssh back in
  9. systemctl --failed; journalctl -b -1 -p err; curl -f http://127.0.0.1/health

Time spent: about 8 minutes. No alerts. Felt nice.

Final Thoughts

Reboots aren’t scary. Rushed reboots are. Keep it clean, warn people, use tmux, and check your work after.

Puppy Linux Hardware Requirements: My Hands-On Story

I’m Kayla, and I’ve run Puppy Linux on a bunch of old machines I keep in a closet. I love seeing a “dead” laptop wake up. It’s a little silly, but I grin when the fan kicks in and the desktop pops up. You know what? It feels like finding five dollars in your coat. If you’re curious just how ancient a box you can revive, OSNews once profiled Puppy running comfortably on spruced-up Pentium II/III hardware—proof the pup really can fetch life from stone-age PCs.

Let me explain what Puppy really needs, and what actually happened on my gear.
If you enjoy digging into tales of breathing life into outdated PCs, you’ll find plenty more inspiration over at Freedom Penguin.

What Puppy Needs (for real)

  • CPU: A Pentium 4 or newer works. 32-bit or 64-bit is fine. There are builds for both.
  • RAM: 512 MB will boot. 1 GB is smoother. More is better if you load it all in RAM.
  • Storage: A 2 GB USB stick is enough to try it. For a “frugal install,” 512 MB to 4 GB free space feels safe.
  • Graphics: Intel, Nvidia, or ATI usually work with the open drivers. Worst case, it falls back to a basic driver.
  • Wi-Fi: Intel cards are easy. Broadcom can need extra firmware. I did need that once.

For the official bare-bones spec sheet—just a 64-bit single-core CPU and 1 GB of RAM—the latest F96 release notes confirm Puppy’s famously light footprint.

That’s the short version. Now the fun part. If you're after a blow-by-blow breakdown of why those numbers matter, I laid it all out in “Puppy Linux Hardware Requirements: My Hands-On Story.”

My Test Bench, aka the Junk Pile

The ThinkPad That Wouldn’t Quit (IBM T42)

  • Specs: Pentium M 1.7 GHz, 1.5 GB RAM, 40 GB HDD, Intel 2200BG Wi-Fi
  • Puppy used: BionicPup32

This old ThinkPad booted from a USB made with Rufus (on my main PC). It took about 40 seconds to reach the desktop. The Intel Wi-Fi worked out of the box. No drama. I made a 1 GB swap partition with GParted, and that stopped the rare freeze when I opened a few tabs in the browser.

Use case: Email, light web, local music. YouTube at 480p was fine. 720p felt choppy. Still, for a laptop from the flip-phone era? I was impressed.

The Tiny Netbook with Tiny Patience (ASUS Eee PC 1005HA)

  • Specs: Atom N280, 1 GB RAM, 160 GB HDD
  • Puppy used: XenialPup 7.5 (32-bit)

It’s slow with most systems. With Puppy, it felt… okay. Not fast. But okay. Boot from USB, do a frugal install, and save to a folder. The Ralink Wi-Fi chip connected right away. I turned off a few startup items and stuck to one browser tab. Docs and PDF reading felt snappy. It ran cool too, which that little fan appreciated. If you want to keep an eye on thermals, I’ve got a quick guide on how I check my processor temperature in Linux that works great on Puppy as well.

Pro tip: I unchecked “copy to RAM” at boot. That saved memory. It matters on 1 GB machines.

The Loud Desktop That Still Kicks (Dell Dimension 2400)

  • Specs: Pentium 4 2.66 GHz, 512 MB RAM, 40 GB HDD, Intel 865G graphics
  • Puppy used: Slacko 6.3 (from CD)

Yes, I booted from a CD. It worked. I added a 1 GB swap partition or it would stall. After that, file sharing and old games were smooth. I even printed a shipping label. The iGPU used a basic driver at first; then it picked the right one on the second boot. I didn’t touch a thing. That felt like luck, but I’ll take it.

A Modern Check, Just Because (ThinkPad X260)

  • Specs: i5-6300U, 8 GB RAM, SSD
  • Puppy used: FossaPup64

Boot time? Under 10 seconds. It ran from RAM, so apps popped like toast. Wi-Fi, sound, touchpad, sleep—everything just worked. This is overkill for Puppy, but it’s great as a quick rescue and backup tool. I keep a Puppy USB in my backpack now. It’s my Swiss Army knife. For heavier, security-focused distros I lean on some different rigs—here’s a rundown of the laptops I trust with Kali Linux if you’re curious.

RAM, Swap, and That “Runs in RAM” Magic

Puppy can load itself into RAM. That’s why it feels so fast. But it does need enough memory for that trick.

  • 512 MB RAM: It boots. Use a swap file or partition. Don’t load Puppy into RAM. Keep apps light.
  • 1 GB RAM: Good. You can load more into memory and still browse.
  • 2 GB+ RAM: Great. Load it, open apps, and forget you’re on a USB.

Swap saved me on two machines. I made a 1 GB swap with GParted. It stopped browser crashes cold.
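
If you’d rather skip GParted, a swap file does the same job. A quick sketch (1 GB here; use dd instead if fallocate isn’t supported on your old filesystem):

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile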

Storage: How Little Is “Little”?

The ISO is small. Around 300 to 450 MB depending on the build. For a frugal install and a savefile, I like having at least 2 to 4 GB free. A 2 GB USB stick works for testing. A 4 GB or 8 GB stick feels comfy for real use.

Here’s the thing: Using a “savefolder” on a normal partition is nicer than a fixed savefile. It grows as you need it.

Wi-Fi, Graphics, and Little Hiccups

  • Intel Wi-Fi: Plug and go, even on the old 2200BG card.
  • Broadcom Wi-Fi: I had one older HP laptop that showed no Wi-Fi at first. I opened Puppy Package Manager, searched “b43,” grabbed the firmware, and it sprang to life. Took five minutes.
  • Graphics: Intel iGPU is easy. On the Dell, it used a basic driver first, then the correct one later. On a GeForce 210 card, the open driver worked fine for 2D and video.

Sound worked on all my boxes. Only once did I open the ALSA wizard to bump the right device. Two clicks and done.

32-Bit, 64-Bit, and That PAE Word

Some old CPUs don’t handle PAE. My ThinkPad T42 is like that. So I used a non-PAE 32-bit Puppy. Newer machines do fine with PAE and 64-bit builds. If you boot and it stops with a weird CPU message, try a non-PAE build. Simple fix.

What It’s Actually Good For

  • Reviving old laptops for email, docs, and school work
  • A fast, clean rescue USB for backups and file recovery
  • A quiet media player for local music and SD video
  • Light dev work with Geany and a terminal (yes, I did a tiny Python script)

And yes, it can feel like a new lease on life for old gear. Not magic, but close.

Little Tips That Saved Me Time

  • Make a swap partition if you have 512 MB or 1 GB RAM.
  • If it feels slow, don’t load Puppy into RAM at boot.
  • Keep savefiles small; use savefolders when you can.
  • If Wi-Fi doesn’t show, check for firmware in the package manager.
  • For USBs, a 4 GB stick is a sweet spot. Cheap, too.

Final Take

Puppy Linux doesn’t ask for much. A Pentium-era CPU, 512 MB to 1 GB of RAM, and a tiny slice of storage. That’s it. On old hardware, it turns “ugh” into “usable.” On new hardware, it’s lightning.

I’ve run it on four very different machines, and I’d do it again tomorrow. If you’ve got a dusty laptop and a spare USB stick, give it a go. Worst case, you learn something. Best case, you get a happy little computer back. Honestly, that’s the best feeling.

I turned off IPv6 on my Linux boxes. Here’s how it went.

I’m Kayla, and I tinker a lot. Home lab, travel laptop, a couple servers. I don’t hate IPv6. But my ISP’s IPv6 was flaky. Pages paused. SSH felt sticky. Slack calls would spin. You know what? I got tired of “Hmm, maybe it’s DNS.” So I tried turning IPv6 off on a few machines and lived with it for a while. Full disclosure: the original, longer-form story lives over on Freedom Penguin as I turned off IPv6 on my Linux boxes—here’s how it went; what you’re reading here is my boiled-down field notes.

This is my honest take—what worked, what broke, and the exact steps I used.

For more Linux-focused anecdotes and how-tos, you can browse the community articles over at Freedom Penguin.

My setup (so you know I’m not guessing)

  • Home: Ubuntu 22.04 on a ThinkPad X1. NetworkManager. UFW on.
  • Work: RHEL 9 server on bare metal. Grub2. nftables.
  • VPS: Debian 12 on a cheap cloud box. No real IPv6 needed there, but I tested anyway.
  • Travel: Fedora 39 on a small Asus laptop. I often sit in hotels and coffee shops with weird Wi-Fi.

Different needs. Same pain: half-baked IPv6.


Method 1: Sysctl – quick switch, fast results

I used sysctl on Ubuntu and Debian. It’s simple, and I could flip it back fast.

  1. I made a file:
sudo nano /etc/sysctl.d/99-disable-ipv6.conf
  2. I put this in:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
# if you want loopback too:
net.ipv6.conf.lo.disable_ipv6 = 1
  3. I applied it:
sudo sysctl --system
  4. I checked it:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
# 1 means off
ip -6 addr
# should show nothing

For a distro-agnostic walkthrough that mirrors this sysctl approach, check out Hostinger's guide on disabling IPv6 on Linux.

What happened on Ubuntu 22.04? Web pages stopped stalling. SSH felt snappy. Slack calls no longer froze at the start. I know, sounds small, but small things stack up.

On my Debian 12 VPS, nothing dramatic. It just cleaned up the logs and made name lookups feel steady.

What I liked:

  • No reboot. Fast rollback.
  • Easy to script.
  • Very clear on/off state.

What bugged me:

  • Some apps still asked for AAAA records and got nothing. Not bad, just noisy.

Rollback:

  • Delete that file and run sudo sysctl --system. Or set the values to 0 and apply.

Method 2: Kernel flag – the hard “off” with GRUB

On my RHEL 9 server, I wanted a firm shut-off. So I used a kernel flag. This is more “final.”

  1. I edited GRUB:
sudo nano /etc/default/grub
  2. I added ipv6.disable=1 inside the line:
GRUB_CMDLINE_LINUX="ipv6.disable=1"
  3. I rebuilt GRUB:
  • RHEL/CentOS/Fedora (BIOS):

    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    
  • RHEL/CentOS/Fedora (UEFI):

    sudo grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
    
  • Ubuntu/Debian:

    sudo update-grub
    
  4. I rebooted. Then checked:
ip -6 addr
# nothing

Did it help? Yes. My rsync jobs stopped timing out on name lookups. Log noise dropped a lot. It felt calmer.

What I liked:

  • It’s the cleanest “off.” No half measures.
  • Survives every reboot without a thought.

What bugged me:

  • It’s global. If you use tools that love IPv6 (looking at you, Tailscale peers and some Kubernetes bits), they’ll complain. WireGuard v6 peers? They won’t work.

Rollback:

  • Remove ipv6.disable=1, rebuild GRUB, reboot.

If you're curious about how this kernel-level switch stacks up against other methods (like sysctl toggles or NetworkManager settings), OpenSource.com hosts a thorough comparison in their write-up on disabling IPv6.


Method 3: NetworkManager – per-Wi-Fi and perfect for travel

Hotels often have broken tunnels or weird captive portals. I don’t want a reboot there. So I turned off IPv6 just for that Wi-Fi.

With nmcli:

nmcli connection show
nmcli connection modify "MyHotelWiFi" ipv6.method "ignore"
nmcli connection up "MyHotelWiFi"

Check:

nmcli connection show "MyHotelWiFi" | grep ipv6.method

With the GUI (Fedora/Ubuntu):

  • Settings → Wi-Fi → gear icon → IPv6 → set to “Ignore.”

What changed? My browser stopped waiting for AAAA. Pages loaded straight away. Zoom calls got stable on sketchy hotel routers. It’s a nice, soft switch.

What I liked:

  • No reboot. Just the one network.
  • Easy to flip back when I’m home.

What bugged me:

  • You need to do it per network. Not hard, just one more step.

Rollback:

  • Set ipv6.method to “auto” and reconnect.
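
Spelled out, with the same profile name as before:

nmcli connection modify "MyHotelWiFi" ipv6.method auto
nmcli connection up "MyHotelWiFi"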

While you’re already poking at that connection profile, it never hurts to throw in a little privacy by changing your MAC address before jumping on sketchy guest networks—same toolset, same quick wins.


Bonus: Firewalls only, when you must keep IPv6

On one box, I had to keep IPv6 on for a lab test, but I still wanted to block it upstream. So I used nftables.

Simple block:

sudo nft add table ip6 filter
sudo nft add chain ip6 filter output '{ type filter hook output priority 0 ; }'
sudo nft add rule ip6 filter output drop

Careful: This blocks all IPv6 traffic. If that’s too strong, make rules per port or per address.
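
For instance, to keep link-local traffic alive (a lot of local discovery rides on it) while still dropping the rest, insert an accept ahead of the drop:

sudo nft insert rule ip6 filter output ip6 daddr fe80::/10 accept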

Rollback:

sudo nft flush ruleset

Quick tests I ran

  • See if it’s off:

    cat /proc/sys/net/ipv6/conf/all/disable_ipv6
    
  • See addresses:

    ip a | grep inet6
    
  • Ping with v6 only (should fail if off):

    ping -6 google.com
    
  • DNS checks:

    getent ahosts example.com | head
    
  • Firewall state:

    sudo nft list ruleset
    

While poking around, I also loaded a couple of bare-bones, HTML-only pages to make sure a straight IPv4 request completed quickly. It’s a simple real-world check that the IPv4 stack works end-to-end once IPv6 is off.

If you want an even clearer picture of how packets wander through the internet after the switch, spin up a quick mtr session—there’s a straightforward primer in MTR on Linux: my honest take from the terminal that walks through the flags.


What got better (for me)

  • Page loads stopped “thinking” first.
  • SSH was instant on bad Wi-Fi.
  • Logs got cleaner. Less AAAA noise.
  • Fewer weird pauses in Slack and Zoom.

It felt like a breath of fresh air on rough networks.


What broke (or got weird)

  • No access to v6-only hosts. Rare, but real.
  • LXD wanted IPv6 for some defaults. I had to tweak it.
  • Tailscale and some P2P tools prefer IPv6. They fell back, which used more relays.
  • Some local service discovery got iffy. Link-local tricks use v6 a lot.

Also, turning IPv6 off does not make you “safe.” Use a firewall anyway. On Ubuntu, I kept UFW on. On RHEL, I kept nftables rules tight.


Who should switch it off?

  • Folks on flaky ISP v6. You’ll feel the difference.
  • Laptops on hotel or guest Wi-Fi. Use the NetworkManager trick.
  • Simple servers that don’t need v6 at all. Sysctl or kernel flag works great.

Who should keep it?

  • Dual-stack shops. If your team runs Kubernetes, LXD, or heavy mesh gear, think twice.
  • Anyone who depends on v6-only services.

One unplanned side quest from all this late-night