Reclaiming space from /dev and /run/shm
I am pretty sure we didn't create these disks, but together they are occupying nearly 16GB of space, leaving us with just 13GB available on /dev/root.
root@ubuntu:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       315G  287G   13G  96% /
devtmpfs        7.9G  4.0K  7.9G   1% /dev
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            1.6G  372K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            7.9G     0  7.9G   0% /run/shm
none            100M     0  100M   0% /run/user
Is there a way to resize /dev and /run/shm, and allocate some of that space to /dev/root where we need it?
2 Replies
Those are temporary filesystems residing in RAM. Here are a couple posts that I think provide a good explanation of them:
- What are “/run/lock” and “/run/shm” used for?
- What are /dev, /run and /run/shm and can I resize them?
If you wish to do so, you can resize them by copying the appropriate lines from /lib/init/fstab into /etc/fstab and tweaking the size as you see fit.
Before:
df -h
...
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       315G  287G   13G  96% /
devtmpfs        7.9G  4.0K  7.9G   1% /dev
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            1.6G  372K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            7.9G     0  7.9G   0% /run/shm
none            100M     0  100M   0% /run/user
After:
# added lines to /etc/fstab
none /dev devtmpfs,tmpfs size=512m,mode=0755 0 0
none /run/shm tmpfs nosuid,nodev,size=512m 0 0
# remount and check again
mount -o remount /dev; mount -o remount /run/shm
df -h
...
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       315G  287G   13G  96% /
devtmpfs        512M  4.0K  512M   1% /dev
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            1.6G  372K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            512M     0  512M   0% /run/shm
none            100M     0  100M   0% /run/user
You might notice from my output, however, that this didn't actually free up any space for /dev/root - that's because these RAM-resident filesystems don't share allocable space with it. So why is there a discrepancy in the df -h output? Using 287GB out of 315GB should leave me with 28GB - why am I only seeing 13GB?
The answer is that ext filesystems reserve 5% of their capacity by default. The purpose of this padding is to reduce fragmentation as the disk gets close to full. In this example, (315 - 287) - (315 * 0.05) = 12.25GB of free space, which df -h then rounds up to 13GB.
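If you want to verify that reserve on your own filesystem, tune2fs can report it; it can also shrink it, though that trades away the anti-fragmentation padding. A minimal sketch, assuming the root filesystem lives on /dev/sda (check yours with df or mount):

sudo tune2fs -l /dev/sda | grep -i 'reserved block count'   # current reservation, in blocks
sudo tune2fs -m 1 /dev/sda                                  # shrink the reserve from the default 5% to 1%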
Other than removing data (deleting or moving to a Block Storage volume), the only way to give more free space to the root filesystem in this case is to resize the Linode to a larger plan.
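If you go the data-removal route, something like the following can help spot the largest directories (a generic sketch, not specific to this system):

# -x stays on one filesystem, so tmpfs mounts and /proc are not counted
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20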
tmpfs is a file system which keeps all files in virtual memory.
Everything in tmpfs is temporary: no files will be created on your (non-swap) hard disk. However, if pages are swapped out, those pages take up swap space (which is generally located on a hard disk, either on a dedicated swap device or in a dynamic swap file). If you unmount a tmpfs instance, everything stored in the instance is lost.
tmpfs puts everything into the kernel's internal caches, growing and shrinking to accommodate the files it contains, and it is able to swap unneeded pages out to swap space (so the information shown by df -h is misleading).
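To see that behavior for yourself, you can mount a throwaway tmpfs instance (the mount point and size below are just for illustration):

sudo mkdir -p /mnt/tmpfs-demo
sudo mount -t tmpfs -o size=64m tmpfs /mnt/tmpfs-demo
dd if=/dev/zero of=/mnt/tmpfs-demo/testfile bs=1M count=16
df -h /mnt/tmpfs-demo         # "Used" grows, but no blocks are written to disk
sudo umount /mnt/tmpfs-demo   # and everything stored in it is gone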
You can clear these internal caches and swap with this simple script:
#!/bin/bash
# Prime sudo up front so the password prompt doesn't interrupt the chain below
sudo /bin/true
# Flush dirty pages, drop the page/dentry/inode caches, then cycle swap
# off and back on to pull swapped-out pages back into RAM
sudo sync && sudo /sbin/sysctl vm.drop_caches=3 >/dev/null 2>&1 && sudo swapoff -a && sudo swapon -a
free # report memory and swap usage afterwards
exit 0
It's not advisable to use this too often; otherwise you are continually upsetting the kernel's carefully constructed view of its (dynamic) workload.
The Linux kernel has a tunable called vm.swappiness, which controls how aggressively the kernel swaps memory pages out to disk (such as those allocated to tmpfs filesystems). Higher values increase that aggressiveness; lower values decrease it.
You can read more about this tunable here:
https://linuxize.com/post/how-to-change-the-swappiness-value-in-linux/
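For example (a minimal sketch; the value 10 here is only an illustration, not a recommendation for any particular system):

cat /proc/sys/vm/swappiness                              # check the current value (60 is the usual default)
sudo sysctl vm.swappiness=10                             # change it until the next reboot
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf   # persist the change across reboots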
Be careful.
-- sw