Resize Error: Insufficient space for resize operation
I have a 50GB Linode which has filled up due to some files I'm unable to find. I tried many different methods.
I followed the instructions and shut down the Linode to resize the storage, but whatever number I tried I got the error message:
Insufficient space for resize operation
I tried 60GB, 70GB, 80GB and also 100GB.
What could be the problem?
17 Replies
✓ Best Answer
Here are some excerpts from the df man page:
-h    “Human-readable” output. Use unit suffixes: Byte, Kibibyte, Mebibyte, Gibibyte, Tebibyte and Pebibyte (based on powers of 1024) in order to reduce the number of digits to four or fewer.
-i    Include statistics on the number of free and used inodes. In conjunction with the -h or -H options, the number of inodes is scaled by powers of 1000.
So, df -h shows you've only used 23% of the space available on /dev/sda. You have plenty of physical space available.
df -i shows you've used 100% of the inodes available in the filesystem mounted on /dev/sda (i.e., /). Inodes are the structures the filesystem uses to keep track of files and the disc blocks allocated to them:
https://en.wikipedia.org/wiki/Inode
This may give you some insights:
https://serverfault.com/questions/801761/ext4-running-out-of-inodes
Note the post about 0-byte files…they use an inode! Before you rebuild your Linode, try looking for 0-byte files you don't need and remove them. That will free up some inodes. See:
https://stackpointer.io/unix/unix-linux-find-delete-zero-byte-files/558/
If you boot to Rescue Mode, you can do this without any of the annoying "out of space" messages. find(1) and rm(1) are both available in the Rescue Mode OS. You shouldn't need anything else. You will have to do this at the lish/glish console, however.
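From Rescue Mode, the whole thing is roughly this (just a sketch…the mount point name is arbitrary, and review the list before deleting anything):

mkdir -p /media/sda && mount /dev/sda /media/sda   # mount your Linode's disk somewhere
find /media/sda -xdev -type f -size 0 -print       # list zero-byte regular files
find /media/sda -xdev -type f -size 0 -delete      # ONLY after you've reviewed the list
umount /media/sda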
Once you've freed some inodes, you can boot back to your normal OS and install/run ncdu for a more thorough examination/cleaning. I don't know what services your Linode is providing, but I would leave them turned off until you are done with this process.
Since you have this:
/dev/loop0 10803 10803 0 100% /snap/core18/2074
/dev/loop2 10803 10803 0 100% /snap/core18/2128
/dev/loop3 796 796 0 100% /snap/lxd/21545
/dev/loop4 1602 1602 0 100% /snap/lxd/21029
/dev/loop5 474 474 0 100% /snap/snapd/12883
/dev/loop1 11720 11720 0 100% /snap/core20/1081
/dev/loop7 474 474 0 100% /snap/snapd/13170
I'm assuming that you're using snapd on Ubuntu. IMHO, this is probably the culprit. Also, IMHO, you should probably stop doing that:
https://thenewstack.io/canonicals-snap-great-good-bad-ugly/
Like systemd, snapd is an overblown solution to a problem that doesn't exist…or, rather, only exists in Mark Shuttleworth's mind (the above article's conclusion notwithstanding…).
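If you decide to ditch it, the rough sequence is (a sketch…remove the app snaps first, then the base snaps, then the daemon; lxd/core18/core20 are the names from your df output):

snap list                          # see what's installed via snap
sudo snap remove lxd               # only if you're not actually using lxd
sudo snap remove core20 core18     # base snaps go after the apps that use them
sudo apt purge snapd               # then remove snapd itself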
-- sw
You need to upgrade to a bigger plan; you can't resize a disk beyond the storage included with your current plan. Alternatively, you can add block storage…any size you want.
What did you do to try to find the files?
-- sw
I searched everywhere and found that the server does not have more than 14GB utilization; however, whenever I try to upload or update something it gives a "no space available" error.
This is df -i output:
root@localhost:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 243133 404 242729 1% /dev
tmpfs 254422 670 253752 1% /run
/dev/sda 3168000 3168000 0 100% /
tmpfs 254422 13 254409 1% /dev/shm
tmpfs 254422 3 254419 1% /run/lock
tmpfs 254422 18 254404 1% /sys/fs/cgroup
/dev/loop0 11720 11720 0 100% /snap/core20/1081
/dev/loop2 10803 10803 0 100% /snap/core18/2128
/dev/loop1 10803 10803 0 100% /snap/core18/2074
/dev/loop3 1602 1602 0 100% /snap/lxd/21029
/dev/loop5 474 474 0 100% /snap/snapd/13170
/dev/loop4 96000 1145 94855 2% /tmp
/dev/loop6 796 796 0 100% /snap/lxd/21545
/dev/loop7 474 474 0 100% /snap/snapd/12883
tmpfs 254422 22 254400 1% /run/user/0
You can see that it shows /dev/sda is 100% full. I cleared all logs, unused kernel images, all sorts of things, but nothing worked.
Strangely, du -h gives different results:
root@localhost:~# du -h --max-depth=1 -x /
4.0K /srv
4.0K /cdrom
16K /lost+found
4.0K /mnt
28K /snap
2.3G /home
4.8G /usr
4.0K /opt
212M /boot
1.8G /var
50M /root
4.0K /media
7.9M /etc
11G /
The first thing I would do is remove unneeded/unused packages: sudo apt autoremove.
I would drill down on /boot, /var, /root and /home with find(1):
https://linuxconfig.org/how-to-use-find-command-to-search-for-files-based-on-file-size
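Something along these lines will surface the biggest offenders (a sketch…tweak the size threshold to taste):

find /boot /var /root /home -xdev -type f -size +100M -exec du -h {} + | sort -h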
If your system is Debian/Ubuntu, you can use ncdu as an alternative. You can install it with sudo apt-get install ncdu.
You'll have to run it as root to be able to access directories that don't belong to you.
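Then point it at the root filesystem:

sudo ncdu -x /        # -x keeps the scan from crossing onto other filesystems
                      # arrow keys to navigate, 'd' to delete the selected item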
I'll bet /boot has unneeded/outdated kernel files. I'll bet /root is full of core dumps from cron jobs or snapd failures. I'll bet /var is full of tracks left over from software you may have tried out and abandoned (unneeded packages live here). I'll bet /home is full of old email, personal photos, etc. that may belong to your customers (you're not operating a free storage service).
Standard warnings about removing stuff apply: DANGER WILL ROBINSON! DANGER! DANGER!
Once you get this mess cleaned up, you should institute some sort of scan/auto-removal policy to prevent this from happening again.
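Even a dumb daily cron job that yells at you when inodes run low would do. A sketch (the 90% threshold and the mail bit are placeholders…adjust to taste):

#!/bin/sh
# /etc/cron.daily/inode-check -- hypothetical example, not an official script
THRESHOLD=90
USE=$(df -i / | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$USE" -ge "$THRESHOLD" ]; then
    echo "inode usage on / is at ${USE}%" | mail -s "inode warning" root
fi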
-- sw
Thanks for the detailed reply.
Unfortunately I have already tried all the above.
sudo apt autoremove
sudo apt clean
Core dumps filling up the disk:
find / -xdev -name core -ls -o -path "/lib*" -prune
Unnecessary packages filling up the space:
apt-get autoremove --purge
Outdated kernel packages:
dpkg -l "linux{tools}" | grep ^.i
The only thing I have not tried is ncdu, but if I try to install it I will get the "not enough space available" error.
Even attaching block storage is of no use; it does not show anything and the same problem continues.
I just noticed that you don't have a /tmp directory. This may be part of your problem… While you're at it you might verify that /var/tmp exists as well.
Do you have a swap partition or do you use dynamic swap? That will have an impact on the usage for /. If you use dynamic swap, you may want to reconfigure your Linode to use a fixed swap partition.
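Quick ways to check the temp directories and the swap setup (nothing Linode-specific here):

ls -ld /tmp /var/tmp      # both should exist with the sticky bit set (drwxrwxrwt)
swapon --show             # or: cat /proc/swaps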
As for ncdu, you can spin up some block storage, download the ncdu source and build it on the block storage. If you don't want to do that, you can download the static x86_64 binary to the block storage and run it from there. Both are available at https://dev.yorhel.nl/ncdu.
Block storage is super-cheap…$1/mo per 10GB. The only downside is that it is slower than the "main" storage that you are given with your plan. For what you need to do, that's not going to matter much.
You can also boot to Rescue Mode, install the static binary of ncdu and run it that way. Rescue Mode boots a specialized, memory-only Linux distro with a RAMdisk. You can mount your Linode's /dev/sda to some mount point in your rescue Linux OS, install ncdu to your RAMdisk and scan away. See:
https://www.linode.com/docs/guides/rescue-and-rebuild/
Rescue Mode was designed for situations such as yours.
The block storage method is going to be preferable if you want your Linode to provide whatever services it provides while you're doing this. The Rescue Mode method is going to be preferable if you can afford your Linode to be down.
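The Rescue Mode version looks roughly like this (a sketch…grab whatever the current static x86_64 tarball is from https://dev.yorhel.nl/ncdu, since the filename changes with the version):

mkdir -p /media/sda && mount /dev/sda /media/sda   # mount your Linode's disk somewhere
# unpack the static ncdu tarball in the RAMdisk, then:
./ncdu -x /media/sda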
-- sw
Hi sw, thank you for the detailed reply.
I do have /tmp, and /var/tmp exists as well.
I have a swap with 512MB storage; this is what the Linode storage dashboard shows.
Also
>swapon -s
Filename Type Size Used Priority
/cyberpanel.swap file 1511420 134400 -2
/dev/sdb partition 524284 0 -3
I have attached 10GB of block storage by following the config commands from the Linode Storage area, but I don't see the attached storage on the server, even though Linode says it is attached to my Linode.
1. Add it to the config you used to boot your Linode. This will make sure it's attached with an entry in /dev the next time you boot your Linode (i.e., /dev/sdX where X is b, c, d, etc). The Cloud Manager may have already done this so don't worry if it's already there. I just don't remember…
2. Add it to your /etc/fstab so Linux will mount /dev/sdX to your chosen mount point at boot. See: https://www.redhat.com/sysadmin/etc-fstab
At this point, you can either reboot or:
3. Create your mount point and mount the device:
sudo mkdir -p /mnt    # the mount point can be any name you want,
                      # as long as it's legal in your filesystem;
                      # it should be the same as what you used in
                      # /etc/fstab above
sudo mount /dev/sdX /mnt
You should see the contents of your block storage in your filesystem under /mnt (there should be nothing there).
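A quick sanity check once it's mounted:

df -h /mnt      # should show a ~10GB, mostly-empty ext4 filesystem
lsblk           # shows which /dev/sdX the volume actually came up as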
-- sw
Thanks again for the detailed reply.
From Linode Dashboard>Storage>Volume Config:
I did the following:
To get started with a new volume, you'll want to create a filesystem on it:
Create a Filesystem
mkfs.ext4 "/dev/disk/by-id/scsi-0Linode_Volume_ten"
Once the volume has a filesystem, you can create a mountpoint for it:
Create a Mountpoint
mkdir "/mnt/ten"
Then you can mount the new volume:
Mount Volume
mount "/dev/disk/by-id/scsi-0Linode_Volume_ten" "/mnt/ten"
If you want the volume to automatically mount every time your Linode boots, you'll want to add a line like the following to your /etc/fstab file:
Mount every time your Linode boots
/dev/disk/by-id/scsi-0Linode_Volume_ten /mnt/ten ext4 defaults,noatime,nofail 0 2
The above is the only step I have not done.
I also ran the mount command you suggested: sudo mount /dev/sdX /mnt/ten
Do I need to restart for it to take effect, or should it show immediately?
Do I need to restart for it to take effect, or should it show immediately?
If you did everything correctly, you should see the mounted volume immediately. Do df. You should see an entry for /mnt/ten somewhere in the list.
See: https://devconnected.com/how-to-mount-and-unmount-drives-on-linux/
Come to think of it, you may not have to change /etc/fstab at all. Whatever magic happens when you set the volume name in the Linode boot configuration may have taken care of that for you.
It's been a long time since I've done this… Here's the official Linode guide:
https://www.linode.com/docs/products/storage/block-storage/guides/add-volume/
-- sw
Thanks again,
I have the attached volume in /mnt/ten
However, my main problem is /dev/sda, which is 100% full and not letting me add anything or do any website management.
root@localhost:/# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 243133 413 242720 1% /dev
tmpfs 254422 687 253735 1% /run
/dev/sda 3168000 3168000 0 100% /
tmpfs 254422 13 254409 1% /dev/shm
tmpfs 254422 3 254419 1% /run/lock
tmpfs 254422 18 254404 1% /sys/fs/cgroup
/dev/loop0 11720 11720 0 100% /snap/core20/1081
/dev/loop2 10803 10803 0 100% /snap/core18/2128
/dev/loop1 10803 10803 0 100% /snap/core18/2074
/dev/loop3 1602 1602 0 100% /snap/lxd/21029
/dev/loop5 474 474 0 100% /snap/snapd/13170
/dev/loop4 96000 5967 90033 7% /tmp
/dev/loop6 796 796 0 100% /snap/lxd/21545
/dev/loop7 474 474 0 100% /snap/snapd/12883
tmpfs 254422 22 254400 1% /run/user/0
You can see that I have attached the extra volume to /dev/sda:
root@localhost:~# findmnt /dev/sda
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda ext4 rw,relatime
/mnt/ten /dev/sda ext4 rw,relatime
Yet the /dev/sda is full.
What you've done is give two different names to /dev/sda:
- /; and
- /mnt/ten.
Both of these point to the same physical device.
You mounted the wrong device to /mnt/ten. Your block storage device should have a different entry in /dev.
First, remove the mount of /dev/sda to /mnt/ten:
sudo umount /mnt/ten
In Cloud Manager, click on "Configurations" for your Linode. Edit the operative configuration for your Linode (you probably only have one). Scroll down a bit. You should see something like (I can't post pictures here so I'm trying to make do with text):
Block Device Assignment
/dev/sda
…Add a Device
You need to click Add a Device. The device you add will be the label you gave your block storage volume when you created it (most likely "ten"…note that this will be listed under Volumes and your main disk will be listed under Disks). This is a drop-down, so you'll probably only have one choice here.
Assign that device to /dev/sdb (the Cloud Manager will probably do this for you). After you restart your Linode, your block storage device will be known to Linux as /dev/sdb. If you watch the boot messages on the weblish console, you can see Linux attach the block storage volume as /dev/sdb.
Once you can log in, you can then mount the block storage volume to the mount point you created:
sudo mount /dev/sdb /mnt/ten
After that you'll see /mnt/ten in the df list (n.b., I prefer df -h; it gives the numbers in human-readable units). The size should be ~10GB (it'll be slightly smaller because the ext4 filesystem has some overhead).
When you modify /etc/fstab, it should look like this:
/dev/sdb /mnt/ten ext4 rw,relatime 0 0
Making this entry will cause Linux to mount /dev/sdb (your block storage volume) at mount point /mnt/ten when the system boots up.
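You can test that entry without rebooting:

sudo mount -a         # mounts everything in /etc/fstab that isn't already mounted
df -h /mnt/ten        # should now show the ~10GB block storage volume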
-- sw
P.S. I use "raw" discs so my setup is a bit different than this…ergo, I'm doing this from memories of a time long, long ago in a galaxy far, far away. I apologize in advance for any mistakes…
Thank you for the reply.
I unmounted /mnt/ten and mounted /dev/sdc to it instead.
I edited /etc/fstab and added the line as you mentioned:
/dev/sdc /mnt/ten ext4 rw,relatime 0 0
However, df -h shows the usage of /dev/sda (as shown below):
root@localhost:/dev# df -h
Filesystem Size Used Avail Use% Mounted on
udev 950M 0 950M 0% /dev
tmpfs 199M 1000K 198M 1% /run
/dev/sda 49G 11G 36G 23% /
tmpfs 994M 152K 994M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2074
/dev/loop2 56M 56M 0 100% /snap/core18/2128
/dev/loop3 68M 68M 0 100% /snap/lxd/21545
/dev/loop4 71M 71M 0 100% /snap/lxd/21029
/dev/loop5 33M 33M 0 100% /snap/snapd/12883
/dev/loop1 62M 62M 0 100% /snap/core20/1081
/dev/loop7 33M 33M 0 100% /snap/snapd/13170
/dev/loop6 1.5G 13M 1.4G 1% /tmp
tmpfs 199M 0 199M 0% /run/user/0
But df -i shows a completely different result:
root@localhost:/dev# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 243132 413 242719 1% /dev
tmpfs 254421 679 253742 1% /run
/dev/sda 3168000 3168000 0 100% /
tmpfs 254421 13 254408 1% /dev/shm
tmpfs 254421 3 254418 1% /run/lock
tmpfs 254421 18 254403 1% /sys/fs/cgroup
/dev/loop0 10803 10803 0 100% /snap/core18/2074
/dev/loop2 10803 10803 0 100% /snap/core18/2128
/dev/loop3 796 796 0 100% /snap/lxd/21545
/dev/loop4 1602 1602 0 100% /snap/lxd/21029
/dev/loop5 474 474 0 100% /snap/snapd/12883
/dev/loop1 11720 11720 0 100% /snap/core20/1081
/dev/loop7 474 474 0 100% /snap/snapd/13170
/dev/loop6 96000 889 95111 1% /tmp
tmpfs 254421 22 254399 1% /run/user/0
This problem is so persistent and I'm not able to solve it. I'm thinking about firing up one more Linode of similar size and migrating the data to that Linode.
Thank you very much for your reply. This really helped and I was able to free up 10% of the inodes, and that error message is not coming any more. I will install ncdu and see further what is consuming the rest of the 90% of inodes. I read here https://unix.stackexchange.com/questions/406534/snap-dev-loop-at-100-utilization-no-free-space that having snapd is alright and it is supposed to work the way it is working?
Thank you very much for your reply. This really helped and I was able to free up 10% of the inodes, and that error message is not coming any more.
That's great!
I will install ncdu and see further what is consuming the rest of the 90% of inodes.
Another step forward!
I read here https://unix.stackexchange.com/questions/406534/snap-dev-loop-at-100-utilization-no-free-space that having snapd is alright and it is supposed to work the way it is working?
My problem with snapd is that Ubuntu is the only platform that uses it widely. It's a mechanism that Canonical uses to lock you into using Ubuntu. It's also a solution in search of a problem…
If all that is ok with you, that's fine with me. It wouldn't be ok with me.
-- sw
Thank you for all the help in resolving this issue. Actually, the 10GB block storage that I attached is not required at all; it was only the inodes that were causing the problem.
I further investigated using this command:
find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
and found out that one cache folder and PHP sessions were using all these inodes.
Clearing the PHP sessions brought inode usage down to 10%.
Thank you.