Will data overflow into attached Block Storage if instance storage is full?
I have a Linode instance with a shared CPU, 8 GB of RAM, and the storage that comes with the plan, and I have attached a 120 GB Block Storage volume to it.
I want to know how storage works when the instance's built-in storage reaches its limit. Will the remaining data go directly into the attached Block Storage volume, or do I have to configure something?
When I check the details with df -h, it shows the following:
root@localhost:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           796M  964K  795M   1% /run
/dev/sda        157G   11G  139G   8% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           796M     0  796M   0% /run/user/0
And the fdisk -l command shows the following:
root@localhost:~# fdisk -l
Disk /dev/sda: 159.51 GiB, 171261820928 bytes, 334495744 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 512 MiB, 536870912 bytes, 1048576 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 120 GiB, 128849018880 bytes, 251658240 sectors
Disk model: Volume
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Thanks for the help!
If the built-in storage of your Linode instance fills up, any attempt to write more data to that disk will simply fail with a "No space left on device" error. Data is not automatically redirected to the attached Block Storage volume; you have to configure the volume and point your data at it yourself. In fact, your df -h output shows that the 120 GiB volume (/dev/sdc) isn't mounted at all yet, so nothing can be written to it until you create a filesystem on it and mount it.
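As a rough sketch, here's how you might set the volume up, assuming /dev/sdc is your 120 GiB volume (as your fdisk output suggests) and /mnt/blockstorage is just an example mount point. Double-check the device name before running mkfs, since it erases everything on the device:

# Create an ext4 filesystem on the volume (one time only; this
# destroys any existing data on /dev/sdc).
mkfs.ext4 /dev/sdc

# Create a mount point and mount the volume.
mkdir -p /mnt/blockstorage
mount /dev/sdc /mnt/blockstorage

# Make the mount persist across reboots. A /dev/disk/by-id/... path
# is more robust here than /dev/sdc, since device letters can change.
echo '/dev/sdc /mnt/blockstorage ext4 defaults,noatime 0 2' >> /etc/fstab

# Verify the volume now shows up with its 120G of free space.
df -h /mnt/blockstorage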
I typically recommend directing any sources of data that accumulate over time, such as log files and user-upload directories, to the Block Storage volume. This is because Block Storage volumes can be easily scaled up in size at any time, providing the flexibility needed to handle growing amounts of data.
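For example, here's a minimal sketch of relocating one such directory onto the mounted volume and symlinking it back, so applications keep using the old path. The path /var/www/uploads and the nginx service are hypothetical; substitute your own:

# Stop whatever writes to the directory first.
systemctl stop nginx

# Copy the data onto the Block Storage volume, preserving permissions.
rsync -a /var/www/uploads/ /mnt/blockstorage/uploads/

# Replace the original directory with a symlink to the new location.
mv /var/www/uploads /var/www/uploads.bak
ln -s /mnt/blockstorage/uploads /var/www/uploads

systemctl start nginx

# Once you've verified everything works:
# rm -rf /var/www/uploads.bak

And when the volume itself starts running low, you can resize it from the Cloud Manager (or the Linode CLI) and then grow the filesystem with resize2fs; ext4 filesystems can be grown while mounted.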