Speed of Block Storage
We just added a Block Storage Volume but are wondering about the speed.
First, when we synced our data using rsync, it took around 75 minutes to sync 149GB.
Even doing a simple du of the directory takes forever compared to the server storage:
The block storage is mounted on /home/test/media
and the previous directory has been renamed to /home/test/media_old
root@production-server:/home/test# time du media -hsc
150G media
150G total
real 4m21.515s
user 0m1.432s
sys 0m8.336s
root@production-server:/home/test# time du media_old -hsc
149G media_old
149G total
real 0m34.800s
user 0m1.085s
sys 0m7.484s
That's roughly a 7.5x slowdown. I was expecting some overhead since it's external storage, but that much seems excessive.
8 Replies
Hi Peter,
While it's expected to see a performance difference between internal storage and Block Storage, especially given the difference between the SSD storage Linodes have and the NVMe disks that make up Block Storage, the gap here seems a bit large. Typically you can expect up to 150MB/s and up to 5K IOPS on Block Storage volumes. The rsync results you mentioned seem to be clocking in at about 41.38MB/s, although there are some other factors that could be at play here. Was this data rsync'd locally from a directory to the Volume? Or was this from another Linode or a remote server?
Additionally, while the configuration of our Block Storage service allows for fast write speeds, read speeds can often be somewhat slower. This may account for the larger difference you're seeing in 'du'.
If you’d like to open a Support Ticket, we’ll be happy to continue troubleshooting and dig a little deeper into this with you. The output of the following commands would be really helpful to us:
# This will give us a benchmark for write speeds
dd if=/dev/zero of=blockstorage.test bs=4M count=4000
# This will give us a benchmark for read speeds
dd if=blockstorage.test of=/tmp/blockstorage.test bs=4M
# This will give us an idea of general system stats
iostat 1 10
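One caveat on those dd numbers: without any sync or direct-I/O flags, the results can be inflated by the page cache, especially the read test, since the file will have just been written. If you'd like figures closer to the Volume itself, variants along these lines should give a better picture (the exact flags here are just a suggestion rather than an official benchmark):
# Write test again, but force the data to be flushed to the Volume before dd reports a speed
dd if=/dev/zero of=blockstorage.test bs=4M count=4000 conv=fdatasync
# Read test that bypasses the page cache so the data comes from the Volume rather than RAM
dd if=blockstorage.test of=/dev/null bs=4M iflag=direct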
Thanks for the quick response.
The rsync was done locally. The Block Storage Volume was attached to the server and the old directory was synced to the directory the Volume is mounted on.
I'll open a support ticket and add the output of the benchmarks as well.
I wanted to follow up on the benchmarks that @thorner posted above:
Typically you can expect up to 150MB/s and up to 5K IOPS on Block Storage volumes.
We've recently made some changes to Block Storage that affect the expected IOPS. The updated benchmarks are:
For short I/O bursts of less than 60 seconds, customers can expect to see 1500 IOPS. For longer running I/O bursts, you should expect to see lower IOPS.
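If you'd like to see that burst behavior on your own Volume, a quick fio run along these lines makes it fairly visible; the job parameters below are only illustrative (adjust the filename to wherever your Volume is mounted), not an official benchmark configuration:
# Illustrative only: 4k random reads for ~3 minutes, long enough to run past the
# burst window, so you can watch IOPS settle from the burst rate to the sustained rate
fio --name=burst-check --filename=/path/to/your/volume/fio.test \
    --size=1G --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 \
    --runtime=180 --time_based --group_reporting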
I explain why in more detail in the following post:
This is what I get for rsync -a --info=progress2 from block volume to local volume:
286,505,913,367 99% 19.43MB/s 3:54:20
I wanted to add my experiences as well, and I'm hoping that what I learn will help me in my current system design.
I created a 10GB Volume and ran the following three times from inside the Block Storage mount point. (Note: the original command above had count=4000, but since I don't have 16GB of space, I halved it.)
dd if=/dev/zero of=blockstorage.test bs=4M count=2000
Here are the results:
2000+0 records in
2000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 18.5814 s, 451 MB/s

2000+0 records in
2000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 25.5947 s, 328 MB/s

2000+0 records in
2000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 41.7827 s, 201 MB/s
Notice that on the first run the file blockstorage.test doesn't yet exist, while it does on the subsequent runs and gets overwritten.
A few things here are interesting and curious at the same time. If I run that command three more times but delete blockstorage.test before each run, I consistently get more than 400 MB/s.
I'm wondering if I should write a script to check if the file exists first, delete it, and then write it anew rather than overwriting it.
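Roughly something like this sketch is what I have in mind:
# Sketch: remove the test file if it already exists, then write it fresh each run
[ -f blockstorage.test ] && rm blockstorage.test
dd if=/dev/zero of=blockstorage.test bs=4M count=2000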
If I keep running the command, overwriting the file each time, it eventually stabilizes at around 250 MB/s on average for subsequent writes.
What could cause the lack of consistency? I'm not unhappy with 200 MB/s, but 400+ is almost the same as my main hard drive. Is there anything I can do to achieve the higher speeds more consistently?
As far as iostat goes, resource usage is virtually nil:
Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdc               0.00         0.00         0.00          0          0
sda              13.00         0.00       128.00          0        128
sdb               0.00         0.00         0.00          0          0
loop0             3.00         0.00        12.00          0         12

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.67    0.00    1.17    0.00    0.00   98.16
Since my earlier comment above, we've made some great improvements to our Block Storage offering. You can find out more details on that here: https://www.linode.com/community/questions/19437/does-a-dedicated-cpu-or-high-memory-plan-improve-disk-io-performance#answer-78348
Is there any throttling applied to Block Storage depending on the usage pattern?
I'm seeing around 150MB/s when benchmarking with hdparm -tT, but real-life use with PostgreSQL doing a large index scan maxes out around 5MB/s and 600 tps (per iostat). The PostgreSQL server was running some other queries in the background while measuring this, but not a significant amount, on the order of 30 tps / 0.1MB/s. The iowait during the index scan fluctuated around 26-28% (4 vCPUs).
Would this be around the expected figures for database load patterns?
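For what it's worth, hdparm -tT mostly measures large sequential (and partly cached) reads, while an index scan leans toward small random reads; 600 transfers/s at PostgreSQL's 8kB page size works out to roughly 5MB/s, so those two figures are at least consistent with a random-read pattern. A random-read benchmark such as the hypothetical fio job below (the parameters are just my guess at approximating that access pattern, not a standard test) would probably be a fairer comparison than hdparm here:
# Rough approximation of an index-scan style workload: small random reads
# (8k to roughly match PostgreSQL's page size) issued one at a time, like a single backend
fio --name=randread-8k --filename=/path/to/your/volume/fio.test \
    --size=1G --rw=randread --bs=8k --direct=1 \
    --ioengine=libaio --iodepth=1 \
    --runtime=60 --time_based --group_reporting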