Why is my disk speed so slow?
Start: 2009-10-12 00:15:30
Create a 1GB file.
00:15:30 up 23:54, 1 user, load average: 0.18, 0.57, 0.62
1048576000 bytes (1.0 GB) copied, 217.374 seconds, 4.8 MB/s
00:19:08 up 23:58, 2 users, load average: 2.03, 1.36, 0.92
gzip of 1GB file
real 92.42
md5sum of gzip 1GB file
real 11.40
End: 2009-10-12 00:20:53
Only 4.8 MB/s? Why so low? This is a stock CentOS install that is not very active at all as I have not moved the sites over to it yet.
Any ideas what's going on?
[rjones@server2 ~]$ uname -a
Linux server2.xxxxxx.com 2.6.18.8-linode19 #1 SMP Mon Aug 17 22:19:18 UTC 2009 i686 i686 i386 GNU/Linux
8 Replies
@arjones85:
1048576000 bytes (1.0 GB) copied, 217.374 seconds, 4.8 MB/s
How exactly were you creating the file?
Here's the script in its entirety:
#! /bin/sh
# Author: chriss
# Date: 2008/09/13
# Simple benchmarks
TFILE=./ddtmp
TFILEGZ=${TFILE}.gz
SRCFILE=/dev/urandom
NULL=/dev/null
GREPBIN=`which grep`
WBIN=`which w`
HEADBIN=`which head`
MD5SUMBIN=`which md5sum`
TAILBIN=`which tail`
GZIPBIN=`which gzip`
DDBIN=`which dd`
DATEBIN=`which date`
TEEBIN=`which tee`
# Quick sanity checks
if [ ! "$GREPBIN" ]; then
echo "Utility grep not found. Exiting."
exit 1
fi
if [ ! "$WBIN" ]; then
echo "Utility w not found. Exiting."
exit 1
fi
if [ ! "$HEADBIN" ]; then
echo "Utility head not found. Exiting."
exit 1
fi
if [ ! "$MD5SUMBIN" ]; then
echo "Utility md5sum not found. Exiting."
exit 1
fi
if [ ! "$TAILBIN" ]; then
echo "Utility tail not found. Exiting."
exit 1
fi
if [ ! "$GZIPBIN" ]; then
echo "Utility gzip not found. Exiting."
exit 1
fi
if [ ! "$DDBIN" ]; then
echo "Utility dd not found. Exiting."
exit 1
fi
if [ ! "$DATEBIN" ]; then
echo "Utility date not found. Exiting."
exit 1
fi
if [ ! "$TEEBIN" ]; then
echo "Utility tee not found. Exiting."
exit 1
fi
# Do a simple benchmark
LOGFILE="./${0}.`$DATEBIN +%Y%m%d%H%M%S`.log"
echo "Start: `$DATEBIN "+%Y-%m-%d %H:%M:%S"`" | $TEEBIN -a $LOGFILE
echo "Create a 1GB file." | $TEEBIN -a $LOGFILE
$WBIN | $HEADBIN -1 | $TEEBIN -a $LOGFILE
$DDBIN if=$SRCFILE of=$TFILE bs=1024 count=1024000 2>&1 | $TAILBIN -1 | $TEEBIN -a $LOGFILE
$WBIN | $HEADBIN -1 | $TEEBIN -a $LOGFILE
echo "gzip of 1GB file" | $TEEBIN -a $LOGFILE
{ time -p $GZIPBIN $TFILE 1>$NULL 2>&1 ; } 2>&1 | $GREPBIN real | $TEEBIN -a $LOGFILE
echo "md5sum of gzip 1GB file" | $TEEBIN -a $LOGFILE
{ time -p $MD5SUMBIN $TFILEGZ 1>$NULL 2>&1 ; } 2>&1 | $GREPBIN real | $TEEBIN -a $LOGFILE
rm $TFILEGZ
echo "End: `$DATEBIN "+%Y-%m-%d %H:%M:%S"`" | $TEEBIN -a $LOGFILE
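As an aside, the nine near-identical `which` checks in the script above could be folded into one loop. A sketch, not from the original script, assuming a POSIX shell where `command -v` is available:

```shell
# Sketch only: verify every required utility in a single loop
# instead of nine copy-pasted if-blocks.
for util in grep w head md5sum tail gzip dd date tee; do
    if ! command -v "$util" >/dev/null 2>&1; then
        echo "Utility $util not found. Exiting."
        exit 1
    fi
done
```

This behaves the same way: the first missing utility prints a message and exits with status 1.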
Linode 360:
Start: 2009-10-12 19:36:35
Create a 1GB file.
19:36:35 up 98 days, 23:55, 3 users, load average: 0.00, 0.05, 0.09
1048576000 bytes (1.0 GB) copied, 219.067 s, 4.8 MB/s
19:40:14 up 98 days, 23:59, 3 users, load average: 1.54, 0.69, 0.33
Linode 540:
Start: 2009-10-12 18:00:24
Create a 1GB file.
18:00:24 up 5 min, 2 users, load average: 0.00, 0.00, 0.00
1048576000 bytes (1.0 GB) copied, 145.728 s, 7.2 MB/s
18:02:50 up 7 min, 2 users, load average: 1.47, 0.64, 0.24
Looks as if Stever may be right - in this admittedly limited test, the disk I/O rate was definitely better with fewer 'nodes on the host. Does anyone with a bigger Linode have time to run the test?
@arjones85:
SRCFILE=/dev/urandom
(…)
$DDBIN if=$SRCFILE of=$TFILE bs=1024 count=1024000 2>&1 |
I think you will find that /dev/urandom is the speed limiting factor in this test. Quite a few CPU cycles go into reading from that device.
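One quick way to check this (a sketch, not from the thread; the filenames are arbitrary and `conv=fdatasync` assumes a reasonably recent GNU dd) is to time a small write from each source:

```shell
# Compare throughput with a CPU-cheap source (/dev/zero) vs the
# kernel RNG (/dev/urandom). 64 MB keeps the runs short, and
# conv=fdatasync flushes the data to disk so the page cache
# doesn't inflate the /dev/zero number.
dd if=/dev/zero    of=./zero.tmp bs=1M count=64 conv=fdatasync 2>&1 | tail -1
dd if=/dev/urandom of=./rand.tmp bs=1M count=64 conv=fdatasync 2>&1 | tail -1
rm -f ./zero.tmp ./rand.tmp
```

If the RNG is the bottleneck, the /dev/urandom figure will track CPU speed while the /dev/zero figure tracks the disk - which matches the 30.6 MB/s result reported below.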
Guzpaz - yep - using /dev/zero pumps the disk I/O rate up to 30.6 MB/s on a 360.
hdparm -tT /dev/xvda