How can I test the bandwidth on my Linode?
I can see the total bandwidth for my Linode under Network In and Network Out on the Linode Pricing page. How can I test this bandwidth myself?
3 Replies
You can certainly test the network throughput of your Linode, though the results will vary greatly depending on your network and any other external networks the traffic goes through.
Bandwidth vs throughput
First, as a bit of background, bandwidth is sometimes used synonymously with the term throughput, though there is a difference. Bandwidth is the theoretical limit of the network capacity. Think of bandwidth as the amount of water theoretically able to go through a particular segment of pipe and throughput as the amount of water that is actually able to go through the entire system of piping at a particular time. Because of this, bandwidth is relatively static and generally only changes when networking equipment is upgraded or modified. In fact, we can see the bandwidth of a Linode on that Pricing page, like you mentioned. Throughput, however, changes based on lots of factors, including the size of the data, amount of data, and the bandwidth and load of all networks that the data is traveling through.
Using iPerf
You can test the network throughput to or from your Linode by using a tool called iPerf. We have a great guide on using iPerf here: Network Throughput Testing with iPerf.
Essentially, you'll want to install iPerf on your Linode and another machine outside of our network. This could be your home machine, though for quick tests you might want to skip this step and use one of the public iPerf servers listed at iperf.cc.
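iPerf is usually available straight from your distribution's package manager. For example, assuming a Debian- or Ubuntu-based system (package names can differ on other distributions):
sudo apt install iperf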
If using a public iPerf server, run the following on your Linode, replacing ping.online.net with the hostname of the server you want to test against:
iperf -c ping.online.net -d -t 30 -i 10
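As an aside, a single TCP stream sometimes can't fill a fast link on its own. If your numbers look low, you can try parallel streams with iPerf's -P flag (the choice of 4 streams here is arbitrary):
iperf -c ping.online.net -t 30 -i 10 -P 4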
If using your own machine as well as your Linode, run the following commands:
On your Linode: iperf -s
On your external machine: iperf -c $linode-ip -d -t 30 -i 10, replacing $linode-ip with the IP address of your Linode.
Note that iperf listens on TCP port 5001 by default (as you can see in the server output further down), so any firewall on your Linode needs to allow that port.
These commands run a bidirectional test for 30 seconds (-t 30), printing interim results every 10 seconds (-i 10). You can adjust these settings as necessary.
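If you care about jitter and packet loss rather than raw TCP throughput, iPerf can also run the same test over UDP. A minimal sketch, again with your Linode as the server (the 100M target bandwidth is just an illustrative value):
On your Linode: iperf -s -u
On your external machine: iperf -c $linode-ip -u -b 100M -t 30 -i 10
The server-side report will then include jitter and lost/total datagram counts alongside the transfer numbers.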
Keep in mind, the speeds you see in iPerf will certainly be less than the total bandwidth of your Linode. That's completely expected!
It's a good idea to measure both bandwidth and latency. iperf 2.0.14a adds latency measurements via the --trip-times option. Note that it does require clock synchronization between the client and the server.
iperf 2.0.14 found here
iperf 2.0.14 man page here
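Since --trip-times relies on both clocks agreeing, it's worth checking synchronization on each host before testing. On systemd-based distributions, for example:
timedatectl
Look for "System clock synchronized: yes" in the output; if it reads no, an NTP daemon such as chrony can bring the clocks close enough.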
Here is an example:
[rjmcmahon@localhost iperf2-code]$ src/iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.10%enp2s0 port 5001 connected with 192.168.1.62 port 47790 (MSS=1448) (trip-times) (sock=4) (peer 2.0.14-alpha)
[ ID] Interval Transfer Bandwidth Burst Latency avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 1] 0.00-1.00 sec 1.09 GBytes 9.34 Gbits/sec 2.990/1.007/3.716/0.371 ms (8907/131083) 3.34 MByte 390539 18552=2380:2537:2636:2482:2250:2158:1919:2190
[ 1] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 3.001/2.322/3.675/0.348 ms (8979/131060) 3.37 MByte 392185 19320=2659:2821:2790:2622:2340:2166:1840:2082
[ 1] 2.00-3.00 sec 1.10 GBytes 9.41 Gbits/sec 3.002/2.308/3.733/0.348 ms (8978/131077) 3.37 MByte 392060 18635=2447:2683:2540:2308:2152:2149:2096:2260
[ 1] 3.00-4.00 sec 1.10 GBytes 9.41 Gbits/sec 3.003/2.321/3.710/0.348 ms (8979/131066) 3.37 MByte 391940 18625=2350:2641:2582:2355:2258:2250:2049:2140
[ 1] 4.00-5.00 sec 1.10 GBytes 9.42 Gbits/sec 3.003/2.306/3.669/0.349 ms (8978/131084) 3.37 MByte 391956 19504=2550:2941:3029:2662:2300:2176:1927:1919
[ 1] 5.00-6.00 sec 1.10 GBytes 9.41 Gbits/sec 2.971/2.260/3.670/0.364 ms (8979/131063) 3.33 MByte 396167 18766=2459:2647:2659:2370:2236:2194:2007:2194
[ 1] 6.00-7.00 sec 1.10 GBytes 9.41 Gbits/sec 2.974/2.276/4.351/0.364 ms (8978/131081) 3.34 MByte 395705 19148=2553:2707:2801:2590:2360:2183:1865:2089
[ 1] 7.00-8.00 sec 1.10 GBytes 9.41 Gbits/sec 2.972/2.295/3.864/0.363 ms (8979/131058) 3.34 MByte 395932 19108=2536:2840:2717:2537:2287:2095:1983:2113
[ 1] 8.00-9.00 sec 1.10 GBytes 9.42 Gbits/sec 2.972/2.254/3.685/0.363 ms (8979/131071) 3.33 MByte 396009 19180=2533:2800:2773:2592:2309:2242:1884:2047
[ 1] 9.00-10.00 sec 1.10 GBytes 9.41 Gbits/sec 2.971/2.253/3.673/0.364 ms (8978/131083) 3.33 MByte 396052 18993=2534:2694:2807:2435:2240:2188:1893:2202
[ 1] 0.00-10.00 sec 11.0 GBytes 9.41 Gbits/sec 2.986/1.007/4.351/0.359 ms (89741/131072) 3.35 MByte 393842 189883=25004:27332:27334:24955:22734:21801:19483:21240
[rjmcmahon@localhost iperf2-code]$ src/iperf -c 192.168.1.10 -i 1 --trip-times -e
------------------------------------------------------------
Client connecting to 192.168.1.10, TCP port 5001 with pid 17357 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.62%enp2s0 port 47790 connected with 192.168.1.10 port 5001 (MSS=1448) (trip-times) (sock=3) (ct=0.36 ms)
[ ID] Interval Transfer Bandwidth Write/Err Rtry Cwnd/RTT NetPwr
[ 1] 0.00-1.00 sec 1.09 GBytes 9.37 Gbits/sec 8937/0 0 1634K/1104 us 1060923
[ 1] 1.00-2.00 sec 1.10 GBytes 9.42 Gbits/sec 8981/0 0 1634K/1097 us 1073070
[ 1] 2.00-3.00 sec 1.10 GBytes 9.41 Gbits/sec 8978/0 0 1634K/1109 us 1061104
[ 1] 3.00-4.00 sec 1.10 GBytes 9.41 Gbits/sec 8978/0 0 1634K/1118 us 1052562
[ 1] 4.00-5.00 sec 1.10 GBytes 9.42 Gbits/sec 8979/0 0 1634K/1095 us 1074790
[ 1] 5.00-6.00 sec 1.10 GBytes 9.41 Gbits/sec 8975/0 0 1634K/1128 us 1042882
[ 1] 6.00-7.00 sec 1.10 GBytes 9.42 Gbits/sec 8979/0 0 1716K/1093 us 1076757
[ 1] 7.00-8.00 sec 1.10 GBytes 9.42 Gbits/sec 8983/0 0 1800K/1115 us 1055982
[ 1] 8.00-9.00 sec 1.09 GBytes 9.40 Gbits/sec 8969/0 0 1800K/1118 us 1051507
[ 1] 9.00-10.00 sec 1.10 GBytes 9.42 Gbits/sec 8982/0 0 1800K/1082 us 1088067
[ 1] 0.00-10.00 sec 11.0 GBytes 9.41 Gbits/sec 89743/0 0 1800K/1082 us 1087070
Another way to think about things is in terms of cars, i.e. distance, speed, and horsepower. Instead of throughput, read the Transfer column: that's the amount of information in bytes moved over the interval. The actual speed of the transaction is the latency, not the throughput. In the example above it took an average of about 3 milliseconds to move a block of memory (131072 bytes) from client to server. The network power (NetPwr), here read on the server side, is about 394,000. There aren't really units for network power, but read it as a number where higher is better. It's a measure of throughput / latency, or good / bad.
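As a rough back-of-the-envelope check against the final server line above (the absolute scale of the NetPwr column is an iperf implementation detail, so treat the last step as approximate):
throughput: 9.41 Gbits/sec ≈ 1.18 GBytes/sec
average latency: 2.986 ms
inP ≈ throughput × latency ≈ 1.18 GBytes/sec × 0.002986 s ≈ 3.5 MBytes in flight
NetPwr ∝ throughput / latency ≈ 1.18e9 / 0.002986 ≈ 3.9e11, which iperf appears to scale down to the 393842 shown
Dividing throughput by latency rewards flows that move a lot of data with little delay.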
A paper by Leonard Kleinrock, "Internet congestion control using the power metric: Keep the pipe just full, but no fuller," may help.