Packet loss London - VoIP

Hi all,

I use my Linode for a couple of things (nginx, MySQL, a few custom web services) and one TeamSpeak 3 server - nothing too heavy.

My CPU load shows ~6-10% on the graphs, which isn't really a lot, so I don't think there should be any problems. Yet:

We use TeamSpeak 3 with about 10 people at a time; the network monitor shows 400-600 KB/s, which shouldn't be a problem at all.

However, on some days we suddenly get lag spikes lasting about a minute. According to TeamSpeak, packet loss averages 15-20% across all connected clients during that time, and the audio cuts out/stutters a lot.

Then, a minute later everything works again, usually for the rest of the night.

The lag spikes come at random times and on random days, which is quite annoying, especially since the load on the server isn't high at all.

Is this a problem to be expected/considered normal?

We connect from Ireland, the Netherlands, Portugal, Denmark, Sweden and Germany, and the packet loss/stuttering occurs for all of us at the same time (it's not just one of us having the lag spike).

It's a Linode 512 in London.

Hopefully someone will be able to assist :) Thanks!

Niels

10 Replies

Do you have data from 'mtr' (at the network level) and 'vmstat' (at the system level) when that's happening? That will help isolate things.

Hi,

I'm not currently connected to the TeamSpeak server (I will be again tonight), so I can't test this live. It doesn't occur every day, but once it happens I'll run these tests again.

I did run a report just now, though, and I noticed mtr showing quite a bit of packet loss (even though I can't confirm it with TeamSpeak at the moment). Could that be the cause?

root@nuvini:/home/niels# vmstat 1 20
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0  49068  11076  11352 262840    0    0     3     1    1    0  0  0 100  0
 0  0  49068  11068  11352 262840    0    0     0     0 1159 2234  0  0 100  0
 0  0  49068  11068  11352 262840    0    0     0     0 1156 2219  0  0 100  0
 0  0  49068  11068  11352 262840    0    0     0     0 1120 2239  0  0 100  0
 0  0  49068  11068  11352 262840    0    0     0     0 1122 2227  0  0 100  0
 1  0  49068  11068  11352 262840    0    0     0     0 1120 2231  0  0 100  0
 0  0  49068  11068  11352 262840    0    0     0     0 1122 2223  0  0 100  0
 0  0  49068  10944  11360 262836    0    0     0    12 1133 2253  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1115 2229  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1102 2209  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1114 2226  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1128 2237  0  0 100  0
 1  0  49068  10944  11360 262840    0    0     0     0 1127 2239  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1115 2220  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1131 2232  0  0 100  0
 0  0  49068  10944  11360 262840    0    0     0     0 1141 2238  0  0 100  0
 1  0  49068  10944  11360 262840    0    0     0     0 1159 2225  0  0 100  0
 0  0  49068  11060  11360 262840    0    0     0     0 1192 2246  0  0 100  0
 0  0  49068  11068  11360 262840    0    0     0     0 1129 2245  0  0 100  0
 1  0  49068  11068  11360 262840    0    0     0     0 1115 2223  0  0 100  0
root@nuvini:/home/niels# mtr
root@nuvini:/home/niels# mtr --report google.com
^C
root@nuvini:/home/niels# mtr --report 8.8.8.8
HOST: nuvini                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 212.111.33.229                0.0%    10    0.8   2.3   0.5  16.2   4.9
  2. 212.111.33.233                0.0%    10    0.5   0.6   0.5   0.8   0.1
  3. te3-1-border76-01.lon2.telec 30.0%    10    1.6   1.1   0.8   1.6   0.2
  4. 85.90.238.45                 10.0%    10    1.2   9.0   1.1  69.7  22.8
  5. 217.20.44.194                10.0%    10    0.9   1.0   0.9   1.2   0.1
  6. google1.lonap.net             0.0%    10    1.2   2.3   0.9  12.4   3.6
  7. 209.85.255.78                 0.0%    10   48.5   5.9   1.0  48.5  15.0
  8. 209.85.253.90                 0.0%    10    1.8   3.7   1.2  14.0   4.0
  9. 209.85.240.28                 0.0%    10    7.5   7.0   6.5   9.0   0.8
 10. 216.239.49.36                 0.0%    10   10.2  10.3  10.2  10.5   0.1
 11. 209.85.255.118               60.0%    10   22.0  15.5  10.3  22.0   6.0
 12. google-public-dns-a.google.c  0.0%    10   10.4  10.5  10.4  10.7   0.1
root@nuvini:/home/niels# mtr --report <home-ip>
HOST: nuvini                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 212.111.33.229                0.0%    10    0.5   1.0   0.5   4.6   1.2
  2. 212.111.33.233                0.0%    10    0.7   0.6   0.5   0.7   0.1
  3. te3-1-border76-01.lon2.telec 70.0%    10    0.8   0.9   0.8   1.1   0.2
  4. 85.90.238.45                 30.0%    10    1.3   5.9   0.9  34.3  12.5
  5. xe-7-0-0-0.lon-004-score-1-r  0.0%    10    0.9   0.9   0.8   1.1   0.1
  6. ae1-0.par-gar-score-1-re0.in  0.0%    10   15.9  15.7  15.5  16.3   0.3
  7. ae0-0.par-gar-score-2-re0.in  0.0%    10    8.0   7.9   7.6   8.6   0.3
  8. ae2-0.ams-koo-score-1-re0.in  0.0%    10   15.5  15.4  15.3  15.7   0.1
  9. pr1.nik-asd.internl.net       0.0%    10   16.1  17.8  16.0  33.0   5.3
 10. xms-nh.customer.internl.net   0.0%    10   19.6  16.7  16.1  19.6   1.1
 11. ???                          100.0    10    0.0   0.0   0.0   0.0   0.0

Thanks,

That packet loss may not be an issue. It's common for routers to assign mtr's probes a low priority and drop them even when they're transferring real traffic 100% reliably. Since later hops show no loss, that's probably what's going on here. It's also not uncommon for consumer ISPs to block the probes entirely. When you ran that mtr, were you experiencing any issues connecting to your node from home?

@mnordhoff:

> That packet loss may not be an issue. It's common for routers to assign mtr's probes a low priority and drop them even when they're transferring real traffic 100% reliably. Since later hops show no loss, that's probably what's going on here. It's also not uncommon for consumer ISPs to block the probes entirely. When you ran that mtr, were you experiencing any issues connecting to your node from home?

That was a random mtr I did - at the time I couldn't check if there were any problems with the connection.

Since the packet loss is pretty random I have to wait for it to happen again. Yesterday we did not have any problems, but the two days before that we did, as well as a week ago.

When it happens again I'll capture the logs again and hopefully they'll give some more insight. It's just a matter of waiting -until- it happens :)
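
In the meantime, so I don't miss the window, I'm thinking of leaving something like this running in a screen session. It's only a rough sketch (it assumes iputils ping and GNU grep, and <home-ip> stands in for one of our client addresses), but it should dump an mtr report and a vmstat sample whenever ping sees loss:

#!/bin/bash
# Rough watchdog sketch: poll a client address and capture diagnostics
# whenever ping reports packet loss. <home-ip> is a placeholder.
TARGET="<home-ip>"
LOG="$HOME/lagspike.log"

while true; do
    # 20 quiet pings; pull the integer before "% packet loss" from the summary line
    LOSS=$(ping -c 20 -q "$TARGET" | grep -oP '\d+(?=% packet loss)')
    if [ "${LOSS:-0}" -ge 5 ]; then
        {
            echo "=== ${LOSS}% loss at $(date) ==="
            mtr --report "$TARGET"
            vmstat 1 20
        } >> "$LOG"
    fi
    sleep 30
done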

Took a while for it to happen - but it just did. Managed to quickly get the diagnostics!

niels@nuvini:~$ mtr --report <my-ip>
HOST: nuvini                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 212.111.33.229               10.0%    10    3.7  73.7   1.1 267.4 112.4
  2. 212.111.33.233                0.0%    10    1.4  85.2   0.5 424.5 141.8
  3. te3-1-border76-01.lon2.telec 20.0%    10    0.8  72.9   0.8 427.4 152.4
  4. 85.90.238.45                  0.0%    10    1.0 127.3   1.0 425.3 148.8
  5. xe-7-0-0-0.lon-004-score-1-r  0.0%    10    0.9 111.9   0.9 363.6 120.0
  6. ae1-0.par-gar-score-1-re0.in  0.0%    10   35.9 107.7  15.6 397.4 117.5
  7. ae0-0.par-gar-score-2-re0.in  0.0%    10   12.0 114.6   8.6 342.5 121.0
  8. ae2-0.ams-koo-score-1-re0.in  0.0%    10   69.0 203.5  69.0 504.3 139.5
  9. pr1.nik-asd.internl.net       0.0%    10   17.3 185.4  17.3 441.7 137.3
 10. xms-nh.customer.internl.net   0.0%    10   29.6 157.9  29.6 379.0 106.7
 11. ???                          100.0    10    0.0   0.0   0.0   0.0   0.0
niels@nuvini:~$ mtr --report <my-ip>
HOST: nuvini                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 212.111.33.229               10.0%    10  422.3 195.9   7.9 466.0 180.6
  2. 212.111.33.233               10.0%    10  416.6 184.3   1.5 557.5 225.1
  3. te3-1-border76-01.lon2.telec 50.0%    10  439.6 227.8   0.9 439.6 205.9
  4. 85.90.238.45                 70.0%    10  492.6 398.6 188.1 515.0 182.6
  5. xe-7-0-0-0.lon-004-score-1-r 10.0%    10  433.7 182.5   0.8 577.7 235.2
  6. ae1-0.par-gar-score-1-re0.in 30.0%    10   21.3 110.7  15.5 658.6 241.7
  7. ae0-0.par-gar-score-2-re0.in 10.0%    10  366.6 219.7  76.7 596.0 181.5
  8. ae2-0.ams-koo-score-1-re0.in 10.0%    10  487.3 198.9  18.1 533.3 206.7
  9. pr1.nik-asd.internl.net      10.0%    10  465.7 159.4  16.1 492.3 209.8
 10. xms-nh.customer.internl.net  10.0%    10  406.8 136.8  16.2 433.4 182.4
 11. ???                          100.0    10    0.0   0.0   0.0   0.0   0.0
niels@nuvini:~$ vmstat 1 20               
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0  54728  14152  18840 253896    0    0     1     1    1    0  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     8 1247 2336  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1225 2330  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1224 2339  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1213 2321  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1221 2337  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1231 2343  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1214 2318  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0    12 1220 2349  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1210 2329  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1208 2325  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     8 1212 2327  0  0 100  0
 0  0  54728  14152  18840 253896    0    0     0     0 1208 2320  0  0 100  0
 0  0  54728  13624  18840 253896    0    0     0     0 1401 2346  0  1 99  0
 0  0  54728  13640  18840 253896    0    0     0     0 1209 2318  0  0 100  0
 0  0  54728  13648  18840 253896    0    0     0     0 1220 2323  0  0 100  0
 1  0  54728  13656  18840 253896    0    0     0     8 1218 2335  0  0 100  0
 0  0  54728  13656  18840 253896    0    0     0     0 1211 2325  0  0 100  0
 0  0  54728  13656  18840 253896    0    0     0     0 1226 2318  0  0 100  0
 0  0  54728  13656  18840 253896    0    0     0     0 1209 2321  0  0 100  0

Edit:

I've also been looking through some logs, and at the time it happened I see this - could it be the cause?

Mar 22 21:32:49 nuvini dhclient: DHCPREQUEST on eth0 to 109.74.207.72 port 67
Mar 22 21:32:49 nuvini dhclient: DHCPACK from 109.74.207.72
Mar 22 21:32:49 nuvini dhclient: bound to 178.79.129.186 -- renewal in 39653 seconds

Other than that, there's nothing special in the logs that I can see. The lag lasts for more than just a few seconds, so I doubt that is the issue.
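
To line the renewals up against the lag spikes, something like this should do (assuming a Debian/Ubuntu-style syslog):

# Every DHCP lease renewal dhclient has logged, with timestamps,
# to compare against the times the lag spikes occurred.
grep 'dhclient.*bound to' /var/log/syslog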

Edit 2: While this was happening, it wasn't just VoIP either - the SSH session had some hiccups as well.

Any ideas - or should I just open a support ticket?

Thanks!

Interesting you mention this, as I also run a TeamSpeak server and have been getting random packet loss from a London-based Linode. I've found it sometimes impacts SSH and websites as well when it happens.

Just curious, are you on london375? That's the host I'm on; it would be interesting if we were both having this problem on the same box.

Hi Dru,

I'm on London378, so it's not the same host. However, perhaps it's an issue with the first hop (212.111.33.229), if we both pass through it?

I'll open a support ticket and point them to this thread - hopefully that will give some more insight.

In a way it's good to know I'm not the only one having -some- issues, at least.

Edit: They told me to send them a message ASAP, with new mtr reports, when it happens again, so they can look at it in real time.

I hope that's possible, since it's only a few-minute window; it would require a very fast response time.

If you can, please do the same. If it's indeed the same cause for both of us, Dru, we have a better chance if we both watch out for it and report it as soon as it happens again.
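
Rather than relying on reflexes, I'll probably also just log a report every few minutes from cron, so there's always a timestamped mtr history to attach to the ticket. Something along these lines (just a sketch; <home-ip> is a placeholder for one of our client addresses):

# crontab -e entry (sketch): append a timestamped mtr report every 5 minutes
*/5 * * * * (date; mtr --report --report-cycles 10 <home-ip>) >> $HOME/mtr-history.log 2>&1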

I'm a tad unwell at the moment but I'll definitely keep as close an eye on it as I can and get some logs/diagnostics when it happens.

Just started having issues that sound very similar to this tonight. I've been running TS for 11 days and it only started about 90 minutes ago, but it seems to be persistent now.

Web and SSH traffic are also affected. Load according to the graphs on the dashboard is normal, so I was wondering what it could be.

Glad to see this thread come up on my first google search.

Using a Linode 768 in London also.

How does mtr look, both towards your Linode and out from your Linode?
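
That is, something along these lines from each end, with the placeholders swapped for the real addresses (ideally run while the problem is actually happening):

# From your home/office machine, towards the Linode:
mtr --report <linode-ip>

# From the Linode, back out towards your client:
mtr --report <home-ip>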
