Tokyo to London migration solved Linode connectivity issue

I had a Linode running in the Tokyo facility and I received reports from two friends that they were not able to access my website, ping my Linode, or connect to any open port on it. I had opened the ports using netcat, e.g. nc -vv -lp 9090.
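
For reference, this is roughly what that check looks like (the port number and hostname below are just examples; netcat flag syntax varies a bit between versions):

````
# On the Linode: listen verbosely on an example TCP port
nc -vv -lp 9090

# From the affected machine: try to connect to that port
# (replace example.com with the domain or the Linode's IP)
nc -vv example.com 9090
````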

Precise symptoms of the problem:

  • These two users were able to access all websites except mine.

  • They were unable to ping my domain name or my Linode's IP address.

  • They were unable to ping or traceroute tokyo10.linode.com. But they were able to ping or traceroute london356.linode.com.

  • My domain name was correctly resolving to my Linode's IP address on their systems. Confirmed this with nslookup, ping, etc.

  • Both these users were in the US.

  • Both these users were unable to connect to or access my Linode only from a particular network. For example, one was able to access my website from her home network but not from her university network using the same laptop. The other was able to access my website from his office network but not from his home network using the same laptop.

  • All users except these two were able to access my website.

  • Traceroutes gave up with lots of time-outs after some hops. Traceroutes from one friend passed through *.verizon-gni.net and *.qwest.net routers; traceroutes from the other showed nothing but university routers.

  • I had installed Debian on my Linode with the default configuration. I didn't set up iptables or anything like it that could have caused the issue, and I don't know exactly what caused the problem. It would have been nice to have them run WinMTR to find out exactly which router was dropping packets during the trace route (see the sketch after this list), but I wanted to resolve the issue as soon as possible without burdening them with troubleshooting work.
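
For reference, the DNS and per-hop checks mentioned above look roughly like this (the domain below is a placeholder; mtr is a Linux equivalent of WinMTR):

````
# Confirm the domain resolves to the expected Linode IP
nslookup example.com

# Combined ping/traceroute per hop; shows which router starts losing packets
mtr --report example.com
````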

The only explanation that seemed likely to me was that some router in between was forwarding packets that were larger than the MTU of the next network segment. I couldn't confirm this hypothesis.
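
For reference, a common way to test for a path MTU problem is to ping with the don't-fragment bit set at different payload sizes (the host and sizes below are examples; flags differ by platform):

````
# Linux: 1472 bytes of ICMP payload + 28 bytes of headers = 1500 bytes on the wire
ping -M do -s 1472 example.com

# Windows: -f sets don't-fragment, -l sets the payload size
ping -f -l 1472 example.com

# If the full-size ping fails but a smaller one gets through, some hop on
# the path has a smaller MTU or is dropping "fragmentation needed" replies
ping -M do -s 1200 example.com
````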

However, assuming this hypothesis to be true, I decided to migrate my Linode from the Tokyo facility to the London facility, and the issue was fixed as soon as the migration was complete.

Update: Had a chat about it on IRC. Linode staff informed me that they have seen this issue before with Tokyo IP ranges. They said that Qwest/Verizon are doing some kind of source IP filtering that causes packets coming from Tokyo to Qwest/Verizon networks to be dropped.

3 Replies

Including the traceroute information to tokyo10.linode.com as well as london356.linode.com, provided to me by one of the two friends who faced this issue.

````
Tracing route to tokyo10.linode.com [106.187.33.21]
over a maximum of 50 hops:

  1     2 ms     2 ms     1 ms  10.140.112.1
  2     4 ms     3 ms     3 ms  172.19.252.41
  3     3 ms     3 ms     3 ms  172.19.252.30
  4    17 ms     3 ms     3 ms  wireless-gw2.qwest.asu.edu [172.19.252.50]
  5     6 ms     6 ms     4 ms  172.19.252.45
  6     7 ms     5 ms     5 ms  sparky-jr-core2.dco.asu.edu [172.30.253.2]
  7     *        *        *     Request timed out.

 49     No resources.

Trace complete.

````

````
Tracing route to london356.linode.com [178.79.151.180]
over a maximum of 50 hops:

  1     1 ms     1 ms     1 ms  10.140.112.1 
  2     5 ms     3 ms     3 ms  172.19.252.41 
  3     5 ms     3 ms     4 ms  172.19.252.30 
  4     5 ms     3 ms     3 ms  wireless-gw2.qwest.asu.edu [172.19.252.50] 
  5     5 ms     4 ms     4 ms  172.19.252.45 
  6     7 ms     5 ms     6 ms  sparky-jr-core2.dco.asu.edu [172.30.253.2] 
  7     7 ms     6 ms     5 ms  172.30.101.2 
  8     8 ms     6 ms     6 ms  206.206.223.58 
  9    18 ms    18 ms    16 ms  vlan143.car1.Phoenix1.Level3.net [4.53.104.105] 
 10    18 ms    17 ms    16 ms  ae-2-5.bar1.Phoenix1.Level3.net [4.69.148.118] 
 11    39 ms    41 ms    39 ms  ae-8-8.ebr1.Dallas1.Level3.net [4.69.133.30] 
 12    41 ms    39 ms    39 ms  ae-61-61.csw1.Dallas1.Level3.net [4.69.151.125] 
 13    42 ms    40 ms    40 ms  ae-63-63.ebr3.Dallas1.Level3.net [4.69.151.134] 
 14    61 ms    59 ms    59 ms  ae-7-7.ebr3.Atlanta2.Level3.net [4.69.134.22] 
 15    60 ms    58 ms    58 ms  ae-63-63.ebr1.Atlanta2.Level3.net [4.69.148.242] 
 16    74 ms    72 ms    72 ms  ae-6-6.ebr1.Washington12.Level3.net [4.69.148.106] 
 17    75 ms    72 ms    72 ms  ae-1-100.ebr2.Washington12.Level3.net [4.69.143.214] 
 18    81 ms    81 ms    81 ms  4.69.148.49 
 19   191 ms   208 ms   208 ms  ae-42-42.ebr2.London1.Level3.net [4.69.137.69] 
 20   151 ms   151 ms   152 ms  ae-59-224.csw2.London1.Level3.net [4.69.153.142] 
 21     *      219 ms   155 ms  ae-2-52.edge3.London1.Level3.net [4.69.139.105] 
 22   245 ms   313 ms   207 ms  Telecity.edge3.lon1.l3.net [195.50.113.2] 
 23   245 ms   207 ms   149 ms  te4-1-dist65-01.lon10.telecity.net [217.20.44.218] 
 24   169 ms   181 ms   208 ms  212.111.33.234 
 25   234 ms   150 ms   161 ms  london356.linode.com [178.79.151.180] 

Trace complete.
````

Based on that, I'd say the folks at asu.edu implemented bogon filtering on their border routers but haven't updated it in at least 10 months; 106/8 was allocated in January 2011, and 178/8 two years prior to that. Your end users probably need to open a ticket with their Internet support folks, who should be able to figure it out. This does affect their ability to reach a significant chunk of the Internet, but perhaps not a chunk they typically care about.
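
For anyone who wants to check the allocation history themselves, a quick sketch (whois shows which registry currently holds each block; the IANA IPv4 address space registry, linked in the next reply, lists when each /8 was handed out):

````
# Tokyo Linode address: 106/8, allocated to APNIC in 2011-01
whois 106.187.33.21

# London Linode address: 178/8, allocated to RIPE NCC in 2009-01
whois 178.79.151.180
````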

This is a fairly common problem. The good news is that there are no more unallocated IPv4 /8 blocks, so it should become less of an issue over the next few years.

In case anyone is curious where to find the information about IP address allocation that hoopycat has mentioned above, here is the document:

http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xml
