Kubernetes/Flannel Routing issue with Linode private networks
Hi, I'm building a small Kubernetes cluster to run some experiments with.
I'd like to set up inter-node container connectivity with Flannel's host-gw backend, which builds a simple L3 network on top of L2 by adding one route per node, pointing at that node's container subnet:
node1:
10.10.10.0/24 dev cni0 proto kernel scope link src 10.10.10.1 # 10.10.10.0/24 is a "local" subnet for containers
10.10.11.0/24 via 192.168.1.111 dev eth0 # 10.10.11.0/24 is served by the other node of the kubernetes cluster
node2:
10.10.10.0/24 via 192.168.2.111 dev eth0
10.10.11.0/24 dev cni0 proto kernel scope link src 10.10.11.1
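For context, those routes are what Flannel derives from a host-gw net-conf roughly like the one below. The 10.10.0.0/16 supernet and /24 subnet length are my reading of the routes above, not copied from the actual config:

```json
{
  "Network": "10.10.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "host-gw"
  }
}
```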
But here I got stuck: traffic between the nodes' container subnets (10.10.x.x) is never routed. Ping from node-1 to 10.10.11.51 does not work, and the reverse direction from node-2 fails the same way.
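(Aside: host-gw works by installing plain kernel routes, so every "via" next hop must be on-link, i.e. inside eth0's own subnet, because Flannel relies on ARP instead of encapsulation. A quick sanity check of that condition with Python's ipaddress module; the /16 and /24 prefix lengths below are hypothetical, just to show both outcomes:)

```python
import ipaddress

def next_hop_is_onlink(iface_cidr: str, next_hop: str) -> bool:
    # host-gw can only use a next hop that sits inside the interface's
    # own subnet, i.e. one the kernel can resolve with ARP directly.
    return ipaddress.ip_address(next_hop) in ipaddress.ip_interface(iface_cidr).network

# node1's eth0 (192.168.2.111) pointing at node2 (192.168.1.111):
print(next_hop_is_onlink("192.168.2.111/16", "192.168.1.111"))  # True  (on-link with a /16)
print(next_hop_is_onlink("192.168.2.111/24", "192.168.1.111"))  # False (not on-link with a /24)
```

Even when the next hop is on-link, though, the host network can still refuse to deliver the forwarded packets, which is what turned out to matter here (see the replies).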
I've tried to narrow down the possible causes and ended up with the following information.
As a minimal test, I configured a host route on node1 to the address 111.111.111.1 via another Linode on the same private network, with IP forwarding enabled and ACCEPT policies in iptables on that gateway:
node1 /lib/systemd/system # ip r add 111.111.111.1/32 via 192.168.2.2
node1 /lib/systemd/system # ip r get 111.111.111.1
111.111.111.1 via 192.168.2.2 dev eth0 src 192.168.2.111 uid 0
cache
node1 /lib/systemd/system # ping 111.111.111.1
PING 111.111.111.1 (111.111.111.1) 56(84) bytes of data.
^C
--- 111.111.111.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 9ms
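For reference, "IP forwarding enabled and ACCEPT policies in iptables" means roughly the following on the gateway Linode (a sketch, to be run as root; your existing rules may differ):

```shell
# Allow the kernel to route packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Don't let the filter table drop forwarded traffic
iptables -P FORWARD ACCEPT

# Verify
sysctl net.ipv4.ip_forward   # expect: net.ipv4.ip_forward = 1
iptables -S FORWARD          # expect: -P FORWARD ACCEPT
```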
I tried the same thing on my home network, and everything works properly:
192.168.1.100 is another Linux machine on my home network
user@host:/home/user# ip r add 111.111.111.1/32 via 192.168.1.100
user@host:/home/user# traceroute 111.111.111.1
traceroute to 111.111.111.1 (111.111.111.1), 30 hops max, 60 byte packets
1 192.168.1.100 (192.168.1.100) 4.203 ms 4.091 ms 4.036 ms
2 gw (192.168.1.1) 3.988 ms 3.906 ms 5.507 ms
3 w.w.x.x (12.12.12.12) 17.110 ms 17.084 ms 17.030 ms
Private network routing (Docker):
user@host:/home/user# ip r add 172.10.0.0/24 via 192.168.1.100
user@host:/home/user# telnet 172.10.0.2 5432
Trying 172.10.0.2…
Connected to 172.10.0.2.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
5 Replies
Hello,
Although I'm not deeply familiar with Kubernetes deployments or with using Flannel to create this type of network, I think you're running into one (or both) of two things: host-gw requires direct Layer 2 connectivity between the hosts running Flannel, and your Linodes essentially act as individual firewalls/routers on our platform. I believe that due to the way our private networks are structured, we do not allow the delivery of packets that do not have destination IP’s and Mac addresses combos that matches what the host machine is expecting for a particular Linode.
It looks like deploying a VPS as a firewall, discussed in the community post below, cannot be implemented for security reasons:
https://www.linode.com/community/questions/10923/forward-traffic-through-a-firewall#answer-51565
The reasoning is elaborated on in a post about why we do not allow IPv6 forwarding: restricting forwarding helps prevent malicious activity from occurring on our platform and impacting other customers.
You may be able to set up a simple VPN between your nodes and run the cluster over that private network instead of using Flannel's host-gw. I can absolutely understand if that's not the type of deployment you're interested in experimenting with, but if it does sound viable, we have a great guide on implementing Tinc, linked below.
https://www.linode.com/docs/networking/vpn/how-to-set-up-tinc-peer-to-peer-vpn/
Other than that, we have a guide specifically for managing a Docker cluster with Kubernetes:
https://www.linode.com/docs/applications/containers/manage-a-docker-cluster-with-kubernetes/
If neither of these seems like a viable solution, hopefully another community member will find this thread and shed some light on how to use Flannel, or something like it, with Kubernetes on our platform.
I hope this information helps.
Thanks,
Matt Watts
Linode Support Team
I believe that due to the way our private networks are structured, we do not allow the delivery of packets that do not have destination IP’s and Mac addresses combos that matches what the host machine is expecting for a particular Linode.
So, as far as I understand, we cannot do something like this:
[node-1-10.200.x.x-private-net] -> [192.168.x.x linode private net] -> [node-n-10.200.y.y-private-net] (and vice versa),
due to Linode's networking setup and security?
Does this mean that Flannel cannot be used as a CNI when deploying Kubernetes on Linode?
@recipedude, the vxlan backend works fine anywhere, including Linode, but you won't be able to set up host-gw without extra steps to provide real (or well-enough emulated) L2 networking between the hosts.
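For completeness, switching to the vxlan backend is a one-line change in Flannel's net-conf; the network value here is illustrative, matching the subnets earlier in the thread:

```json
{
  "Network": "10.10.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```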
I've abandoned my attempts to set up host-gw for now; it would be great to have that possibility out of the box, though.