Cloud firewall behavior I don't understand and cannot fix
Hello
I have a 3-node HA k3s cluster running on the platform.

The Linode CCM runs as a DaemonSet on the cluster, and it correctly assigns a Layer 4 NodeBalancer to the ingress controller. Rancher, which is running on the cluster, works fine. I also have a manually configured NodeBalancer for port 6443, which also works fine.

All of the above is normal until I turn on the Cloud Firewall. I have a rule allowing port 6443 from the manual NodeBalancer to the 3 Linodes. That one works, and that NodeBalancer always sees all backend servers as up. I have a similar rule for the automatically created CCM NodeBalancer, allowing all TCP ports from the NodeBalancer to the backend servers. That NodeBalancer, however, sees all backend servers as down most of the time. At random times it might see 1 up and 2 down, or other combinations. If I disable the Cloud Firewall, everything goes back to normal within a few minutes.

I really cannot think of a way to solve this. UFW is disabled on all the Linodes. I also have 2 Cloud Firewall rules allowing TCP and UDP from the entire 192.168.128.0/17 network on all ports, so that the 3 nodes can communicate without problems. Any help is welcome.
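For reference, the rule set described above could be written roughly in the shape of a Linode API v4 firewall rules payload (`PUT /networking/firewalls/{id}/rules`). This is only a sketch: the labels are made up, and the 192.168.255.0/24 source range is the subnet Linode's docs list for NodeBalancer-to-backend traffic, which should be verified for your setup.

```python
import json

# Assumed/illustrative rule set, not the poster's exact configuration.
# Omitting "ports" on a rule means all ports in the Linode API.
rules = {
    "inbound_policy": "DROP",
    "outbound_policy": "ACCEPT",
    "inbound": [
        {   # manual NodeBalancer -> k3s API server on 6443
            "action": "ACCEPT",
            "label": "k3s-apiserver",
            "protocol": "TCP",
            "ports": "6443",
            # subnet NodeBalancers use to reach backends, per Linode docs
            "addresses": {"ipv4": ["192.168.255.0/24"]},
        },
        {   # CCM-managed NodeBalancer -> ingress backends, all TCP ports
            "action": "ACCEPT",
            "label": "ccm-nodebalancer",
            "protocol": "TCP",
            "addresses": {"ipv4": ["192.168.255.0/24"]},
        },
        {   # node-to-node traffic on the private network
            "action": "ACCEPT",
            "label": "private-tcp",
            "protocol": "TCP",
            "addresses": {"ipv4": ["192.168.128.0/17"]},
        },
        {
            "action": "ACCEPT",
            "label": "private-udp",
            "protocol": "UDP",
            "addresses": {"ipv4": ["192.168.128.0/17"]},
        },
    ],
}

print(json.dumps(rules, indent=2))
```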
The hardest part to understand
No firewall = working
Firewall allowing all traffic from balancer to linodes = not working.
What is the difference?
Kindest regards
jkatergaris
The order of rules is important, last I checked. What are the actual rules you have (remove public IPs if you feel the need)? Include the default policies.
I would first check whether you are dropping something before you accept it, likely something TCP-related.
You gave me an idea with your comment. Instead of making the default rule drop everything, I changed it back to allow everything, and now I'm slowly cutting off everything I don't need. So far it's going OK. Thanks!