How do I remove a specific node from an LKE pool?

I've used kubectl to drain and then delete a node, but this doesn't automatically remove it from the node pool as I'd expect. I could 'recycle' it, but I don't want to replace it, just scale the pool down in a controlled way.
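
For reference, here's roughly what I ran (the node name below is just an example of the lke naming format):

kubectl drain lke12345-67890-0123456789ab --ignore-daemonsets --delete-emptydir-data
kubectl delete node lke12345-67890-0123456789ab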

I also have pool autoscaling enabled, but despite nodes operating well under capacity, the pool's size remains fixed. Are there any reasons this can happen? Pools have scaled down automatically for me in the past, but it doesn't seem to be happening in this case.

I recently upgraded to Kubernetes 1.26 - could this be related?

In any case, I think it would be a sensible feature for LKE to automatically remove nodes that have been deleted via kubectl.

1 Reply

If you're seeing unexpected behavior, feel free to reach out in a Support Ticket in Cloud Manager, or by phone if it's urgent. For now, I'd like to go over what could cause this issue and how you can work around it.

First, let's look at the issue with the autoscaler. I've researched reasons it might not work and found this in the LKE Autoscaler guide:

The LKE Autoscaler will not automatically increase or decrease the size of the node pool if the current node pool is either below the autoscaler's minimum or above its maximum. This behavior is illustrated by the following examples:

  • If the node pool has 3 nodes and an autoscaler minimum of 5, the autoscaler will not automatically scale the pool up to meet the minimum. It will only scale up if pods are otherwise unschedulable.
  • If the node pool has 10 nodes and an autoscaler maximum of 7, the autoscaler will not automatically scale the pool down to meet the maximum. It can only scale down when the maximum is at or above the current number of nodes in the pool. This is an intentional design choice to prevent the disruption of existing workloads.

If that doesn't apply, and you've confirmed that your settings and situation meet the listed criteria, it may be worth reaching out. Here are the criteria (a quick check follows the list):

  • If Pods are unschedulable due to an insufficient number of nodes in the node pool, the number of nodes is increased.

  • If Pods are able to be scheduled on fewer nodes than are currently available in the node pool, nodes are drained and removed automatically. Pods on drained nodes are immediately rescheduled on pre-existing nodes.
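
A quick way to check the first criterion is to list any Pending pods; if nothing is stuck in Pending, the autoscaler has no reason to add nodes:

kubectl get pods --all-namespaces --field-selector=status.phase=Pending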


Next, I want to point you toward what I think is the easiest way to get this done. First, you can use the Linode API or CLI to delete a specific LKE node using the instructions in this guide.
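
As a rough sketch, assuming a cluster ID of 12345 and a node ID of 12345-6aa78910bc11 (both placeholders), the API call would look like this:

curl -H "Authorization: Bearer $TOKEN" \
    -X DELETE \
    https://api.linode.com/v4/lke/clusters/12345/nodes/12345-6aa78910bc11

The equivalent Linode CLI command should look like this, with the same placeholder IDs:

linode-cli lke node-delete 12345 12345-6aa78910bc11

Unlike deleting the node with kubectl, removing it this way also reduces the pool's size by one, which is the controlled scale-down you're after.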

I'd recommend first using Node Pool View to make sure the response shows the configuration you expect for the autoscaler and node count.
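
As a sketch with placeholder IDs again (cluster 12345, pool 456), the request would be:

curl -H "Authorization: Bearer $TOKEN" \
    https://api.linode.com/v4/lke/clusters/12345/pools/456

The relevant portion of the response will look something like this: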

{
  "autoscaler": {
    "enabled": true,
    "max": 12,
    "min": 3
  },
  "count": 6,
  ...
}

If those don't seem correct, you can update them with Node Pool Update. Otherwise, a new node may be created to replace the one you delete, for example if removing it would take the pool below the autoscaler's minimum.
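
As a hedged example with the same placeholder IDs, a Node Pool Update request that corrects the autoscaler settings might look like this:

curl -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -X PUT \
    -d '{"autoscaler": {"enabled": true, "min": 3, "max": 12}}' \
    https://api.linode.com/v4/lke/clusters/12345/pools/456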

Lastly, we have some older Community Site posts that might be relevant to how you move forward. The first offers alternatives for autoscaling, if that's something you'd like to look into. The second addresses removing a specific node from your pool.

LKE is an intricate service with a lot of pieces, so it's difficult to troubleshoot without more specific detail, but I hope that helps. If you think this is something we should be addressing directly, reach out in a Support Ticket. If you believe it's outside our Scope of Support, then this Community Site is the right place; it might help to follow up here with specific outputs or other information so others can point you in the right direction.
