How Can I Deploy the Kubernetes-Metrics Server on LKE?
The Kubernetes metrics-server allows your cluster to gather some useful CPU and memory statistics, and makes them available for monitoring via kubectl or the kube-apiserver directly.
After deploying the metrics-server, you’ll have access to a few new commands that give you a view of your cluster’s performance and resource usage. One of these is kubectl top nodes, which shows the resource utilization of each node in your cluster:
$ kubectl top nodes
NAME                        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
lke2470-2848-5e73abded959   99m          4%     1076Mi          27%
lke2470-2848-5e73abdedf20   105m         5%     1036Mi          26%
lke2470-2850-5e73ad04e684   53m          5%     726Mi           38%
lke2470-2851-5e73af83ce60   73m          7%     657Mi           34%
lke2470-2851-5e73a9f553a6   86m          7%     798Mi           41%
Additionally, you can use kubectl top pods to view similar statistics for the pods running in your cluster:
$ kubectl top pods -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
default       demo-567cd68495-mbqp9                      1m           7Mi
kube-system   calico-kube-controllers-5c77dffc85-n6tbn   1m           8Mi
kube-system   calico-node-8d5bq                          18m          25Mi
kube-system   calico-node-b258s                          28m          25Mi
kube-system   calico-node-h7mz5                          21m          25Mi
kube-system   calico-node-knzsj                          15m          28Mi
kube-system   coredns-5c98db65d4-dxlzv                   3m           8Mi
kube-system   coredns-5c98db65d4-kbd48                   3m           8Mi
kube-system   csi-linode-controller-0                    4m           19Mi
kube-system   csi-linode-node-8jtjz                      0m           9Mi
kube-system   csi-linode-node-hggfv                      1m           8Mi
kube-system   csi-linode-node-nf6h9                      1m           9Mi
kube-system   csi-linode-node-wnbsk                      1m           8Mi
kube-system   kube-proxy-8qnqm                           1m           9Mi
kube-system   kube-proxy-d8r75                           1m           9Mi
kube-system   kube-proxy-ss4v2                           1m           9Mi
kube-system   kube-proxy-wwdx4                           3m           9Mi
kube-system   metrics-server-cf696b66c-wh268             3m           12Mi
Here, the "m" means millicore, or 1/1000 of a CPU core.
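These are the same units used when you set resource requests and limits on your own workloads. As a minimal sketch (the Pod name, image, and values below are purely illustrative and not taken from the cluster above):
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:stable
      resources:
        requests:
          cpu: 100m              # 100 millicores = 0.1 of a CPU core
          memory: 64Mi
        limits:
          cpu: 500m              # 500 millicores = half a core
          memory: 128Mi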
The metrics-server can be deployed to any Kubernetes cluster, including LKE clusters, to give you valuable insight into the resource consumption of your workloads. You can deploy the metrics-server using the following manifest and commands:
cat << EOF > ./metrics-server.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # Mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: Always
        command:
        - /metrics-server
        # * metrics-server must reach kubelets by Node private IP
        - --kubelet-preferred-address-types=InternalIP
        # * metrics-server connects to each Node's kubelet
        # * each Node's kubelet presents a CA certificate
        #     echo | \
        #       openssl s_client -showcerts -connect localhost:10250 2>/dev/null | \
        #       openssl x509 -inform pem -noout -text
        # * The Common Name and Subject Alternative Names do not include the Node's private IP
        #     Subject: CN = lke3578-4746-5e97658362af@1586980323
        #     X509v3 extensions:
        #       X509v3 Subject Alternative Name:
        #         DNS:lke3578-4746-5e97658362af
        # * This certificate is generated frequently and dynamically by kubelet
        # * There is not currently a way to add Node private IP as a SAN
        - --kubelet-insecure-tls
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
EOF
kubectl apply -f metrics-server.yaml
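After applying the manifest, it's worth confirming that the aggregated metrics API is registered and serving before relying on kubectl top; note that it can take a minute or two after the pod starts for the first metrics to appear:
# Check that the metrics-server Deployment is running
kubectl -n kube-system get deployment metrics-server
# The APIService should eventually report AVAILABLE=True
kubectl get apiservice v1beta1.metrics.k8s.io
# Once the first scrape completes, this returns per-node usage
kubectl top nodes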
10 Replies
I wanted to add to this to note that if you're looking for more information and reference guides for using Kubernetes with Linode, you can check out our Linode Kubernetes Library.
Here's how you replicate the config above with the metrics-server helm chart so that it will run on LKE:
values:
  - args:
      - "--kubelet-preferred-address-types=InternalIP"
      - "--kubelet-insecure-tls"
Isn't using --kubelet-insecure-tls a bad practice?
How can we get the CACert(s) for the cluster's internal APIs?
Here's some updated guidance for using Helm charts now that stable/metrics-server has been deprecated.
Create a YAML file for the values; I named mine metrics.yaml:
apiService:
  create: true
extraArgs:
  kubelet-preferred-address-types: InternalIP
  kubelet-insecure-tls:
Then run the following commands:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install -f metrics.yaml metrics bitnami/metrics-server
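Once the release is installed, the same sanity checks apply; the release name metrics below matches the helm install command above, and the label selector assumes the chart's standard app.kubernetes.io/name label:
# Confirm the release and its pod
helm status metrics
kubectl get pods -l app.kubernetes.io/name=metrics-server
# Node metrics should appear after the first scrape
kubectl top nodes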
The rest of this post is some thoughts and explanations.
A few notes about the extraArgs:
kubelet-preferred-address-types: InternalIP
This is required because Linode does not set up DNS resolution for the node hostnames, nor does it provide full public/internal DNS on the cluster. After testing the different options, only InternalIP worked, although I didn't try the public IP option.
kubelet-insecure-tls:
While this is not a good idea in most circumstances, it is unfortunately required here because the hosts only have self-signed certificates, and TLS validation normally fails when connecting by raw IP.
How could it be better?
I'm not sure why Linode has not implemented properly signed certificates or enabled hostname-based access.
I'm not aware of any way we could do this ourselves with LKE, but I'd love to see these things enabled on the cluster.
@asauber I tried what you suggested in the OP and it completely hosed my LKE to the point that I had to delete it and redo the entire cluster.
The metrics-server was returning "Service Unavailable" and things were getting stuck when I tried to delete/recreate it (e.g., cert-manager).
@rgerke I looked through those docs but can't find anything in there referencing the metrics server?
The Helm chart is also deprecated … how should I deploy a metrics-server to LKE? (1.20)
TL;DR: Don't run the commands in the first post, or you'll completely bork your install.
@Monotoko I just attempted to recreate the issue you ran into with a brand new LKE cluster running on 1.20, but unfortunately I wasn't able to do so.
I was able to copy and paste both commands provided by @asauber and the metrics-server installed without a problem.
If you run into this again, can you please leave the cluster active and open a Support ticket so we can take a closer look into it for you?
Why is this required?
--kubelet-insecure-tls
No other hosting provider requires this for metrics-server, from AWS, Azure, and GCP to the budget guys like DigitalOcean and Scaleway.
To anyone landing here, here are the updated instructions for installing with the metrics-server Helm chart:
Create metrics.yaml with the following:
apiService:
  create: true
args:
  - "--kubelet-preferred-address-types=InternalIP"
  - "--kubelet-insecure-tls"
Run the following Helm command to install metrics-server into the kube-system namespace:
helm upgrade --install --namespace kube-system -f metrics.yaml metrics-server metrics-server/metrics-server
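This assumes the official metrics-server chart repository has already been added; if it hasn't, add it first (this is the upstream kubernetes-sigs repository):
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update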
Hi,
Is insecure TLS still the recommended method, four years after the original topic was posted?