Deploying the metrics-server service
Deploy a metric-server service that you can fully configure
The other embedded service disabled in your K3s cluster deployment is the metrics-server. This service scrapes resource usage data from your cluster nodes and exposes it through its API. The problem with the embedded metrics-server, as with any other embedded service included in K3s, is that you cannot change its configuration directly: you can only adjust what is configurable through the parameters of the K3s service itself, or make temporary manual changes through kubectl.
In particular, the embedded metrics-server comes with a default configuration that is not adequate for your K3s cluster's setup. Since you cannot change that default configuration permanently, it is better to deploy the metrics-server independently in your cluster, with the proper configuration already set in it.
Checking the metrics-server’s manifest
First, you need to examine the manifest used to deploy the metrics-server and see where to apply the required changes. This also means you have to be aware of which version you are going to deploy in your cluster. K3s v1.33.4+k3s1 comes with the v0.8.0 release of metrics-server which, at the time of writing, happens to be the latest version available.
Important
Ensure the service’s version is compatible with your cluster’s Kubernetes version
Each release of a service has its own compatibility requirements, in particular regarding your cluster's Kubernetes engine. Always check that the release of any software you want to deploy is compatible with the Kubernetes version running in your cluster. This detail is especially important for applications that work directly with the Kubernetes API.
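For instance, you can check the exact Kubernetes version running in your cluster with:

```
$ kubectl version
```

Compare the server version reported there with the compatibility information published in the metrics-server project's README.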
Download the components.yaml manifest for metrics-server v0.8.0 from the Assets section of its GitHub release page. Open it and look for the Deployment object declared in it:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.k8s.io/metrics-server/metrics-server:v0.8.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 10250
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
```

This is the object you need to modify to adapt metrics-server to your particular cluster setup. You also have to take some values from the manifest used to deploy this service embedded in K3s. You can find that YAML in the K3s GitHub repository.
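Alternatively, you can fetch the manifest from the command line. The URL below is the same release asset referenced later in this project's kustomization.yaml file:

```
$ curl -fsSLO https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.8.0/components.yaml
```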
Deployment of metrics-server
As you did for deploying MetalLB, you are going to use a Kustomize project to deploy the metrics-server in your cluster.
Preparing the Kustomize folder structure
Create the folder structure to hold the Kustomize project for your metrics-server deployment:
In your kubectl client system, create a folder structure for the Kustomize project:

```
$ mkdir -p $HOME/k8sprjs/metrics-server/patches
```

In the command above you can see that, inside the metrics-server folder, there is also a patches one. The idea is to patch the default configuration of the service by adding a couple of parameters.
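If you have the tree command available, you can verify the resulting structure at a glance:

```
$ tree $HOME/k8sprjs/metrics-server
```

At this point it should show only the empty patches subfolder.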
Patch for the metrics-server deployment
The patch to apply to the metrics-server deployment is a sort of "partial" manifest, declared like any other Kubernetes resource:
Create a new metrics-server.deployment.patch.yaml file under the patches folder:

```
$ touch $HOME/k8sprjs/metrics-server/patches/metrics-server.deployment.patch.yaml
```

This file will only contain the patch to modify the metrics-server Deployment object.
Declare in metrics-server.deployment.patch.yaml the patch for the metrics-server deployment:

```yaml
# Metrics server arguments patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
```

This patch only contains the information necessary to identify the resource to be patched and the properties to add or change:
tolerations
This section is taken directly from the Deployment object K3s uses to deploy its embedded metrics-server. These tolerations allow the metrics-server pod to be scheduled on nodes tainted with those keys, although for the control-plane key only the NoSchedule effect is tolerated. For instance, since the server node is tainted with "k3s-controlplane=true:NoExecute", it will not run a pod for the metrics-server service (nor for other regular apps or services).

args
Under this section are the parameters that configure how the metrics-server service runs:

--cert-dir
The directory where the TLS certificates for this service are located. Here it is set to a temporary folder, appropriate for a containerized service.

--secure-port
The port on which to serve HTTPS with authentication and authorization. Here it is set to the same port used to connect to the kubelets.

--kubelet-preferred-address-types
Priority of node address types used when determining an address for connecting to a particular node. In your K3s cluster's case, the only one really needed is the internal IP. By setting only the InternalIP value, you ensure that metrics-server communicates exclusively through the isolated secondary network you have in your setup.

--kubelet-use-node-status-port
When enabled, metrics-server scrapes each kubelet on the status port reported by the node (10250 by default).

--metric-resolution
The interval at which metrics are scraped from the kubelets. By default it is one minute.

--tls-cipher-suites
Comma-separated list of cipher suites accepted by the server. The list specified in the YAML snippet is the one K3s applies to its embedded metrics-server deployment.
Important
Review this patch whenever you update the metrics-server!
Every time you update the metrics-server service in your setup, do not forget to compare the patched values against the official deployment declaration of the newer version you deploy. Otherwise, you could end up with errors due to deprecated arguments or incorrect values.
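A quick way to spot such changes is to diff the argument lists of the old and new manifests. The filenames below are hypothetical; adjust them to wherever you keep each downloaded components.yaml:

```
$ diff <(grep -A 5 'args:' components-v0.8.0.yaml) <(grep -A 5 'args:' components-new.yaml)
```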
Kustomize manifest for the metrics-server project
The Kustomize manifest of the metrics-server deployment is also where you specify the patches to apply to the resources of this deployment:
Create the kustomization.yaml file:

```
$ touch $HOME/k8sprjs/metrics-server/kustomization.yaml
```

Fill the kustomization.yaml file like this:

```yaml
# Metrics server setup
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.8.0/components.yaml

patches:
- path: patches/metrics-server.deployment.patch.yaml
```

Notice that:
- In the resources list you have the URL to the official components.yaml file of the metrics-server, although you could also reference the downloaded file here.
- The patches section is where you specify all the patches to apply over the resources deployed in the Kustomize project. This section supports different ways of declaring and applying patches on resources; check them out in the official Kubernetes documentation. The method used here is probably the cleanest one, since it only requires specifying the path to the patch file.
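For reference, a patch entry can also carry an explicit target selector, useful when the patch file itself cannot identify the resource to modify. This alternative form is not required here, since the patch already names the Deployment, but it would look like:

```yaml
patches:
- path: patches/metrics-server.deployment.patch.yaml
  target:
    kind: Deployment
    name: metrics-server
    namespace: kube-system
```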
Validating the Kustomize YAML output
Test the Kustomize project with kubectl, redirecting its output to less or some other pager:

```
$ kubectl kustomize $HOME/k8sprjs/metrics-server | less
```

In the output, look for the Deployment object and ensure that the args parameters and the tolerations section appear set as expected:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
        image: registry.k8s.io/metrics-server/metrics-server:v0.8.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 10250
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
```

Deploying metrics-server
With the metrics-server Kustomize project ready, you can deploy it:
Apply the Kustomize project to finally deploy metrics-server in your cluster:

```
$ kubectl apply -k $HOME/k8sprjs/metrics-server/
```

After a minute or so, check that the metrics-server pod and service are running:

```
$ kubectl get pods,svc -n kube-system | grep metrics
pod/metrics-server-5f87696c77-j7zgd   1/1   Running   0   41s
service/metrics-server   ClusterIP   10.43.50.63   <none>   443/TCP   41s
```

You should get two lines regarding metrics-server. Also notice that the metrics-server is set in the kube-system namespace and that it does not have an external IP, only an internal cluster IP (10.43.50.63 in the snippet above).
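Besides the pod and service, metrics-server registers itself as an aggregated API in the cluster. You can confirm that this API is registered and available with:

```
$ kubectl get apiservice v1beta1.metrics.k8s.io
```

The AVAILABLE column of the output should show True once the service is up and serving metrics.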
Checking the metrics-server service
To see the resource usage values scraped by metrics-server, you have to use the kubectl top command. You can get values both from nodes and pods:
Get values from nodes with kubectl top node:

```
$ kubectl top node
NAME          CPU(cores)   CPU(%)   MEMORY(bytes)   MEMORY(%)
k3sagent01    64m          2%       561Mi           28%
k3sagent02    69m          3%       516Mi           26%
k3sserver01   156m         7%       785Mi           54%
```

Get values from pods with kubectl top pods, although always specifying a namespace (remember, pods are namespaced in Kubernetes):

```
$ kubectl top pods -A
NAMESPACE        NAME                                      CPU(cores)   MEMORY(bytes)
kube-system      coredns-64fd4b4794-phpd5                  5m           14Mi
kube-system      local-path-provisioner-774c6665dc-bzp9n   1m           8Mi
kube-system      metrics-server-5f87696c77-j7zgd           7m           18Mi
kube-system      traefik-c98fdf6fb-z87fh                   1m           28Mi
metallb-system   controller-58fdf44d87-kfc2f               5m           24Mi
metallb-system   speaker-jqcxt                             12m          24Mi
metallb-system   speaker-v7qkb                             12m          24Mi
metallb-system   speaker-xfqft                             13m          61Mi
```

Here the top pod command has the -A option to get the metrics from pods running in all namespaces of the cluster.
To see all the options available for both top commands, use the --help option.
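If you are curious, you can also query the metrics API directly, which returns the raw JSON that kubectl top formats for you:

```
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```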
Metrics-server’s Kustomize project attached to this guide
You can find the Kustomize project for this metrics-server deployment in the following attached folder:
Relevant system paths
Folders on remote kubectl client
$HOME/k8sprjs
$HOME/k8sprjs/metrics-server
$HOME/k8sprjs/metrics-server/patches
Files on remote kubectl client
$HOME/k8sprjs/metrics-server/kustomization.yaml
$HOME/k8sprjs/metrics-server/patches/metrics-server.deployment.patch.yaml
References
Kubernetes
Kubernetes Metrics Server
Related to Kubernetes Metrics Server
- GitHub. K3s. releases. K3s v1.33.4+k3s1
- ComputingForGeeks. How To Install Metrics Server on a Kubernetes Cluster
- StackOverflow. How to troubleshoot metrics-server on kubeadm?
- Reddit. Kubernetes. [learner] Debugging issue with metrics-server
- DEV. The case of disappearing metrics in Kubernetes
- StackOverflow. Query on kubernetes metrics-server metrics values