K3s cluster with two or more server nodes
Real Kubernetes clusters have more than one server node
In this chapter of the guide you have seen how to create a K3s Kubernetes cluster with just one server node. This works fine and suits the constrained scenario of this guide. But if you want a more complete Kubernetes experience, you need to know how to set up two or more server nodes in your cluster.
This appendix chapter summarizes what to add to, or do differently from, the procedures explained in the guide, with the goal of creating a K3s cluster with two server nodes.
Important
You cannot reuse the single-node cluster built with this guide
You cannot convert a single-node cluster setup that uses the embedded SQLite database into a multi-server one. You have to do a clean new install of the K3s software, although you can reuse the VMs you already have.
Add a new VM to act as the second server node
The first step is rather obvious: create a new VM and configure it to be the second K3s server node. And by configure I mean the following.
1. Create a new K3s server node VM by link-cloning the `k3snodetpl` VM template, but:

    - Give it the next VM ID number after the one assigned to the first K3s server node: the first server has the ID `411`, so assign the ID `412` to this new VM.
    - Follow the same naming convention for this VM, changing only the number in the string: the first server is called `k3sserver01`, so the new VM should be called `k3sserver02`.

2. Configure this new `k3sserver02` VM as you did with the first server node VM, although:

    - Assign to its network cards the next IPs in the ranges reserved for server nodes in your network configuration. If you are using the same IP ranges as in the guide:
        - The net0 card should have `10.4.1.2`.
        - The net1 card should have `172.16.1.2`.
    - Change its hostname so it is unique and matches the name of the VM: if the VM is called `k3sserver02`, its hostname should also be `k3sserver02`.
    - Either import the configuration files for TFA and the SSH key pair from `k3sserver01`, or generate new ones for the `mgrsys` user:
        - TFA: the `/home/mgrsys/.google_authenticator` file.
        - SSH key pair: the entire `/home/mgrsys/.ssh` folder.
    - Either give the `mgrsys` user the same password as on the first server node (convenient, but not recommended) or assign it a new one.
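The ID, naming, and addressing conventions above are purely mechanical, so they can be sketched as a small shell snippet. This is only an illustration of the conventions (the `41N` VM IDs, `k3sserver0N` names, and `10.4.1.0`/`172.16.1.0` ranges are the ones assumed throughout the guide), not something you need to run:

```shell
# Conventions used in the guide for server node N (N = 2 here):
# VM ID 41N, name k3sserver0N, net0 10.4.1.N, net1 172.16.1.N.
n=2
vm_id=$((410 + n))                       # 412
vm_name=$(printf 'k3sserver%02d' "$n")   # k3sserver02
net0_ip="10.4.1.$n"                      # primary NIC, next IP in range
net1_ip="172.16.1.$n"                    # secondary NIC, next IP in range

echo "$vm_id $vm_name $net0_ip $net1_ip"
```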
Adapt the Proxmox VE firewall setup
You will need to add a bunch of extra firewall rules to allow this second server node to work properly in your K3s cluster. So open your Proxmox VE web console and do the following:
1. Go to the `Datacenter > Firewall > Alias` page, and add a new alias for the IP of your new VM's primary NIC:

    - Name `k3sserver02_net0`, IP `10.4.1.2`.

2. Browse to `Datacenter > Firewall > IPSet`, and there:

    - Add the `k3sserver02_net0` alias to the `k3s_nodes_net0_ips` set.

3. Open a shell terminal as `mgrsys` on your Proxmox VE host, then copy the firewall file of the first K3s server VM, giving the copy the VM ID of your second server VM:

    ```console
    $ cd /etc/pve/firewall/
    $ sudo cp 411.fw 412.fw
    ```

4. Modify the `412.fw` file so the IPSET blocks point to the correct IP aliases for the `k3sserver02` node:

    ```ini
    [OPTIONS]
    enable: 1
    ndp: 0
    log_level_out: info
    ipfilter: 1
    log_level_in: info

    [IPSET ipfilter-net0]
    dc/k3sserver02_net0

    [RULES]
    GROUP k3s_srvrs_net0_in -i net0
    ```
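Since `412.fw` is just `411.fw` with the node-specific alias names switched, the copy-and-edit step can also be done non-interactively with `sed`. A minimal sketch, which writes a sample `411.fw` to a temporary directory so it is self-contained; on the real host you would work directly in `/etc/pve/firewall` (and the `sed` substitution assumes the only node-specific strings in the file are the `k3sserver01_*` aliases):

```shell
# Self-contained demo: build a sample 411.fw, then derive 412.fw
# from it by rewriting the node-specific alias names.
tmp=$(mktemp -d)
cat > "$tmp/411.fw" <<'EOF'
[OPTIONS]
enable: 1

[IPSET ipfilter-net0]
dc/k3sserver01_net0

[RULES]
GROUP k3s_srvrs_net0_in -i net0
EOF

# Copy the first node's file, pointing the IPSET at the new alias.
sed 's/k3sserver01_net0/k3sserver02_net0/' "$tmp/411.fw" > "$tmp/412.fw"

grep 'k3sserver02_net0' "$tmp/412.fw"
```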
Setup of the FIRST K3s server node
The /etc/rancher/k3s/config.yaml file for the first server node (k3sserver01) is only slightly different from the one used in the single-server cluster scenario:
```yaml
# k3sserver01
cluster-domain: "homelab.cluster"
tls-san:
- "k3sserver01.homelab.cloud"
- "10.4.1.1"
flannel-backend: host-gw
flannel-iface: "ens19"
bind-address: "0.0.0.0"
https-listen-port: 6443
advertise-address: "172.16.1.1"
advertise-port: 6443
node-ip: "172.16.1.1"
node-external-ip: "10.4.1.1"
node-taint:
- "node-role.kubernetes.io/control-plane=true:NoSchedule"
kubelet-arg: "config=/etc/rancher/k3s/kubelet.conf"
disable:
- metrics-server
- servicelb
protect-kernel-defaults: true
secrets-encryption: true
agent-token: "SomeReallyLongPassword"
cluster-init: true
```

This config.yaml file is essentially the same as the one set for the k3sserver01 node in the guide, but with one extra parameter at the end:
cluster-init

Using this option initializes a new cluster that runs with an embedded etcd datastore.

Important

A K3s cluster with several server nodes will not work with just a SQLite datastore

"Fully fledged" K3s clusters require more advanced database engines to run, such as etcd.
With the config.yaml file ready, execute the K3s installer.
```console
$ wget -qO - https://get.k3s.io | INSTALL_K3S_VERSION="v1.33.4+k3s1" sh -s - server
```

Setup of the SECOND K3s server node
The k3sserver02 node’s config.yaml file
The /etc/rancher/k3s/config.yaml file for the second server has a few, but important, differences.
```yaml
# k3sserver02
cluster-domain: "homelab.cluster"
tls-san:
- "k3sserver02.homelab.cloud"
- "10.4.1.2"
flannel-backend: host-gw
flannel-iface: "ens19"
bind-address: "0.0.0.0"
https-listen-port: 6443
advertise-address: "172.16.1.2"
advertise-port: 6443
node-ip: "172.16.1.2"
node-external-ip: "10.4.1.2"
node-taint:
- "node-role.kubernetes.io/control-plane=true:NoSchedule"
kubelet-arg: "config=/etc/rancher/k3s/kubelet.conf"
disable:
- metrics-server
- servicelb
protect-kernel-defaults: true
secrets-encryption: true
agent-token: "SamePasswordAsInTheFirstServer"
server: "https://172.16.1.1:6443"
token: "K10<sha256 sum of cluster CA certificate>::server:<password>"
```

There's no cluster-init option, the agent-token is also present here, and two new parameters have been added:
agent-token

This has to be exactly the same password as in the first server node.

server

The address or URL of a server node in the cluster, in this case the IP of the secondary NIC of the first server node. Notice that you also need to specify the port, which in this case is the default `6443`.

token

Shared secret for authenticating this second server node against an already running cluster. The token is generated and saved on the first server node that starts said cluster, in the `/var/lib/rancher/k3s/server/token` file.
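Before launching the installer on the second node, it is easy to sanity-check its config.yaml with a few `grep` calls: a joining server must carry `server` and `token`, and must not carry `cluster-init`. A minimal sketch, which writes an abbreviated sample config to a temporary file so it runs anywhere; on the real node you would point `cfg` at `/etc/rancher/k3s/config.yaml` instead:

```shell
# Abbreviated sample of the second server's config.yaml; on the real
# node, set cfg=/etc/rancher/k3s/config.yaml instead of a temp file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# k3sserver02
node-ip: "172.16.1.2"
agent-token: "SamePasswordAsInTheFirstServer"
server: "https://172.16.1.1:6443"
token: "K10<sha256 sum of cluster CA certificate>::server:<password>"
EOF

# A joining server needs both of these parameters...
grep -q '^server:' "$cfg" && grep -q '^token:' "$cfg" \
  && echo "server and token present"
# ...and must NOT try to initialize a new cluster of its own.
grep -q '^cluster-init:' "$cfg" || echo "no cluster-init, node will join"
```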
Getting the server token from the FIRST server node k3sserver01
With the first server node up and running, obtain from it the server token you will need to authorize any other server node to join your K3s cluster. Use the `cat` command to read it from the `/var/lib/rancher/k3s/server/token` file that should exist on the `k3sserver01` VM:

```console
$ sudo cat /var/lib/rancher/k3s/server/token
K10288e77934e06dda1e7523114282478fdc1798545f04235a86b97c71a0bca41f4::server:baecfccac88699f5a12e228e72a69cf2
```

As with agent tokens, you can distinguish three parts in a server token string:
- After the `K10` characters, you have the SHA-256 sum of the `server-ca.crt` file generated on this first server node.
- The `server` string is the username that identifies all server nodes in the cluster.
- The remaining string after the final `:` is the password shared by all server nodes in the cluster.
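That three-part structure can be pulled apart with plain shell parameter expansion, which is handy if you ever need to script around the token. A sketch using the example token shown above (an illustrative value, not a real secret):

```shell
# Example value with the K10<ca-hash>::server:<password> shape of a
# real /var/lib/rancher/k3s/server/token; not a real secret.
token='K10288e77934e06dda1e7523114282478fdc1798545f04235a86b97c71a0bca41f4::server:baecfccac88699f5a12e228e72a69cf2'

ca_part=${token%%::*}   # "K10" + sha256 of the server CA certificate
creds=${token#*::}      # "server:<password>"
user=${creds%%:*}       # username shared by all server nodes
pass=${creds#*:}        # password shared by all server nodes

echo "user=$user"
```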
K3s installation of the SECOND server node k3sserver02
The procedure for your second K3s server node is as follows.

1. Edit the `/etc/rancher/k3s/config.yaml` file and verify that all its values are correct, in particular the interface and IPs, and both the `agent-token` and the `token`. Remember:

    - The `agent-token` must be the same password already set in the first server node.
    - The `token` value is stored on the first server node, in the `/var/lib/rancher/k3s/server/token` file.

2. With the `config.yaml` file properly set, launch the installation of your second server node:

    ```console
    $ wget -qO - https://get.k3s.io | INSTALL_K3S_VERSION="v1.33.4+k3s1" sh -s - server
    ```

3. On your first server node, execute the following `watch kubectl` command:

    ```console
    $ watch sudo kubectl get nodes -Ao wide
    ```

4. Observe the output until you see the new server join the cluster and reach the `Ready` STATUS:

    ```console
    Every 2.0s: sudo kubectl get nodes -Ao wide            k3sserver01: Thu Feb 20 12:00:01 2026

    NAME          STATUS   ROLES                       AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION        CONTAINER-RUNTIME
    k3sserver01   Ready    control-plane,etcd,master   163d    v1.33.4+k3s1   172.16.1.1    10.4.1.1      Debian GNU/Linux 13 (trixie)   6.12.73+deb13-amd64   containerd://2.1.5-k3s1
    k3sserver02   Ready    control-plane,etcd,master   5m32s   v1.33.4+k3s1   172.16.1.2    10.4.1.2      Debian GNU/Linux 13 (trixie)   6.12.73+deb13-amd64   containerd://2.1.5-k3s1
    ```

    Notice, in the `ROLES` column, the `etcd` role, which indicates that the server nodes are running the embedded etcd engine that comes with the K3s installation.
Regarding the K3s agent nodes
The agent nodes are installed with exactly the same config.yaml file and command you already saw in the guide. The only thing you might consider changing is making each of your agent nodes point to a different server node (the `server` parameter in their config.yaml file). Since the server nodes are always synchronized, it should not matter which server each agent connects to.