Installing k3s on Incus

After several attempts with Microk8s, which kept producing errors on both LXC/LXD and physical servers, I switched to K3s, a simpler and lighter Kubernetes distribution. Installation on physical servers and overall performance were noticeably better than with Microk8s from the first hours, and I regret the time I wasted chasing the errors Microk8s produced on the servers and in the deployment of services.

Here are some notes on how I installed K3s with HA using an external DB on Incus, as shown in the image:

Figure 1: K3s architecture with HA using an external DB. Source: K3s Documentation

K3s with HA
For HA you need more than one server node: at least three with the embedded etcd datastore, or at least two plus an external database. For this guide, I created 6 containers in Incus to form the cluster, plus one container for the Postgres database.

Incus Profile

To run K3s on Incus you need a specific profile; fortunately, we can reuse the profiles created for Microk8s:

  • If the host system’s file system is ZFS, use this version
  • If the host system’s file system is EXT4, use this version

If you want your K3s cluster to be accessible from the local network, you can also include the network bridge in the profile. You can add the profile via the command line or using the Incus web interface.
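
If you prefer to build the profile by hand, the sketch below shows the kind of settings those Microk8s profiles contain (privileged container, nesting, extra kernel modules, relaxed confinement). The profile name k3s and the exact module list are my assumptions; adapt them to your host rather than treating this as the published profile:

$ incus profile create k3s
$ cat <<'EOF' | incus profile edit k3s
# Sketch of a Kubernetes-friendly profile, applied alongside the default profile.
config:
  security.privileged: "true"
  security.nesting: "true"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
description: K3s on Incus (sketch)
devices: {}
EOF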

Server Configuration

With the profile created, I used the Rocky Linux 9 image:

Figure 2: Selecting the Rocky Linux 9 image

Now, I prepared the server with these commands as the root user:

# adduser node
# passwd node
# usermod -aG wheel node
# echo 'L /dev/kmsg - - - - /dev/console' > /etc/tmpfiles.d/kmsg.conf
# dnf upgrade -y
# dnf install openssh-server nano nfs-utils wget curl ca-certificates -y
# systemctl enable --now sshd
# su - node
$ ssh-keygen
$ nano .ssh/authorized_keys # Add the public key from the host machine here.
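
Instead of pasting the key by hand, you can also push it into the container from the host with Incus. A minimal sketch, assuming the container is named master and the host key is ~/.ssh/id_ed25519.pub:

$ incus file push ~/.ssh/id_ed25519.pub master/home/node/.ssh/authorized_keys
$ incus exec master -- chown node:node /home/node/.ssh/authorized_keys
$ incus exec master -- chmod 600 /home/node/.ssh/authorized_keys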

We now have the basic configuration for all cluster servers. Just snapshot this container and create the other servers from that snapshot:

$ incus snapshot create master mastersnap
$ incus copy master/mastersnap node2
$ incus copy master/mastersnap node3
$ incus copy master/mastersnap node4
$ incus copy master/mastersnap node5
$ incus copy master/mastersnap node6
$ incus copy master/mastersnap db
$ incus list
+--------+---------+------+------+-----------+-----------+
| NAME   | STATE   | IPV4 | IPV6 | TYPE      | SNAPSHOTS |
+--------+---------+------+------+-----------+-----------+
| db     | STOPPED |      |      | CONTAINER | 0         |
| master | STOPPED |      |      | CONTAINER | 1         |
| node2  | STOPPED |      |      | CONTAINER | 0         |
| node3  | STOPPED |      |      | CONTAINER | 0         |
| node4  | STOPPED |      |      | CONTAINER | 0         |
| node5  | STOPPED |      |      | CONTAINER | 0         |
| node6  | STOPPED |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+
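
The copies are created in the STOPPED state, as the listing shows; they can all be started in one go:

$ incus start master node2 node3 node4 node5 node6 db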

K3s

Configuring the /etc/hosts file

It is very important to give each server a static IP and to map each address to its hostname in the /etc/hosts file. In my case, this file looked like this:

192.168.5.11 master
192.168.5.12 node2
192.168.5.13 node3
192.168.5.14 node4
192.168.5.15 node5
192.168.5.16 node6
192.168.5.10 db

The /etc/hosts file must have the same contents on all servers, and on the host machine as well so the containers can be reached over SSH by name.
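
How you pin each address depends on your network setup. If the containers sit on an Incus-managed bridge, one option is to override the NIC device per container; the device name eth0 and the addresses below are assumptions matching the layout above:

$ incus config device override master eth0 ipv4.address=192.168.5.11
$ incus config device override node2 eth0 ipv4.address=192.168.5.12
# ...repeat for node3-node6 and db, then restart so the leases are renewed:
$ incus restart master node2 node3 node4 node5 node6 db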

Managing multiple servers at once

A tool that can be useful for managing multiple servers is Cluster SSH. The syntax is:

$ cssh node@master node@node2 node@node3 ...

Configuring the DB server with Postgres

Installing Docker on the DB server:

# dnf install 'dnf-command(config-manager)'
# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# dnf -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# systemctl enable --now docker
# usermod -aG docker node
# su - node
$ mkdir -p postgres/data
$ nano postgres/docker-compose.yaml # paste the compose file shown below
$ cd postgres && docker compose up -d

This is the docker-compose.yaml for Postgres that I used:

version: '3.9'
services:
  postgres:
    image: postgres:latest
    ports:
    - 5432:5432
    volumes:
    - ./data:/var/lib/postgresql/data
    environment:
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
    - POSTGRES_DB=k3s
    restart: always

The DB server with Postgres is now ready.
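
Before installing K3s it is worth confirming that Postgres is actually up and accepting connections. A couple of quick checks, run from the postgres directory (the service name postgres matches the compose file above):

$ docker compose ps
$ docker compose logs --tail 5 postgres
$ docker compose exec postgres pg_isready -U postgres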

Installing K3s

As we have 6 servers, we will use 3 as control-plane (server) nodes and 3 as workers (agents).

For the three control-plane servers

On the master server (the first control-plane node), I installed K3s pointing it at the external Postgres datastore:

$ curl -sfL https://get.k3s.io | K3S_TOKEN=fs3J@ivWEjj@6n sh -s - server --datastore-endpoint="postgres://postgres:postgres@db:5432/k3s"
# verify the k3s service:
$ sudo systemctl status k3s
# make kubectl usable as the node user:
$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $USER ~/.kube/config
$ sudo chmod 600 ~/.kube/config
$ echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc

Then, on the other two servers:

$ curl -sfL https://get.k3s.io | K3S_TOKEN=fs3J@ivWEjj@6n sh -s - server --datastore-endpoint="postgres://postgres:postgres@db:5432/k3s"
# verify the k3s service:
$ sudo systemctl status k3s

On the 3 worker servers

Simply use these commands on all the remaining servers:

$ curl -sfL https://get.k3s.io | K3S_TOKEN=fs3J@ivWEjj@6n sh -s - agent --server https://master:6443
$ sudo systemctl status k3s-agent

K3s Documentation

All the commands used and their parameters can be found in the official K3s documentation, in the High Availability Embedded etcd and High Availability External DB sections.

Verification and testing

If everything went well, we can check the status of the cluster from the master server:

[node@master ~]$ kubectl get no
NAME     STATUS   ROLES                  AGE     VERSION
node3    Ready    control-plane,master   166m    v1.28.7+k3s1
master   Ready    control-plane,master   3h18m   v1.28.7+k3s1
node2    Ready    control-plane,master   169m    v1.28.7+k3s1
node5    Ready    <none>                 163m    v1.28.7+k3s1
node4    Ready    <none>                 163m    v1.28.7+k3s1
node6    Ready    <none>                 162m    v1.28.7+k3s1
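
The agents appear with ROLES <none>. If you want them labeled as workers in this listing, you can optionally add a role label from master; the worker value here is just a convention, not something K3s requires:

$ kubectl label node node4 node5 node6 node-role.kubernetes.io/worker=worker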

Testing with Microbot

Now, let’s launch a service with 6 replicas that can be accessed from the micro.reset.com domain:

microbot-deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microbot
spec:
  replicas: 6
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1
        ports:
        - containerPort: 80

microbot-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: microbot
spec:
  selector:
    app: microbot
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

microbot-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microbot
spec:
  rules:
  - host: micro.reset.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: microbot
            port:
              number: 80

And on the master server, we deploy the service:

$ kubectl create -f microbot-deploy.yaml
$ kubectl create -f microbot-svc.yaml
$ kubectl create -f microbot-ingress.yaml

We verify that the pods are created:

[node@master ~]$ kubectl get po
NAME                        READY   STATUS    RESTARTS       AGE
microbot-78865c7965-zg68s   1/1     Running   1 (136m ago)   161m
microbot-78865c7965-xbp4j   1/1     Running   1 (136m ago)   161m
microbot-78865c7965-wxzf8   1/1     Running   1 (136m ago)   161m
microbot-78865c7965-m8dtj   1/1     Running   1 (136m ago)   161m
microbot-78865c7965-nzcm9   1/1     Running   1 (136m ago)   161m
microbot-78865c7965-bfb2p   1/1     Running   1 (136m ago)   161m
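
It is also worth checking that the LoadBalancer service received an address from K3s's built-in ServiceLB and that the bundled Traefik ingress controller picked up the rule; the exact EXTERNAL-IP values will depend on your node addresses:

$ kubectl get svc microbot
$ kubectl get ingress microbot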

On our host machine, we add micro.reset.com to the hosts file and test it in the browser:

192.168.5.11 master micro.reset.com
...

Figure 3: Microbot served at micro.reset.com
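
If you prefer the command line to the browser, the same test works with curl; sending the Host header explicitly avoids touching the hosts file at all:

$ curl -H "Host: micro.reset.com" http://192.168.5.11/
$ curl http://micro.reset.com/ # once the hosts entry above is in place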

Congratulations, your K3s cluster is now operational.