During my attempts to enable persistence for services in the cluster, I found several options that seemed promising, such as the Mayastor plugin. However, despite my efforts, I was unable to get it to work in LXD. After searching for alternative solutions, I decided to use NFS, which worked flawlessly from the start and saved me a lot of
time. Here are the steps to use it with Microk8s.
Installing the NFS Server on the Host Machine
Depending on the Linux distribution being used, the necessary NFS server packages must be installed; a rough example is shown below. The most important options are found in the /etc/nfs.conf and /etc/exports files.
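The package names below are assumptions that vary by distribution; they are only meant as a starting point:

# Debian/Ubuntu
$ sudo apt install nfs-kernel-server
# Arch Linux
$ sudo pacman -S nfs-utils
# Fedora/RHEL
$ sudo dnf install nfs-utils

With the server package in place, let’s take a look at the first file: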
# /etc/nfs.conf
...
[nfsd]
host = 10.239.143.1
...
In this file, it’s only necessary to specify the IP address that the NFSD service will use. The IP address 10.239.143.1 corresponds to the lxdbr0 interface, which allows us to communicate with the cluster. We can verify this by running the command:
$ lxc list
+--------+---------+----------------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+----------------------------+-----------------------------------------------+-----------+-----------+
| master | RUNNING | 10.239.143.98 (eth0) | fd42:e01c:70e5:5c30:216:3eff:fe89:cb44 (eth0) | CONTAINER | 1 |
| node2 | RUNNING | 10.239.143.172 (eth0) | fd42:e01c:70e5:5c30:216:3eff:fe82:ca09 (eth0) | CONTAINER | 0 |
| node3 | RUNNING | 10.239.143.235 (eth0) | fd42:e01c:70e5:5c30:216:3eff:fe6f:8e95 (eth0) | CONTAINER | 0 |
| node4 | RUNNING | 10.239.143.95 (eth0) | fd42:e01c:70e5:5c30:216:3eff:fe61:9890 (eth0) | CONTAINER | 0 |
| node5 | RUNNING | 10.239.143.104 (eth0) | fd42:e01c:70e5:5c30:216:3eff:fec1:b451 (eth0) | CONTAINER | 0 |
| node6 | RUNNING | 10.239.143.17 (eth0) | fd42:e01c:70e5:5c30:216:3eff:fe74:7a90 (eth0) | CONTAINER | 0 |
+--------+---------+----------------------------+-----------------------------------------------+-----------+-----------+
We can see that the IP addresses of the nodes in the cluster are in the same segment as the IP address of the lxdbr0 interface.
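One detail to keep in mind: if /etc/nfs.conf is changed while the NFS server is already running, the daemon has to be restarted for the new host setting to take effect. On most systemd-based distributions the unit is called nfs-server, but that name is an assumption and may differ:

$ sudo systemctl restart nfs-server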
The second file, /etc/exports, contains the directories that will be used for NFS storage:
$ mkdir -p /srv/nfs/kube
$ chmod -R 0777 /srv/nfs
$ cat /etc/exports
# /etc/exports - exports(5) - directories exported to NFS clients
#
# Example for NFSv3:
# /srv/home hostname1(rw,sync) hostname2(ro,sync)
# Example for NFSv4:
# /srv/nfs4 hostname1(rw,sync,fsid=0)
# /srv/nfs4/home hostname1(rw,sync,nohide)
# Using Kerberos and integrity checking:
# /srv/nfs4 *(rw,sync,sec=krb5i,fsid=0)
# /srv/nfs4/home *(rw,sync,sec=krb5i,nohide)
#
# Use `exportfs -arv` to reload.
/srv/nfs 10.239.143.0/24(rw,sync,crossmnt,fsid=0)
/srv/nfs/kube 10.239.143.0/24(rw,sync,no_subtree_check,no_all_squash)
$ exportfs -arv
As we can see, there are two shared directories, accessible only from the 10.239.143.0/24 network, with the following permissions:
$ ls -alh /srv/nfs
total 12K
drwxrwxrwx 3 root root 4,0K Jan 21 01:32 .
drwxr-xr-x 5 root root 4,0K Jan 21 01:32 ..
drwxrwxrwx 2 root root 4,0K Jan 21 01:32 kube
It was not necessary to change the ownership to nobody:nobody. This is all the configuration required on the host machine. We can test the NFS setup by mounting the /srv/nfs directory from a node in the cluster using the command:
$ mkdir /tmp/test
$ mount -t nfs 10.239.143.1:/srv/nfs /tmp/test
# verify
$ mount | grep nfs
We can create a file in /tmp/test to verify the write permissions.
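For example, a quick write test could look like this (the file name is arbitrary, and the temporary mount can be removed once the test succeeds):

$ touch /tmp/test/write-check
$ ls -l /tmp/test
$ rm /tmp/test/write-check
$ umount /tmp/test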
Installing the NFS Addon in the Cluster
Now that we are sure the NFS service is working correctly on the host machine and the nodes in the cluster, we can install the NFS addon following the official NFS guide for Microk8s, which also shows how to install and configure the NFS server. This should be done on the master node.
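For reference, the installation in that guide essentially deploys the csi-driver-nfs Helm chart; the commands below are only a sketch of that step and should be cross-checked against the guide for the current chart location and values:

$ microk8s enable helm3
$ microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
$ microk8s helm3 repo update
$ microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
    --namespace kube-system \
    --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet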
Don’t forget that this addon will only work correctly if the NFS packages are installed on each node in the cluster. In the first part of the cluster installation, the nfs-utils package was installed on the master node, and since we used the snapshot of that node, all nodes in the cluster have NFS installed.
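If in doubt, this can be spot-checked from the host; the container names are the ones from the lxc list output above, and the check simply looks for the mount.nfs helper inside each node:

$ for n in master node2 node3 node4 node5 node6; do
    echo "== $n =="
    lxc exec "$n" -- sh -c 'command -v mount.nfs || echo "mount.nfs missing"'
  done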
Creating the Storage Class
According to the official guide, the Storage Class provided should work correctly, but in practice, it didn’t. After many hours of troubleshooting, I found that the following Storage Class works:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.239.143.1
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - soft
  - nolock
  - nfsvers=3
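Assuming the manifest is saved as nfs-csi.yaml (the file name is arbitrary), it can be applied from the master node:

$ microk8s kubectl apply -f nfs-csi.yaml
$ microk8s kubectl get storageclass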
The soft, nolock, and nfsvers=3 mount options were what made the difference in this setup. With this Storage Class, each service can get persistent storage from any node in the cluster, and we can now perform the tests suggested in the official guide.
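As a quick end-to-end check, a minimal PersistentVolumeClaim sketch like the following (the name and size are arbitrary, not taken from the guide) should reach the Bound state shortly after being applied:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: nfs-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

$ microk8s kubectl apply -f test-nfs-pvc.yaml
$ microk8s kubectl get pvc test-nfs-pvc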