
software engineering

Installing Kubernetes Cluster on CentOS 8 - Raspberry PI

Prerequisite

Before installing a Kubernetes cluster on CentOS 8, you obviously need CentOS 8 running on your Raspberry Pi boards. See:

ilyoan.tistory.com/entry/Installation-of-CentOS-8-on-RaspberryPi-4-B?category=901156

Because we are going to set up a Kubernetes cluster, it is recommended to have at least three boards: one for the master node and two for worker nodes.

Installation of master node

hosts

On your master node, set the system host name and update /etc/hosts file.

$ hostnamectl set-hostname master-node

$ cat <<EOF >> /etc/hosts
192.168.0.15 master-node
192.168.0.16 worker-node-1
192.168.0.17 worker-node-2
EOF
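Assuming the three IP addresses above match your actual network, you can sanity-check the new entries with getent:

```shell
# Each name should resolve to the address just added to /etc/hosts
getent hosts master-node worker-node-1 worker-node-2
```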

firewall and netfilter

Kubernetes uses various ports for various purposes. Here is the list of ports a Kubernetes master node is going to use:

kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#control-plane-node-s

As you can see, master nodes and worker nodes use different sets of ports.

master node

$ firewall-cmd --permanent --add-port=6443/tcp
$ firewall-cmd --permanent --add-port=2379-2380/tcp
$ firewall-cmd --permanent --add-port=10250/tcp
$ firewall-cmd --permanent --add-port=10251/tcp
$ firewall-cmd --permanent --add-port=10252/tcp
$ firewall-cmd --permanent --add-port=10255/tcp
$ firewall-cmd --reload

$ modprobe br_netfilter
$ echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
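Note that modprobe and the echo above do not survive a reboot. One common way to make both persistent (a sketch; the k8s.conf file names are just a convention) is the following, and the same applies on the worker nodes below:

```shell
# Load br_netfilter automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Keep bridged traffic visible to iptables after reboot
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl setting now
sysctl --system
```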

worker node

$ firewall-cmd --permanent --add-port=6783/tcp
$ firewall-cmd --permanent --add-port=10250/tcp
$ firewall-cmd --permanent --add-port=10255/tcp
$ firewall-cmd --permanent --add-port=30000-32767/tcp
$ firewall-cmd  --reload

$ modprobe br_netfilter
$ echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

SELinux

SELinux needs to be disabled (or set to permissive mode). As of Dec. 2020, the kubeadm installation guide (github.com/jfcgaspar/website/blob/master/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md#installing-kubeadm-kubelet-and-kubectl) says: "Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet."

$ setenforce 0
$ sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

docker-ce

Add docker repository for dnf and install docker-ce.

$ dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

$ dnf install https://download.docker.com/linux/centos/8/aarch64/stable/Packages/containerd.io-1.4.3-3.1.el8.aarch64.rpm

$ dnf install docker-ce

Now, you can enable and start docker service.

$ systemctl enable docker
$ systemctl start docker

$ docker -v
Docker version 19.03.13, build 4484c46
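kubeadm expects the kubelet and the container runtime to agree on a cgroup driver, and with this Docker version it may warn that Docker is using cgroupfs. If you see that warning, a commonly used /etc/docker/daemon.json (an optional step, not part of the original walkthrough) switches Docker to systemd:

```shell
# Switch Docker's cgroup driver to systemd to match the kubelet
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```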

kubeadm

Add kubernetes repository.

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

You can now install kubeadm package.

$ dnf install kubeadm -y 

And then enable and start the service.

$ systemctl enable kubelet
$ systemctl start kubelet

control plane (master node)

You can initialize the control plane using kubeadm. Before jumping into that, we have to disable swap.

$ swapoff -a

Now it should be ready to initialize the control plane.

$ kubeadm init

...

kubeadm join 192.168.0.15:6443 --token wfwz25.d0xu1yjk16r35jk8 --discovery-token-ca-cert-hash sha256:ab97fb9bb3ae4e8869c5a0145f2fe737526e81d79b0710126355e6306347e976

At the last line of the command's output, you will find a join command that will be used from your worker nodes later, so store it somewhere.

In case you lost the join command or the token expired, you can simply create a new token and have the join command printed:

$ kubeadm token create --print-join-command
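If you only lost the hash part, it can also be recomputed from the control plane's CA certificate (the path below is kubeadm's default; the trailing sed just formats the digest output):

```shell
# Recompute --discovery-token-ca-cert-hash from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* /sha256:/'
```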

If you want to allow a user to use the newly initialized cluster, run:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
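Alternatively, if you are running as root, you can point kubectl at the admin config directly instead of copying it:

```shell
# Tell kubectl where to find the cluster credentials
export KUBECONFIG=/etc/kubernetes/admin.conf
```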

Now confirm that the kubectl command works.

$ kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
master-node   Ready    control-plane,master   6m5s   v1.19.4

cluster network

The next step is to install a CNI (Container Network Interface) plugin. There are various implementations available. Among them, I found Weave Net simple to install; it was actually the first one I was able to configure without any trouble.

Use this command to install Weave Net.

$ export kubever=$(kubectl version | base64 | tr -d '\n')
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
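Before joining workers, you can check that the Weave Net pods come up (the name label below is the one the Weave manifest sets):

```shell
# Weave Net runs as a DaemonSet in kube-system
kubectl get pods -n kube-system -l name=weave-net

# The master should report Ready once the CNI is running
kubectl get nodes
```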

You can find the list of CNIs here: kubernetes.io/docs/concepts/cluster-administration/networking/

 


Join the master node (worker nodes)

Now the worker nodes are ready to join a cluster. Let's add them to the master node you've installed.

Copy the join command that kubeadm init generated and paste it on your worker node.

$ kubeadm join 192.168.0.15:6443 --token wfwz25.d0xu1yjk16r35jk8 --discovery-token-ca-cert-hash sha256:ab97fb9bb3ae4e8869c5a0145f2fe737526e81d79b0710126355e6306347e976

Go to your master node and see if the worker nodes have successfully joined the cluster.

$ kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
master-node     Ready    control-plane,master   36m     v1.19.4
worker-node-1   Ready    <none>                 15s     v1.19.4
worker-node-2   Ready    <none>                 13s     v1.19.4
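As an optional smoke test (the deployment name and image here are just examples), you can schedule a pod and confirm it lands on a worker:

```shell
kubectl create deployment hello --image=nginx
kubectl get pods -o wide   # the NODE column should show a worker node
kubectl delete deployment hello
```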

 


Troubleshooting

missing required cgroups: memory

Add the following options to /boot/cmdline.txt and reboot.

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
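After the reboot, you can confirm that the memory cgroup controller is enabled (its "enabled" column should read 1):

```shell
# The memory line should show enabled = 1
grep memory /proc/cgroups
```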