Deploying Kubernetes with Kubeadm and Calico - Part 1

Introduction

This post is part 1 of a series that details the steps necessary to deploy a Kubernetes cluster with Kubeadm and Calico.

I recently started studying for the Certified Kubernetes Administrator Exam, and I thought writing this post would be helpful for me and for any others who are interested in learning about Kubernetes.

This Kubernetes series was a long one to write. If you’ve researched Kubernetes deployment processes, you may have seen there is no shortage of them. Manual deployments like the one I’ve documented can be time-consuming, but they can also give you a substantial foundation in the technology. I hope you find this series of posts as valuable as I have.

Welcome to Part 1!
For the first half of this series, we will work through all of the prerequisites and system changes needed to prepare our virtual machines for installing Kubernetes. None of these commands are node-specific; they should be run on every virtual machine we plan to run Kubernetes on.

In Part 2, we will cover the node-specific installation processes for our controller and worker nodes. We will complete the cluster initialization, Calico configuration, and post-installation steps. We will then test a pod deployment on our new cluster to finish the series.

Let’s get started!


The Lab Topology

For this deployment, I’m running 3 virtual machines with Ubuntu Server 20 installed. Each virtual machine has 2 vCPUs, 4 GB of RAM, 50 GB of storage, and a static IP address.

As a reference, the topology of the Kubernetes lab is posted below:

[Image: Kubernetes lab topology diagram]




Prerequisites

The following steps should be taken on all of the virtual machines we will be using for our Kubernetes cluster.

General Preparation

To kick us off, we will deal with swap. Swap is usually a good thing: a swap file gives the kernel disk-backed space to move inactive memory pages into when RAM runs low. In the case of Kubernetes, we don’t want this, because it prevents the kubelet from accurately managing and enforcing the memory resources of our containers.

We disable swap in two ways. First, we will disable the running process by executing the swapoff command.

sudo swapoff -a
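
If you want to double-check, the memory summary should now report zero swap in use; the exact numbers will vary per machine.

free -h
...
Swap:             0B          0B          0B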

Next, we want to remove the swap configuration from the /etc/fstab file. Removing this will prevent swap from being configured at boot.

I’ve written a sed command that will disable swap by inserting a # symbol before the swap setting. If this command does not work for you, you will need to manually edit the /etc/fstab file.

To learn more about sed and what it can do, check out this link: GNU Sed

In the console of our virtual machine, we can check the existing /etc/fstab configuration to verify that swap is enabled before making any changes.

sudo cat /etc/fstab
...
/swap.img none swap sw 0 0

Now we will run the sed command against our machines.

sudo sed -e '\/swap/ s/^#*/#/' -i /etc/fstab

To confirm the change, we can run cat on the /etc/fstab file.

sudo cat /etc/fstab
...
#/swap.img none swap sw 0 0

For our pod networking to function properly, we will need to enable two kernel modules on our nodes: overlay and br_netfilter.

First, we need to ensure these modules will load at boot. To do this, we will create a configuration file in /etc/modules-load.d on all of our machines.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

We can run cat on the new file to confirm it was created.

cat /etc/modules-load.d/k8s.conf 
...
overlay
br_netfilter

Now, if we rebooted our machines, the modules would load at boot time. To load them immediately without rebooting, we can run the modprobe command.

sudo modprobe overlay
sudo modprobe br_netfilter

Let’s confirm the modules are loaded with lsmod. If the modules have not loaded, they will not be listed in the command results.

lsmod | grep overlay && lsmod | grep br_netfilter
...
overlay               118784  0
br_netfilter           28672  0
bridge                176128  1 br_netfilter

For our pod networking to function correctly, we will need to make some changes to IP forwarding and iptables bridging behavior. To do so, we will be using sysctl, a command for changing kernel parameters while the system is running.

For these changes to persist across reboots, sysctl will need to read them from a conf file. You will need to run the following command on each machine you would like to participate in the cluster.

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Next, we will apply the new parameters without rebooting.

sudo sysctl --system
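
To spot-check that the new parameters are active, we can query them directly with sysctl. Each of these should return a value of 1.

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
...
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1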

Now that we’ve made our system-level changes to support Kubernetes, we will need to install our prerequisites. The following packages will need to be installed to support the download and installation of Kubernetes, Calico, and containerd.

sudo apt-get update && sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Installing Containerd

By design, Kubernetes does not install with any particular container runtime engine defined. For my lab, I will be installing Containerd. Containerd is a popular container runtime from Docker that I have experience with. If you are following along with this tutorial, my instructions will apply specifically to implementing Containerd. For more detailed information on Containerd, please visit the following link: Containerd - Getting Started

On each machine, we will need to download Docker’s official signing key to begin our installation process.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Next, we will set up the stable branch of Docker’s package repository. This repository is where we will download the containerd.io package from.

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

On each machine, we will update our cache and install the containerd.io package.

sudo apt-get update && sudo apt-get install -y containerd.io
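
Optionally, we can confirm the install worked by asking containerd for its version; the exact version you see will depend on when you install.

containerd --version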

Now that Containerd has been installed, we can begin the configuration work.

Configuring Containerd

The containerd software does not ship with a configuration file. It does, however, provide a command for generating a default configuration file that we can then customize.

On our machines we will create the configuration file by running the containerd config default command.

sudo containerd config default | sudo tee /etc/containerd/config.toml

The majority of settings in the configuration file will remain at their defaults; however, we will need to enable the SystemdCgroup setting so that containerd uses the systemd cgroup driver that Kubernetes will integrate with.
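
For reference, the setting lives under the runc options section of /etc/containerd/config.toml; the snippet below is roughly what it looks like before the change, though the exact plugin path can vary between containerd versions.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false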

On each machine, you can modify this value manually, or you can use the sed command I’ve provided below.

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

We can quickly confirm that the change has been successful by running a cat command on the file.

cat /etc/containerd/config.toml | grep SystemdCgroup
...
SystemdCgroup = true

Note!
If the SystemdCgroup property has not changed with the sed command, you will need to modify it manually before restarting the containerd service.


If you decide to use a different container runtime, configure the appropriate Cgroup settings. Check the documentation here for your specific requirements:
Kubernetes - Container Runtimes


To finalize our containerd changes, we will restart the containerd service.

sudo systemctl restart containerd 
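
After the restart, it’s a good idea to confirm that the service came back up; systemctl should report it as active.

sudo systemctl is-active containerd
...
active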

Now we are ready to continue on with the remaining prerequisites for our virtual machines.


Housekeeping

To support our future installation of the Kubernetes components, we will prepare the repository. First, let’s add the Google public signing key for the Kubernetes repository.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Let’s verify the new keys were added by running apt-key list. We should see the two new keys, along with any other keys you’ve added.

apt-key list
...
/etc/apt/trusted.gpg
--------------------
pub   rsa2048 2020-12-04 [SC] [expires: 2022-12-04]
      59FE 0256 8272 69DC 8157  8F92 8B57 C5C2 836F 4BEB
uid           [ unknown] gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <glinux-team@google.com>
sub   rsa2048 2020-12-04 [E]

pub   rsa2048 2021-03-01 [SC] [expires: 2023-03-02]
      7F92 E05B 3109 3BEF 5A3C  2D38 FEEA 9169 307E A071
uid           [ unknown] Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)
sub   rsa2048 2021-03-01 [E]

Now we can add the official Kubernetes repository as an available location to pull packages from.

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
    sudo tee /etc/apt/sources.list.d/kubernetes.list

Let’s verify it was added correctly by running cat against the new kubernetes.list file.

cat /etc/apt/sources.list.d/kubernetes.list
...
deb https://apt.kubernetes.io/ kubernetes-xenial main

Make sure to complete all of these steps on all of the nodes in the cluster.

Closing Thoughts

This was Part 1 in the series on deploying Kubernetes with Kubeadm and Calico. The goal of this post was to get all of the prerequisites out of the way before we move on to the controller- and worker-specific steps in Part 2.

Stay tuned for Part 2!
