Deploying Kubernetes with Kubeadm and Calico - Part 1
Introduction
This post is part 1 of a series that details the steps necessary to deploy a Kubernetes cluster with Kubeadm and Calico.
I recently started studying for the Certified Kubernetes Administrator Exam, and I thought writing this post would be helpful for me and for any others who are interested in learning about Kubernetes.
This Kubernetes series was a long one to write. If you’ve researched Kubernetes deployment processes, you may have noticed there is no shortage of them. Manual deployments like the one I’ve documented can be time-consuming, but they also help you build a solid foundation in the technology. I hope you find this series of posts as valuable as I have.
Welcome to Part 1! For the first half of this series, we will go through all of the prerequisites and system changes needed to prepare our virtual machines for installing Kubernetes. None of the commands we cover here are node-specific; they should be run on every virtual machine we plan to include in the cluster.
In Part 2, we will cover the node-specific installation processes for our controller and worker nodes. We will complete the cluster initialization, Calico configuration, and post-installation steps. We will then test a pod deployment on our new cluster to finish the series.
Let’s get started!
The Lab Topology
For this deployment, I’m running 3 virtual machines with Ubuntu Server 20 installed. Each virtual machine has 2 vCPUs, 4 GB of RAM, 50 GB of storage, and a static IP address.
As a reference, the topology of the Kubernetes lab is posted below:
Prerequisites
The following steps should be taken on all of the virtual machines we will be using for our Kubernetes cluster.
General Preparation
To kick us off, we will start with swap. Swap is usually a good thing: a swap file gives the system somewhere to move inactive memory pages, effectively making additional memory available to processes. In the case of Kubernetes, we don’t want this, because it prevents Kubernetes from accurately managing the memory resources of our containers.
We disable swap in two steps. First, we will turn off the currently active swap space by executing the swapoff command.
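Running swapoff with the -a flag turns off every active swap device and swap file:

```bash
sudo swapoff -a
```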
Next, we want to comment out the swap entry in the /etc/fstab file. This prevents swap from being configured again at boot.
I’ve written a sed command that disables swap by inserting a # symbol before the swap setting. If this command does not work for you, you will need to manually edit the /etc/fstab file.
To learn more about sed and what it can do, check out this link: GNU Sed
In the console of our virtual machine, we can check the existing /etc/fstab configuration to verify that swap is enabled before making any changes.
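For example:

```bash
cat /etc/fstab
```

On a default Ubuntu Server install, the swap entry usually looks something like /swap.img none swap sw 0 0.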
Now we will run the sed command against our machines.
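Here is one way to write it; this pattern comments out any fstab line whose filesystem type is swap and keeps a backup copy of the original file (your exact fstab layout may differ, so double-check the result):

```bash
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```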
To confirm the change, we can run cat on the /etc/fstab file.
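One more look at the file:

```bash
cat /etc/fstab
```

The swap line should now begin with a # symbol.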
For our pod networking to function properly, we will need to load a couple of kernel modules on our nodes.
First, we need to ensure the modules will load at boot. To do this, we will create a configuration file in modules-load.d on all of our machines.
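The Kubernetes container runtime documentation calls for the overlay and br_netfilter modules, so those are what I am loading here; the k8s.conf file name is simply my own choice:

```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```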
We can run cat on the new file to confirm it was created.
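Assuming the same file name as above:

```bash
cat /etc/modules-load.d/k8s.conf
```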
Now, if we chose to reboot our machines, the modules would load when the virtual machines booted. If we want to load the modules now, we can run the modprobe command.
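One modprobe call per module:

```bash
sudo modprobe overlay
sudo modprobe br_netfilter
```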
Let’s confirm the modules are loaded with lsmod. If the modules have not loaded, they will not be listed in the command results.
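Filtering the output with grep keeps things readable:

```bash
lsmod | grep -E 'overlay|br_netfilter'
```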
For our pod networking to function correctly, we will need to make some changes to IP forwarding and iptables. To do so, we will be using sysctl, a command used to change kernel parameters while the system is running. To make the changes persist across reboots, we will place them in a conf file that sysctl reads.
You will need to run the following command on each machine that you want to participate in the cluster.
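These are the three parameters the Kubernetes documentation recommends; as before, the k8s.conf file name is my own choice. The sysctl --system call at the end applies the settings immediately, without a reboot:

```bash
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
```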
Now that we’ve made our system-level changes to support Kubernetes, we will need to install our prerequisites. The following packages are needed to support the download and installation of Kubernetes, Calico, and containerd.
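The exact package list can vary, but something along these lines covers the tools used in the rest of this post: HTTPS transport for apt, curl and gnupg for fetching and handling signing keys, and software-properties-common for add-apt-repository:

```bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release software-properties-common
```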
Installing Containerd
By design, Kubernetes does not install with any particular container runtime engine defined. For my lab, I will be installing Containerd. Containerd is a popular container runtime from Docker that I have experience with. If you are following along with this tutorial, my instructions will apply specifically to implementing Containerd. For more detailed information on Containerd, please visit the following link: Containerd - Getting Started
On each machine, we will need to download Docker’s official signing key to begin our installation process.
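I am assuming the classic apt-key workflow here, which still works on Ubuntu 20.04; newer releases prefer the keyring method described in Docker's documentation:

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```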
Next, we will set up the Containerd stable branch repository. This repository is where we will download our packages.
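With add-apt-repository, the stable channel for the current Ubuntu release can be added in one line:

```bash
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```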
On each machine, we will update our cache and install the containerd.io package.
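Then pull in containerd itself:

```bash
sudo apt-get update
sudo apt-get install -y containerd.io
```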
Now that Containerd has been installed, we can begin the configuration work.
Configuring Containerd
The containerd software does not come loaded with a default configuration. It does, however, provide a method for generating a default configuration file that we can use.
On our machines, we will create the configuration file by running the containerd config default command.
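The /etc/containerd directory may not exist yet, so create it first, then write the generated defaults into config.toml:

```bash
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```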
The majority of settings in the configuration file will remain at their defaults; however, we will need to enable the SystemdCgroup setting so that containerd uses the systemd cgroup driver, allowing Kubernetes to integrate properly with containerd.
On each machine, you can modify this value manually, or you can use the sed command I’ve provided below.
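This assumes the file still contains the single SystemdCgroup = false entry produced by containerd config default:

```bash
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```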
We can quickly confirm that the change has been successful by running a cat command on the file.
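The full file is long, so piping it through grep gets straight to the line we care about:

```bash
cat /etc/containerd/config.toml | grep SystemdCgroup
```

The output should read SystemdCgroup = true.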
Note! If the SystemdCgroup property has not changed with the sed command, you will need to modify it manually before restarting the containerd service. If you decide to use a different container runtime, configure the appropriate cgroup settings. Check the documentation here for your specific requirements: Kubernetes - Container Runtimes
To finalize our containerd changes, we will restart the containerd service.
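A restart picks up the new configuration, and checking the service status afterward doesn't hurt:

```bash
sudo systemctl restart containerd
sudo systemctl status containerd
```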
Now we are ready to continue on with the remaining prerequisites for our virtual machines.
Housekeeping
To support our upcoming installation of the Kubernetes components, we will prepare the repository. First, let’s add the Google public signing key for the Kubernetes repository.
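I am assuming the Google-hosted apt repository that was current when this series was written, added with the same apt-key approach used for Docker's key:

```bash
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```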
Let’s verify the new keys were added by running apt-key list. We should see the two new keys, along with any other keys you’ve added.
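A quick listing:

```bash
sudo apt-key list
```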
Now we can add the official Kubernetes repository as an available location to pull packages from.
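The kubernetes-xenial component name looks odd on Ubuntu 20.04, but at the time it was the single repository Kubernetes published for all Debian-based distributions:

```bash
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```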
Let’s verify it was added correctly by running cat against the new kubernetes.list file.
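Assuming the file path used above:

```bash
cat /etc/apt/sources.list.d/kubernetes.list
```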
Make sure to complete all of these steps on all of the nodes in the cluster.
Closing Thoughts
This was Part 1 in the series on deploying Kubernetes with Kubeadm and Calico. The goal of this post was to get all of the prerequisites out of the way before we move on to the controller- and worker-specific steps.
Stay tuned for Part 2!