
Emmeline

I created a Kubernetes cluster on AWS EC2 instances

Hello everyone,

Today I want to share how I created a two-node Kubernetes (k8s) cluster with kubeadm on AWS EC2 instances.
In this process, I was greatly inspired and helped by this blog article: Setup Your K8s Cluster with AWS EC2 by Milinda Nandasena.

I followed these steps:

  1. Create an instance template to install all requirements at launch
  2. Launch two instances
  3. Make a minor - but important - change in the containerd config file
  4. Run kubeadm init command on the controlplane node
  5. Run kubeadm join command on the worker node
  6. Install the Flannel network plugin on both nodes

1. Create an instance template to install all requirements at launch

  • Application and OS Images (Amazon Machine Image)
    Ubuntu 22.04

  • Instance Type
    t2.medium

I found that free-tier eligible instances such as t2.micro don't have enough CPU to serve as a Kubernetes node: I easily reached 90% CPU utilization with only a few pods running.
This is why I recommend the t2.medium instance type.

  • Key pair (login)
    I recommend creating a new key pair that will be used to log into all the nodes.

  • Network settings
    Subnet: Don't include in launch template
    Security Group: Choose an existing security group, or create a new one, with the rules required for a kubeadm cluster (see the port list below).

  • Advanced details
    This is where I think you can save a lot of time and effort.
    You can add commands here that will be executed when the instance launches. We will install kubelet, kubectl, kubeadm and the container runtime containerd.
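
Before looking at the user data, a quick note on the security group rules mentioned above. As a rough guide, these are the ports the kubeadm documentation (and Flannel) expect to be open; adapt them to your own setup:

  • TCP 22: SSH access to the instances
  • TCP 6443: Kubernetes API server (controlplane)
  • TCP 2379-2380: etcd client and server API (controlplane)
  • TCP 10250: kubelet API (all nodes)
  • TCP 10259 and 10257: kube-scheduler and kube-controller-manager (controlplane)
  • TCP 30000-32767: NodePort Services (worker nodes)
  • UDP 8472: Flannel VXLAN traffic between the nodes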

Here is the user data file.
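As a rough guide, such a script can look like the sketch below. It follows the official kubeadm installation steps for Ubuntu 22.04; the Kubernetes version (v1.29) and the apt repository URL are assumptions to check against the current documentation, not necessarily the exact file I used.

#!/bin/bash
# Sketch only: check every step against the current kubeadm installation guide.

# Disable swap (kubelet will not start with swap enabled)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel modules and sysctl settings required for Kubernetes networking
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# Install containerd and generate its default configuration
apt-get update
apt-get install -y containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd

# Install kubelet, kubeadm and kubectl from the Kubernetes apt repository
# (v1.29 is only an example version)
apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl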

At this point you have completed 50% of the job :)

2. Launch two instances

Go to the menu EC2 > Instances > Launch instance from template.

[Screenshot: Launch instance from template]

Select your template kubernetes-node.
Then change the instance name in the Resource tags section, so that you end up with two EC2 instances: controlplane and node01.

Now that you have your two instances, you can SSH into them:
ssh -i <KEY_PAIR>.pem ubuntu@<PUBLIC_IP>

The exact command for your instance can also be found in the console, under Connect.

3. Make a minor - but important - change in the containerd config file

As described in the Kubernetes documentation, when installing containerd as the CRI runtime, it is important to configure the systemd cgroup driver.

In /etc/containerd/config.toml, change the value of SystemdCgroup from false to true.

[Screenshot: containerd configuration]
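
If you prefer to make this change from the command line, something like the following should work on both nodes (a sketch, assuming the config file was generated with containerd config default so the SystemdCgroup key is present):

# Flip the runc cgroup driver to systemd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd so the new setting is picked up
sudo systemctl restart containerd

The key sits in the runc runtime options section of the file ([plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]).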

4. Run kubeadm init command on the controlplane node

On the controlplane node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
This pod network CIDR matches the default expected by the Flannel plugin discussed later.

As explained in the output, you must copy the kubectl config file to $HOME/.kube/config with the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To prepare the worker node to join the cluster, execute the following on the controlplane node to generate a token and print the join command:

kubeadm token create --print-join-command

5. Run kubeadm join command on the worker node

Execute the kubeadm join command printed in the previous step on the worker node.
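
It has this general shape (the IP, token and hash below are placeholders, not real values):

sudo kubeadm join <CONTROLPLANE_PRIVATE_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>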

6. Install Flannel network add-on on both nodes

Kubernetes does not provide pod networking by itself, which is why you must install a networking add-on. There is a list available here.

I chose Flannel, and the necessary steps can be found here:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

At this point you should see all of your pods becoming Ready, and both nodes should reach the Ready state.
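
You can check this from the controlplane node (Flannel runs as a DaemonSet, so one flannel pod should appear for each node):

kubectl get nodes        # both nodes should report STATUS Ready
kubectl get pods -A      # kube-system and kube-flannel pods should be Running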

Thank you for reading.
Do not hesitate to reach out if you have any questions or advice.
