Spin up a Kubernetes cluster on AWS with Terraform

Gaurav Wadghule
4 min read · May 24, 2021

Hello Guys,

In this tutorial, we are going to create a Kubernetes (K8s) cluster on AWS using Terraform. We will build a two-node cluster with kubeadm: first we create two AWS EC2 instances of size t2.micro, then we prepare them so that one node can host the Kubernetes control plane and the other can serve as a worker node.

Prerequisites:

  1. AWS account
  2. Terraform (0.15.4)
  3. AWS CLI (2.0)
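Before writing any configuration, it may be worth confirming the tools are on your PATH. A minimal sketch (kubectl is only needed later, on the nodes themselves):

```shell
#!/bin/sh
# Check that the prerequisite CLIs are installed; prints one status line each.
status=$(for tool in terraform aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done)
echo "$status"
```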

Create the following Terraform files.

  1. settings.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  backend "s3" {
    bucket = "terrraformbackend"
    key    = "ec2-k8"
    region = "us-east-1"
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

Since we are going to use AWS resources, this file defines the required_providers block with an appropriate provider version. After that it defines the backend for this Terraform configuration: we use an S3 bucket to store the state files. See the Terraform documentation on remote backends for more detail.

2. ec2.tf

resource "aws_instance" "master-k8" {
  ami               = "ami-013f17f36f8b1fefb"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1a"
  key_name          = "coda-k8-ec2"

  # The security group lives in k8_vpc, so reference it by ID. Note that the
  # instance must also be launched in a subnet of that VPC (subnet_id).
  vpc_security_group_ids = [aws_security_group.k8_sg.id]

  # The outer heredoc delimiter must differ from the inner EOF used by cat
  # below, otherwise the inner EOF line terminates user_data early.
  user_data = <<-USERDATA
    #!/bin/bash
    # user_data already runs as root, so no sudo is needed.
    swapoff -a
    apt-get update && apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    apt-get update
    apt-get install -y kubelet kubeadm kubectl
    apt-mark hold kubelet kubeadm kubectl

    # install the Docker runtime
    apt-get install -y \
      apt-transport-https \
      ca-certificates \
      curl \
      gnupg-agent \
      software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) \
      stable"
    apt-get update
    apt-get install -y docker-ce docker-ce-cli containerd.io
  USERDATA

  tags = {
    created_by = "wadghulegaurav@gmail.com"
  }
}


resource "aws_instance" "worker1-k8" {
  ami               = "ami-013f17f36f8b1fefb"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1b"
  key_name          = "coda-k8-ec2"

  # The security group lives in k8_vpc, so reference it by ID. Note that the
  # instance must also be launched in a subnet of that VPC (subnet_id).
  vpc_security_group_ids = [aws_security_group.k8_sg.id]

  # The outer heredoc delimiter must differ from the inner EOF used by cat
  # below, otherwise the inner EOF line terminates user_data early.
  user_data = <<-USERDATA
    #!/bin/bash
    # user_data already runs as root, so no sudo is needed.
    swapoff -a
    apt-get update && apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    apt-get update
    apt-get install -y kubelet kubeadm kubectl
    apt-mark hold kubelet kubeadm kubectl

    # install the Docker runtime
    apt-get install -y \
      apt-transport-https \
      ca-certificates \
      curl \
      gnupg-agent \
      software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) \
      stable"
    apt-get update
    apt-get install -y docker-ce docker-ce-cli containerd.io
  USERDATA

  tags = {
    created_by = "wadghulegaurav@gmail.com"
  }
}

Here we create two EC2 instances of size t2.micro. We use user_data to run the initial commands that install kubectl, kubelet, kubeadm, and Docker on both virtual machines.
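One subtlety in the user_data script above: it contains its own cat <<EOF block, so the Terraform heredoc wrapping it must use a different delimiter, or the inner EOF line will terminate user_data early. A small shell sketch of the same nesting pattern:

```shell
#!/bin/sh
# Nested heredocs: the outer delimiter (USERDATA) must differ from the
# inner one (EOF), otherwise the first bare EOF line ends the outer block.
script=$(cat <<'USERDATA'
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
USERDATA
)
# All three inner lines survive, including the inner EOF terminator.
echo "$script"
```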

3. securityGroup.tf

resource "aws_security_group" "k8_sg" {
  name        = "k8_sg"
  description = "Allow Kubernetes and TLS inbound traffic"
  vpc_id      = aws_vpc.k8_vpc.id

  # SSH access so we can log in and run kubeadm on the nodes.
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.k8_vpc.cidr_block]
  }

  ingress {
    description = "KubeAPI port from VPC"
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.k8_vpc.cidr_block]
  }

  # Weave Net needs TCP 6783 and UDP 6783-6784 between nodes.
  ingress {
    description = "Weave Net control from VPC"
    from_port   = 6783
    to_port     = 6783
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.k8_vpc.cidr_block]
  }

  ingress {
    description = "Weave Net data from VPC"
    from_port   = 6783
    to_port     = 6784
    protocol    = "udp"
    cidr_blocks = [aws_vpc.k8_vpc.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "k8_sg"
  }
}

4. vpc.tf

resource "aws_vpc" "k8_vpc" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name = "k8_vpc"
  }
}
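A bare VPC has no subnets or internet route, so on its own the instances cannot be placed in it or reached over SSH. Below is a hedged sketch of the missing networking pieces; the resource names (k8_subnet, k8_igw, k8_rt, k8_rta) are my own, not from the original setup, and each aws_instance would additionally need subnet_id set (the worker in us-east-1b would need a second subnet in that AZ):

```hcl
# Public subnet in us-east-1a for the master instance.
resource "aws_subnet" "k8_subnet" {
  vpc_id                  = aws_vpc.k8_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

# Internet gateway plus a default route so the nodes can pull packages.
resource "aws_internet_gateway" "k8_igw" {
  vpc_id = aws_vpc.k8_vpc.id
}

resource "aws_route_table" "k8_rt" {
  vpc_id = aws_vpc.k8_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.k8_igw.id
  }
}

resource "aws_route_table_association" "k8_rta" {
  subnet_id      = aws_subnet.k8_subnet.id
  route_table_id = aws_route_table.k8_rt.id
}
```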

Now SSH into the master node and initialize the cluster with the following command, which sets up the control plane. The pod network CIDR must not overlap the VPC's 10.0.0.0/16 block, so we use Weave Net's default allocation range rather than a range inside the VPC:

kubeadm init --pod-network-cidr=10.32.0.0/12
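The choice of pod CIDR matters: if it overlaps the VPC range, node and pod routes collide. A quick pure-shell overlap check (the ip_to_int and overlaps helpers are my own illustration, not part of any tool):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  oldIFS=$IFS; IFS=.
  set -- $1
  IFS=$oldIFS
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# overlaps NET1/PREFIX NET2/PREFIX -> prints "yes" or "no".
overlaps() {
  n1=${1%/*}; p1=${1#*/}; n2=${2%/*}; p2=${2#*/}
  s1=$(ip_to_int "$n1"); s2=$(ip_to_int "$n2")
  e1=$(( s1 + (1 << (32 - p1)) - 1 ))
  e2=$(( s2 + (1 << (32 - p2)) - 1 ))
  if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then echo yes; else echo no; fi
}

echo "10.0.0.0/8  vs VPC 10.0.0.0/16: $(overlaps 10.0.0.0/8 10.0.0.0/16)"    # yes
echo "10.32.0.0/12 vs VPC 10.0.0.0/16: $(overlaps 10.32.0.0/12 10.0.0.0/16)" # no
```

Weave Net allocates pod IPs from 10.32.0.0/12 by default, which does not collide with this VPC's 10.0.0.0/16.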

After successful initialization of the master node, kubeadm prints a kubeadm join command for adding worker nodes to the cluster. Run that command on the worker node.

After this, we need to install a pod networking add-on in our Kubernetes cluster, as kubeadm does not install one for us. For this tutorial we are going to use Weave Net. Run the command below to install it:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

The Kubernetes cluster is now ready to use. You can confirm both nodes have joined with kubectl get nodes on the master.
