Deploying Kubernetes Cluster Using Terraform

Christopher Quiles
Jan 22, 2021

Using Terraform with Kubernetes to schedule and expose an NGINX deployment on a Kubernetes cluster.

Kubernetes (K8s) is an open-source workload scheduler with a focus on containerized applications. As Red Hat puts it: “In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.”

There are many advantages to using Terraform to provision a Kubernetes cluster:

  • It lets you maintain Kubernetes cluster definitions in code.
  • You can modify cluster configurations through variables.
  • You can modularize the infrastructure in code.
  • The biggest benefit of using Terraform to maintain Kubernetes resources is integration into the Terraform plan/apply lifecycle, so you can review planned changes before applying them. With kubectl alone, purging resources from the cluster is not trivial without manual intervention; Terraform handles this reliably.

When setting up a Kubernetes workload, it is possible to use Terraform to directly schedule the pods. After Terraform provisions the pod, Kubernetes is responsible for managing the containers within.

Getting Started:

If you don’t have a Kubernetes cluster, you can use kind to provision a local Kubernetes cluster or provision one on a cloud provider.

Use the package manager homebrew to install kind.

$ brew install kind
$ curl https://raw.githubusercontent.com/hashicorp/learn-terraform-deploy-nginx-kubernetes-provider/master/kind-config.yaml --output kind-config.yaml

The curl command above downloads and saves the kind configuration into a file named kind-config.yaml. This configuration adds extra port mappings, so you can access the NGINX service locally later.

vim kind-config.yaml
ERROR: problems with the Docker daemon.

However, for some reason my Docker daemon wasn’t running. After some research, it turns out that getting Docker to run on macOS can be a complicated process. Docker Machine is one workaround, but I couldn’t get VirtualBox installed, so long story short, we’ll skip that mess and work from a virtual environment moving forward.

Process for installing and activating a virtual environment (venv) on macOS.

If you haven’t created the cluster yet, create it with kind create cluster --name terraform-learn --config kind-config.yaml. Then verify that your cluster exists by listing your kind clusters.

kind get clusters

Then, point kubectl to interact with this cluster. The context is kind- followed by the name of your cluster.

kubectl cluster-info --context kind-terraform-learn

Configure the provider

Before you can schedule any Kubernetes services using Terraform, you need to configure the Terraform Kubernetes provider. In this next part we’ll take advantage of kubectl. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

$ mkdir learn-terraform-deploy-nginx-kubernetes
$ cd learn-terraform-deploy-nginx-kubernetes
$ kubectl config current-context

Then, create a new file named kubernetes.tf and add the following configuration to it.

vim kubernetes.tf
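The contents of the file were shown in a screenshot that isn’t reproduced here. Based on the HashiCorp learn-terraform-deploy-nginx-kubernetes tutorial this walkthrough follows, a minimal provider configuration looks roughly like this (the config_path assumes kubectl’s default kubeconfig location — adjust it if yours differs):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  # Point the provider at the same kubeconfig kubectl uses.
  # Assumes the default location on macOS/Linux.
  config_path = "~/.kube/config"
}
```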

Verify kubectl's current-context is pointing to your Kubernetes cluster. If you're running kind, your current-context should be kind-terraform-learn.

If you don’t see kind-terraform-learn, switch to it by running $ kubectl config use-context kind-terraform-learn

Run terraform init to download the latest provider version and initialize your Terraform workspace.

$ terraform init

Schedule a deployment:

Add the following to your kubernetes.tf file. This Terraform configuration will schedule a NGINX deployment with two replicas on your Kubernetes cluster, internally exposing port 80 (HTTP).

Here is the full kubernetes.tf file from my terminal
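The terminal screenshot didn’t survive, but a sketch of the deployment resource, assuming the resource and label names used in HashiCorp’s tutorial (the image tag is illustrative — any recent NGINX tag works):

```hcl
resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "scalable-nginx-example"
    labels = {
      App = "ScalableNginxExample"
    }
  }

  spec {
    # Two replicas, as described above.
    replicas = 2

    selector {
      match_labels = {
        App = "ScalableNginxExample"
      }
    }

    template {
      metadata {
        labels = {
          App = "ScalableNginxExample"
        }
      }

      spec {
        container {
          image = "nginx:1.21.6"
          name  = "example"

          # Internally expose port 80 (HTTP).
          port {
            container_port = 80
          }
        }
      }
    }
  }
}
```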

Apply the configuration to schedule the NGINX deployment.

$ terraform apply

Once the apply is complete, verify the NGINX deployment is running.

$ kubectl get deployments

Schedule a Service

Since our Kubernetes cluster is hosted locally on kind, we will expose the NGINX instance via NodePort to access the instance. This exposes the service on each node’s IP at a static port, allowing you to access the service from outside the cluster at <NodeIP>:<NodePort>.

Add the following configuration to your kubernetes.tf file. This will expose the NGINX instance at node_port 30201.

Here is the full kubernetes.tf configuration file.
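Again, the screenshot of the file isn’t reproduced here. A sketch of the service resource, assuming resource and label names consistent with the deployment from HashiCorp’s tutorial:

```hcl
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }

  spec {
    # Select the pods created by the deployment's template labels.
    selector = {
      App = kubernetes_deployment.nginx.spec.0.template.0.metadata.0.labels.App
    }

    port {
      node_port   = 30201
      port        = 80
      target_port = 80
    }

    # NodePort exposes the service on each node's IP at a static port.
    type = "NodePort"
  }
}
```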

Apply the configuration, and once the apply is complete, verify the service is running.

$ terraform apply
$ kubectl get services

You can access the NGINX instance by navigating to the NodePort at http://localhost:30201/.


Scale the deployment:

You can scale your deployment by increasing the replicas field in your configuration. Change the number of replicas in your Kubernetes deployment from 2 to 4.
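Assuming the replicas field lives at the top of the deployment’s spec block, the change is a one-line edit:

```hcl
  spec {
    replicas = 4  # was 2

    # ... rest of the spec is unchanged
  }
```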

Apply the change to scale your deployment.

$ terraform apply
$ kubectl get deployments

Clean up your workspace

Running terraform destroy will de-provision the NGINX deployment and service you created.

$ terraform destroy

Thanks for checking this out, hopefully this helped you!

Contact me:

LinkedIn → https://www.linkedin.com/in/quiwest/

email → quileswest@gmail.com
