Ahmed AL-Haffar
2022-06-29 | 12 min read

Provisioning a production-ready Amazon EKS Fargate cluster using Terraform

What is Amazon EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized application workloads and helps standardize operations across your environments. In this article, we are going to demonstrate the provisioning of an EKS environment using IaC (Terraform).

This blog post is part of a series of articles demonstrating the following:

  • Provisioning and deploying Fargate EKS via Terraform
  • Managing AWS auth config to control access to K8S cluster via AWS IAM
  • Implementing a CI/CD process using AWS CodeBuild and CodePipeline

Environment requirements

Tool    | Version | Description
awscli  | 2       | AWS command line tool
helm    | 3.0.x   | Kubernetes packaging
kubectl | >=1.21  | Kubernetes command line
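On the Terraform side, it is also good practice to pin the Terraform and provider versions so the whole team provisions with the same toolchain. A minimal sketch (the version constraints below are illustrative, not the exact ones used in the repository):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    # AWS provider for VPC, EKS, IAM, ACM and KMS resources
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
    # tls provider is used later for the OIDC thumbprint data source
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.0"
    }
  }
}
```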

Helm Kubernetes packages

Chart Name | Namespace | Chart Version | Application Version | Docker Version

Git repositories

  • docker-nginx-sample : A Git repo hosting the K8S YAML config files, provisioning the Nginx application deployment, ALB, Nginx service and Nginx Ingress resources along with the required namespaces and service accounts.
  • terraform-aws-eks : A Git repo hosting the Terraform config files provisioning AWS services such as VPC, ACM, EKS, and KMS, and the CI/CD process using CodeBuild and CodePipeline.

EKS Architecture

Let's get the party started

After the long introduction and overview, now is the time to get our hands dirty and start our deployment process. For your convenience, we have hosted all the files used in this article on GitHub; there are two Git repositories, as shown above, hosting the infrastructure Terraform files and the K8S config YAML files.

AWS Resource Creation

The Terraform Git repository creates the resources described below to host the EKS Fargate cluster.

The below module will create the required VPC and all its components based on AWS best practices. The configuration for this design includes a virtual private cloud (VPC) with a public subnet and a private subnet in each availability zone of the selected region, along with NAT Gateways, an Internet Gateway, and custom route tables. All the backend/K8S resources will be created in the private subnets spanning multiple AZs for better scalability. For more information about how to customize this VPC module, please check the README.

Note: we are tagging our public subnets with special Tag Keys required by the AWS ALB ingress controller. For more information, please visit the AWS Documentation for ALB Ingress requirements

module "vpc" {
  source                  = "github.com/obytes/terraform-aws-vpc.git?ref=v1.0.6"
  environment             = var.environment
  region                  = var.region
  project_name            = var.project_name
  cidr_block              = var.cidr_block
  enable_dns_hostnames    = var.enable_dns_hostnames
  enable_nat_gateway      = var.enable_nat_gateway
  enable_internet_gateway = var.enable_internet_gateway
  create_public_subnets   = var.create_public_subnets
  single_nat_gateway      = var.single_nat_gateway
  map_public_ip_on_lunch  = true
  additional_public_subnet_tags = {
    "kubernetes.io/cluster/${join("-", [local.prefix, "backend"])}" = "shared"
    "kubernetes.io/role/elb"                                        = 1
  }
  additional_private_subnet_tags = {
    # Tags required by the ALB ingress controller for internal load balancers
    "kubernetes.io/cluster/${join("-", [local.prefix, "backend"])}" = "shared"
    "kubernetes.io/role/internal-elb"                               = 1
  }
}


An AWS Certificate Manager (ACM) certificate is used by the ALB ingress controller to allow our Nginx application to listen on port 443/HTTPS.

resource "aws_acm_certificate" "_" {
  count                     = var.create_acm_certificate ? 1 : 0
  domain_name               = var.domain
  subject_alternative_names = [join(".", ["*", var.domain])]
  tags                      = merge(local.common_tags, tomap({ DomainName = var.domain, Name = local.prefix }))
  validation_method         = "DNS"
}


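With DNS validation, the certificate only becomes usable once the validation records exist in your DNS zone. A hedged sketch of how the validation could be wired up with Route53 (the aws_route53_zone data source and its configuration are assumptions, not from the repository):

```hcl
# Assumption: a Route53 hosted zone for var.domain already exists
data "aws_route53_zone" "this" {
  name = var.domain
}

# Create one validation CNAME record per domain on the certificate
resource "aws_route53_record" "acm_validation" {
  for_each = {
    for dvo in aws_acm_certificate._[0].domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = data.aws_route53_zone.this.zone_id
  name    = each.value.name
  type    = each.value.type
  ttl     = 60
  records = [each.value.record]
}

# Wait until ACM sees the records and issues the certificate
resource "aws_acm_certificate_validation" "_" {
  certificate_arn         = aws_acm_certificate._[0].arn
  validation_record_fqdns = [for r in aws_route53_record.acm_validation : r.fqdn]
}
```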
There are a lot of Terraform EKS modules out there, especially the official terraform-aws-eks, but since this is my first AWS EKS project, I decided to create a module from scratch that fits our case study, where all the pods are hosted on Fargate. I would also like to mention that you can take advantage of eksctl.io, a simple CLI tool, written in Go and using CloudFormation APIs, for creating and managing clusters on EKS, Amazon's managed Kubernetes service.

IAM Roles and policies

The Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service. Before you can create Amazon EKS clusters, you must create an IAM role with the following IAM policies:

  • AmazonEKSClusterPolicy: This policy provides Kubernetes the permissions it requires to manage resources on your behalf. Kubernetes requires EC2:CreateTags permissions to place identifying information on EC2 resources including but not limited to Instances, Security Groups, and Elastic Network Interfaces.
  • AmazonEKSVPCResourceController: This policy allows the role to manage network interfaces, their private IP addresses, and their attachment and detachment to and from network instances.
  • AmazonEKSFargatePodExecutionRolePolicy: Before you create a Fargate profile you must specify a pod execution role for the Amazon EKS components that run on the Fargate infrastructure using the profile. This role is added to the cluster's Kubernetes Role based access control (RBAC) for authorization. This allows the kubelet that's running on the Fargate infrastructure to register with your Amazon EKS cluster so that it can appear in your cluster as a node.
  • AWS ALB Controller IAM Policy and Roles: A set of policies required by the ALB ingress controller to create the needed target groups and security groups, list the ACM certificates, and provision the Elastic Load Balancing resources controlled by K8S Ingress resources. For more information, you can refer to the aws-load-balancer-controller installation docs

All of the above-mentioned roles and policies are created by the IAM Terraform file hosted in our repository.
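As a sketch of what that IAM Terraform file boils down to for the cluster role (resource and role names here are illustrative, not the ones used in the repository), the role trusts eks.amazonaws.com and gets the two managed policies attached:

```hcl
# Trust policy allowing the EKS service to assume the role
data "aws_iam_policy_document" "eks_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "cluster" {
  name               = "eks-cluster-role" # illustrative name
  assume_role_policy = data.aws_iam_policy_document.eks_assume_role.json
}

resource "aws_iam_role_policy_attachment" "cluster_policy" {
  role       = aws_iam_role.cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "vpc_resource_controller" {
  role       = aws_iam_role.cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
}
```

The Fargate pod execution role follows the same pattern, with eks-fargate-pods.amazonaws.com as the trusted service and AmazonEKSFargatePodExecutionRolePolicy attached.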

EKS Security Groups

Once the cluster is created, a default security group will be created and associated with it. However, as an additional security precaution, you can attach an additional security group to limit the open ports between the cluster and the nodes, as shown below. For more information, please check sec-group-reqs

resource "aws_security_group" "cluster" {
  name        = join("-", [local.prefix, "sg"])
  description = join(" ", [local.prefix, "node ECS service"])
  vpc_id      = element(module.vpc.vpc_id, 0)
  tags        = merge(local.common_tags, tomap({ "Name" = join("-", [local.prefix, "sg"]) }))
}

resource "aws_security_group_rule" "cluster" {
  for_each                 = local.cluster_security_group_rules
  security_group_id        = aws_security_group.cluster.id
  protocol                 = each.value.protocol
  from_port                = each.value.from_port
  to_port                  = each.value.to_port
  type                     = each.value.type
  self                     = try(each.value.self, null)
  ipv6_cidr_blocks         = try(each.value.ipv6_cidr_blocks, null)
  source_security_group_id = try(each.value.source_node_security_group, null)
  cidr_blocks              = try(each.value.cidr_blocks, null)
}
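The for_each above consumes a local map of rules. A plausible shape for local.cluster_security_group_rules (the rules shown are assumptions based on the EKS security group requirements, not copied from the repository):

```hcl
locals {
  cluster_security_group_rules = {
    # Allow pods in the VPC to reach the Kubernetes API server
    ingress_https = {
      type        = "ingress"
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = [var.cidr_block]
    }
    # Allow all outbound traffic from the cluster
    egress_all = {
      type        = "egress"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```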

EKS Cluster

Below is the cluster resource, where we set different parameters such as the cluster name, IAM role, VPC network settings, and Kubernetes control plane network settings such as the CIDR block and public access strategy.


  • Kubernetes control plane public accessibility doesn't interfere with pod public accessibility. You can enable private access to your Amazon EKS cluster's Kubernetes API server endpoint and limit, or completely disable, public access from the internet. As an enhancement, we will improve the EKS module with a feature to control the Amazon EKS cluster's Kubernetes API network access behavior. For more information, please check private-clusters
  • Pods that run on Fargate are only supported on private subnets (with NAT gateway access to AWS services, but not a direct route to an Internet Gateway), so your cluster's VPC must have private subnets available.
resource "aws_eks_cluster" "_" {
  name     = join("-", [local.prefix, "backend"])
  role_arn = aws_iam_role._.arn
  version  = "1.21"

  vpc_config {
    subnet_ids              = module.vpc.prv_subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = true
    security_group_ids      = [aws_security_group.cluster.id]
  }

  kubernetes_network_config {
    service_ipv4_cidr = var.kubernetes_cidr
  }

  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key._[0].arn
    }
  }

  enabled_cluster_log_types = ["api", "audit"]

  timeouts {
    create = lookup(var.cluster_timeouts, "create", null)
    update = lookup(var.cluster_timeouts, "update", null)
    delete = lookup(var.cluster_timeouts, "delete", null)
  }

  depends_on = [
    # IAM role policy attachments (elided in the original snippet)
  ]
}
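The encryption_config block above references a KMS key used to envelope-encrypt Kubernetes secrets. A minimal sketch of that key (the variable name, alias, and rotation settings are assumptions, not from the repository):

```hcl
resource "aws_kms_key" "_" {
  count                   = var.create_kms_key ? 1 : 0 # variable name is an assumption
  description             = "KMS key for EKS secrets envelope encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true
}

# Friendly alias so the key is easy to find in the KMS console
resource "aws_kms_alias" "_" {
  count         = var.create_kms_key ? 1 : 0
  name          = "alias/${join("-", [local.prefix, "eks"])}"
  target_key_id = aws_kms_key._[0].key_id
}
```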

EKS Addons

There are three EKS add-ons to consider:

  • CoreDNS: it's a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. When you launch an Amazon EKS cluster with at least one node, two replicas of the CoreDNS image are deployed by default, regardless of the number of nodes deployed in your cluster.
  • VPC CNI plugin for Kubernetes: Amazon EKS supports native VPC networking with the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. This plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • Kube-proxy: kube-proxy maintains network rules on each Amazon EC2 node. It enables network communication to your pods. kube-proxy is not deployed to Fargate nodes, so this is not needed in our deployment.
resource "aws_eks_addon" "this" {
  for_each = { for k, v in local.cluster_addons : k => v }

  cluster_name             = aws_eks_cluster._.name
  addon_name               = try(each.value.name, each.key)
  addon_version            = lookup(each.value, "addon_version", null)
  resolve_conflicts        = lookup(each.value, "resolve_conflicts", null)
  service_account_role_arn = lookup(each.value, "service_account_role_arn", null)

  lifecycle {
    ignore_changes = [
      # attributes elided in the original snippet
    ]
  }

  tags = local.common_tags
}
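A plausible shape for the local.cluster_addons map consumed by the for_each above (the keys match the official EKS add-on names; the exact options are assumptions, not copied from the repository):

```hcl
locals {
  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
    # kube-proxy is not needed on a Fargate-only cluster,
    # but would be keyed the same way if added
  }
}
```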

Note: By default, CoreDNS is configured to run on Amazon EC2 infrastructure on Amazon EKS clusters. If you want to run your pods only on Fargate, remove the compute-type annotation from the CoreDNS deployment:

kubectl patch deployment coredns \
    -n kube-system \
    --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

Delete and re-create any existing pods so that they are scheduled on Fargate:

kubectl rollout restart -n kube-system deployment coredns

IAM OIDC provider

Amazon EKS supports using OpenID Connect (OIDC) identity providers as a method to authenticate users to your cluster. OIDC identity providers can be used with, or as an alternative to, AWS Identity and Access Management (IAM). With OIDC authentication configured, you can create Kubernetes roles and clusterroles to assign permissions, and then bind them to identities using Kubernetes rolebindings and clusterrolebindings. We will take advantage of the tls_certificate Terraform data source to get information about the TLS certificate for the EKS cluster.

data "tls_certificate" "this" {
  url = aws_eks_cluster._.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "oidc_provider" {
  client_id_list  = ["sts.${data.aws_partition.current.dns_suffix}"]
  thumbprint_list = [data.tls_certificate.this.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster._.identity[0].oidc[0].issuer
  tags = merge(
    local.common_tags,
    { Name = "${aws_eks_cluster._.name}-irsa" },
  )
}
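With the OIDC provider in place, you can create IAM Roles for Service Accounts (IRSA). A hedged sketch of the trust policy for the ALB controller's service account (role and resource names are illustrative, not the ones used in the repository):

```hcl
data "aws_iam_policy_document" "alb_irsa_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.oidc_provider.arn]
    }
    # Only the aws-load-balancer-controller service account may assume this role
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }
  }
}

resource "aws_iam_role" "alb_controller" {
  name               = "alb-controller-irsa" # illustrative name
  assume_role_policy = data.aws_iam_policy_document.alb_irsa_assume.json
}
```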

EKS Fargate profiles

The Fargate profile declares which pods run on Fargate. This declaration is done through the profile's selectors. Each profile can have up to five selectors, each containing a namespace and optional labels, and you must define a namespace for every selector. The labels field consists of multiple optional key-value pairs. Pods that match a selector (by matching the selector's namespace and all of its labels) are scheduled on Fargate. During the creation of a Fargate profile, you must specify a pod execution role for the Amazon EKS components that run on the Fargate infrastructure using the profile. This role is added to the cluster's Kubernetes Role-Based Access Control (RBAC).

Below is the Terraform resource that creates the different Fargate profiles. As you will see in a later section, these pod selectors are used within the K8S YAML config files to declare which pod maps to which Fargate profile.

Profile Name                | Namespace       | Selector
CoreDNS                     | stg-eks-euwest1 | {k8s-app: kube-dns}
stg-eks-euwest1             | default         | {Application: stg-eks-euwest1-core}
stg-eks-euwest1-core        | stg-eks-euwest1 | {app.kubernetes.io/name: core}
stg-eks-euwest1-kube-system | kube-system     | {app.kubernetes.io/name: aws-load-balancer-controller, app.kubernetes.io/instance: aws-load-balancer-controller}
resource "aws_eks_fargate_profile" "_" {
  for_each               = { for k, v in local.fargate_profiles : k => v }
  cluster_name           = aws_eks_cluster._.name
  fargate_profile_name   = each.value.name
  pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
  subnet_ids             = module.vpc.prv_subnet_ids

  dynamic "selector" {
    for_each = each.value.selectors
    content {
      namespace = selector.value.namespace
      labels    = lookup(selector.value, "labels", {})
    }
  }
}
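The local.fargate_profiles map feeding the resource above could look like this, reconstructed from the table of profiles (the exact layout in the repository may differ):

```hcl
locals {
  fargate_profiles = {
    # Profile for the Nginx "core" application pods
    core = {
      name = join("-", [local.prefix, "core"])
      selectors = [
        {
          namespace = "stg-eks-euwest1"
          labels    = { "app.kubernetes.io/name" = "core" }
        }
      ]
    }
    # Profile for the ALB controller pods in kube-system
    kube_system = {
      name = join("-", [local.prefix, "kube-system"])
      selectors = [
        {
          namespace = "kube-system"
          labels    = { "app.kubernetes.io/name" = "aws-load-balancer-controller" }
        }
      ]
    }
  }
}
```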

Kubernetes config files

The Kubernetes Nginx YAML config files hosted on docker-nginx-sample are used to create the K8S environment hosting the Nginx website. It consists of:

  • namespace: creates the required namespaces used in Nginx deployment files
  • nginx-deployment: the Nginx application deployment file; here we define the required Fargate profile via the selector labels, as well as the CPU/memory resources, replica count, and Docker image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "core-deployment"
  namespace: "stg-eks-euwest1"
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: "core"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "core"
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: "core"
        resources:
          requests:
            memory: "2Gi"
            cpu: "1024m"
          limits:
            memory: "2Gi"
            cpu: "1024m"
        ports:
        - containerPort: 80
  • nginx-service: Nginx service to expose an application running on a set of pods as a network service.
apiVersion: v1
kind: Service
metadata:
  name: "core-service"
  namespace: "stg-eks-euwest1"
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 80 # exposes the Kubernetes service on this port within the cluster
    targetPort: 80 # the port on the pod that the service sends requests to
    protocol: TCP
    nodePort: 31254
  type: NodePort
  selector:
    app.kubernetes.io/name: "core"
  • nginx-ingress.yaml: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Ingress can give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.
  • alb-service-accounts: ALB Service accounts needed by the ALB ingress controller
  • helm-installation: A Bash script to install the AWS ingress controller using the Helm Kubernetes packaging system; here we pass the clusterName, vpcId, and replicaCount as parameters to Helm. We use the Helm installation method because we are deploying on Fargate. For more information, please check add-controller-to-cluster

# ALB Controller #

echo "Adding required helm packages ...."
helm repo add eks https://aws.github.io/eks-charts

echo "Install the TargetGroupBinding CRDs if upgrading the chart via helm upgrade."
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

echo "Install the helm chart if using IAM roles for service accounts. "
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
 -n kube-system \
 --set clusterName=stg-eks-euwest1-backend \
 --set serviceAccount.create=false \
 --set serviceAccount.name=aws-load-balancer-controller \
 --set vpcId=vpc-0e1e5b5db323d5764 \
 --set replicaCount=1