
Now that your infrastructure is live and Kubernetes is running in a secure VPC, it’s time to transform your EKS cluster into a real internal developer platform.

This phase is all about installing and configuring essential platform services — the kind that give your developers smooth deployments, visibility into their workloads, and a consistent operating model.

In this post, we’ll walk through the installation of:

  • A GitOps engine (Argo CD)
  • An ingress controller (AWS Load Balancer Controller)
  • Certificate management (cert-manager)
  • Observability stack (Prometheus, Grafana, Loki)
  • Secrets integration

All components will be installed declaratively using Helm, and optionally managed via GitOps.

Pre-requisites

You will need these!

Helm

Before proceeding, ensure Helm is installed on your local machine.

Verify the installation:

helm version

I would recommend using a package manager such as Homebrew (macOS/Linux) or Chocolatey (Windows) to install Helm:

  • Homebrew:

    brew install helm
    
  • Chocolatey:

    choco install kubernetes-helm
    

For more details, refer to the Helm installation guide.

Quick Helm Usage Guide

From this point on, we will be making use of the helm repo add command, which adds a new Helm chart repository. A chart repository is a location where packaged charts can be stored and shared.

We will also use the helm install command, which deploys a Helm chart into your Kubernetes cluster, creating the resources defined by the chart.

Finally, the helm upgrade command upgrades a release to a new version of a chart, or applies changes to its configuration. When used with --install, it will install the release if it does not already exist.
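
As a purely illustrative example (the Bitnami repository and nginx chart below are stand-ins for whatever you actually want to deploy), the three commands fit together like this:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx --namespace demo --create-namespace
helm upgrade --install my-nginx bitnami/nginx --namespace demo --set replicaCount=2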

Using Terraform to deploy Helm charts

It is also possible to deploy Helm charts using Terraform.

The Helm Terraform provider needs to be initialised with the cluster details, which is not possible until the cluster has been created, so a second Terraform module is required for the Helm charts.

To use the Helm Terraform provider, configure the provider with the config file created by the aws eks update-kubeconfig command used in part 3 of the series.

provider "helm" {
  kubernetes = {
    config_path = "~/.kube/config"
  }
}

Step 1: Bootstrap GitOps with Argo CD

Argo CD lets you define applications in Git and sync them into the cluster. It’s the cornerstone of a self-service platform.

Install it with Helm:

helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd \
  --namespace argocd --create-namespace

Or, using Terraform:

resource "helm_release" "argo_cd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  create_namespace = true
  namespace        = "argocd"
}

Expose the Argo CD UI with an ingress (we’ll get to that shortly), and store Git credentials securely using Kubernetes secrets or IRSA.

💡 Tip: You can even manage Argo CD with itself by applying its own Application CRs via Git, as sketched below.
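
For instance, an Application that points Argo CD at its own configuration might look like the sketch below (the repository URL and path are placeholders for your own Git repo):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-config.git
    targetRevision: main
    path: argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true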

Step 2: Ingress Controller (AWS ALB)

To expose services like Argo CD, we’ll use the AWS Load Balancer Controller, which integrates with AWS ALB and uses Kubernetes ingress resources.

First we will need to create a service account using Terraform:

provider "kubernetes" {
  config_path    = "~/.kube/config"
}

resource "kubernetes_service_account" "alb_controller" {
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"
    annotations = {
      "eks.amazonaws.com/role-arn" = data.aws_iam_role.eks-cluster.arn
    }
  }
}
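
The annotation above references a data.aws_iam_role.eks-cluster data source. As a minimal sketch, assuming the cluster role created earlier in the series is named eks-cluster-role, the lookup looks like this:

data "aws_iam_role" "eks-cluster" {
  name = "eks-cluster-role"
}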

Then we need to create and attach the required IAM policy to the EKS cluster role.

Download the policy JSON to the module directory:

curl -o aws_load_balancer_controller_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.13.3/docs/install/iam_policy.json

Define the policy and the role policy attachment in Terraform:

resource "aws_iam_policy" "aws_load_balancer_controller" {
  name   = "AWSLoadBalancerController"
  policy = file("${path.module}/aws_load_balancer_controller_policy.json")
}

resource "aws_iam_role_policy_attachment" "aws_load_balancer_controller" {
  role       = "eks-cluster-role"
  policy_arn = aws_iam_policy.aws_load_balancer_controller.arn
}

Install the controller to the cluster:

helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-container-platform \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

Or, using Terraform:

resource "helm_release" "aws_load_balancer_controller" {
  name             = "aws-load-balancer-controller"
  repository       = "https://aws.github.io/eks-charts"
  chart            = "aws-load-balancer-controller"
  namespace        = "kube-system"
  create_namespace = false

  set = [
    {
      name  = "clusterName"
      value = "my-container-platform"
    },
    {
      name  = "serviceAccount.create"
      value = "false"
    },
    {
      name  = "serviceAccount.name"
      value = "aws-load-balancer-controller"
    }
  ]
}

Don’t forget: the controller needs an IAM role with the correct policies, configured via IRSA.

Now your platform can expose HTTPS endpoints for workloads — and Argo CD’s UI — using annotated ingress manifests.
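
As an illustration, an ingress for the Argo CD UI might look something like this (the hostname is a placeholder, and TLS certificate discovery and Argo CD’s own TLS settings are omitted for brevity):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: argocd.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80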

Step 3: SSL/TLS with cert-manager

You’ll want HTTPS by default for all services. cert-manager automates certificate issuance and renewal, supporting Let’s Encrypt and internal CAs.

Install with:

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true

Or, with Terraform:

resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set = [
    {
      name  = "crds.enabled"
      value = "true"
    }
  ]
}

Then define a ClusterIssuer with Let’s Encrypt or a corporate CA:

cert-manager/cluster-issuer-letsencrypt.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@example.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: alb

Apply it to your cluster with:

kubectl apply -f cert-manager/cluster-issuer-letsencrypt.yaml

You should see a confirmation output:

clusterissuer.cert-manager.io/letsencrypt created
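
With the issuer in place, certificates can be requested declaratively. A minimal sketch of a Certificate resource (the DNS name and secret name are placeholders):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-tls
  namespace: default
spec:
  secretName: my-app-tls
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  dnsNames:
    - app.example.com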

Step 4: Observability (Prometheus, Grafana, Loki)

A modern platform isn’t complete without visibility.

Use the kube-prometheus-stack Helm chart for an integrated setup:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

This gives you:

  • Prometheus for metrics
  • Grafana for dashboards
  • Alertmanager for incident handling

Add Loki for logs (you can scrape container logs using Promtail or Fluent Bit):

helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack --namespace monitoring

Dashboards and alerting rules can also be version-controlled: true platform hygiene.
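
For example, the chart’s Grafana sidecar can load dashboards from ConfigMaps labelled grafana_dashboard, so a dashboard can live in Git alongside everything else (the JSON body below is just a stub):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  my-service-dashboard.json: |
    { "title": "My Service", "panels": [] }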

Step 5: Secrets Integration

Avoid baking secrets into workloads or Helm charts. Instead, integrate with cloud-native tools:

Install External Secrets:

helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets --create-namespace

This lets you define Kubernetes ExternalSecret resources linked to secrets in AWS Secrets Manager:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-api-keys
spec:
  secretStoreRef:
    name: aws-secrets
    kind: ClusterSecretStore
  target:
    name: api-keys
  data:
    - secretKey: key
      remoteRef:
        key: prod/my-service/api-key
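
The ExternalSecret above refers to a ClusterSecretStore named aws-secrets, which tells the operator how to reach AWS Secrets Manager. A minimal sketch, assuming the operator’s service account has been granted access via IRSA (the region and service account details are placeholders):

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets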

What You Have Now

With these building blocks, your cluster is no longer just compute — it’s a platform:

  • Developers push to Git, Argo CD deploys
  • Services are exposed securely via HTTPS
  • Metrics, logs, and alerts are live
  • Secrets are secure and auditable

And everything is modular, auditable, and replicable, ready to grow with your team.

Coming Up in Part 5

Next, we’ll focus on building the developer experience layer:

  • Self-service namespaces and pipelines
  • Templates for common workloads
  • Internal developer portal options

Part 5: Crafting the Developer Experience Layer
