Getting Started with Kubernetes — A Practical Guide

Kubernetes can feel overwhelming at first. Here's a no-nonsense guide based on real production experience running EKS clusters at scale.

Why Kubernetes?

After running containerized workloads across multiple production environments — from self-managed clusters to AWS EKS — I can confidently say Kubernetes is the right tool for serious container orchestration. It automates deployment, scaling, and operations so you can focus on delivering value, not babysitting infrastructure.

Prerequisites

  • Docker installed locally
  • kubectl CLI
  • A local cluster via minikube or kind
  • Basic understanding of containers

Your First Deployment

kubectl create deployment hello-app --image=nginx
kubectl expose deployment hello-app --type=NodePort --port=80
kubectl get services

This creates a simple nginx deployment and exposes it via a NodePort service.
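In practice you'd keep this as a declarative manifest in version control rather than relying on imperative commands. A minimal equivalent (names match the commands above) that you'd apply with kubectl apply -f hello-app.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: NodePort
  selector:
    app: hello-app
  ports:
    - port: 80
```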

Key Concepts to Understand

Pods

The smallest deployable unit. A pod wraps one or more containers that share network and storage.
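For illustration, here is the smallest useful manifest (the pod and container names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
```

You'd rarely create bare pods like this in production; controllers such as Deployments manage them for you.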

Deployments

Manage the desired state of your pods — rolling updates, rollbacks, and replica scaling.
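Rolling-update behaviour is tunable per Deployment. A sketch of the relevant stanza (the values here are illustrative, not recommendations):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most one pod down during a rollout
      maxSurge: 1        # at most one extra pod above the replica count
```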

Services

Abstract network access to a set of pods. Types include ClusterIP, NodePort, and LoadBalancer.
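A service matches pods by label selector. A minimal ClusterIP example (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP      # the default; internal-only virtual IP
  selector:
    app: web           # routes traffic to pods carrying this label
  ports:
    - port: 80         # port the service listens on
      targetPort: 8080 # port the container listens on
```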

Cost Optimization with Karpenter

One thing I’ve learned running EKS in production: node provisioning matters for cost. Karpenter has been a game-changer for us. It provisions right-sized nodes for pending pods just in time, which cuts down dramatically on over-provisioning.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: 1000
  ttlSecondsAfterEmpty: 30
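Note that the v1alpha5 Provisioner API shown above has since been removed; current Karpenter releases use a v1 NodePool instead. A rough translation of the same config (the EC2NodeClass reference is a placeholder you'd define separately):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default   # placeholder; defined in a separate manifest
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmpty  # replaces ttlSecondsAfterEmpty
    consolidateAfter: 30s
```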

Event-Driven Scaling with KEDA

For workloads that need to scale based on external metrics (queues, topics, etc.), KEDA extends Kubernetes autoscaling beyond CPU/memory:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject
spec:
  scaleTargetRef:
    name: my-deployment
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.eu-west-1.amazonaws.com/123/my-queue
        queueLength: "5"
        awsRegion: eu-west-1

Helm — Package Management for Kubernetes

Don’t manage raw YAML manifests at scale. Use Helm. (The old "stable" chart repository was deprecated in 2020, so add charts from their project repositories instead. For example, the ingress-nginx controller:)

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx

Next Steps

Once you’re comfortable with the basics, explore:

  • Karpenter for intelligent node provisioning
  • KEDA for event-driven autoscaling
  • ArgoCD for GitOps-based deployments
  • Ingress controllers for HTTP routing

Kubernetes has a steep learning curve, but the investment pays off quickly when managing production workloads at scale.
