Introduction

Kubernetes (K8s) is an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications. Understanding Kubernetes is essential for system design interviews involving containerized applications and microservices.

This guide covers:

  • Kubernetes Fundamentals: Pods, services, deployments, and namespaces
  • Scaling: Horizontal and vertical scaling
  • Service Discovery: Services and ingress
  • Storage: Persistent volumes and storage classes
  • Best Practices: Resource management, security, and monitoring

What is Kubernetes?

Kubernetes is a container orchestration platform that:

  • Container Orchestration: Schedules and manages containers across a cluster of machines
  • Auto-Scaling: Scales applications up and down based on demand (e.g., CPU utilization)
  • Self-Healing: Restarts failed containers and replaces pods from failed nodes
  • Service Discovery: Gives pods stable DNS names and virtual IPs
  • Load Balancing: Distributes traffic across pod replicas

Key Concepts

Pod: Smallest deployable unit; one or more containers that share network and storage

Service: Stable network endpoint (virtual IP and DNS name) for a set of pods

Deployment: Declaratively manages replicated pods and rolling updates

Namespace: Virtual cluster for isolating resources within a physical cluster

Node: Worker machine (physical or virtual) in the cluster

Cluster: Group of nodes managed by a control plane
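
Once a cluster is available, these objects can be inspected with kubectl (a quick sketch; output depends on what is deployed):

kubectl get nodes                     # nodes in the cluster
kubectl get namespaces                # namespaces
kubectl get pods --all-namespaces     # pods across all namespaces
kubectl get deployments,services      # deployments and services in the current namespace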

Architecture

High-Level Architecture

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Developer  │     │   DevOps    │     │  Operator   │
│             │     │  Engineer   │     │             │
└──────┬──────┘     └──────┬──────┘     └──────┬──────┘
       │                   │                   │
       └───────────────────┴───────────────────┘
                           │
                           │ kubectl / API
                           │
                           ▼
              ┌─────────────────────────┐
              │   Kubernetes Cluster    │
              │                         │
              │   ┌───────────────┐     │
              │   │  Master Node  │     │
              │   │(Control Plane)│     │
              │   └───────┬───────┘     │
              │           │             │
              │   ┌───────┴───────┐     │
              │   │ Worker Nodes  │     │
              │   │    (Pods)     │     │
              │   └───────────────┘     │
              └─────────────────────────┘

Explanation:

  • Users: Developers, DevOps engineers, and operators who deploy and manage applications on Kubernetes.
  • Kubernetes Cluster: A collection of nodes (machines) that run containerized applications.
  • Master Node (Control Plane): Manages the cluster, schedules pods, maintains desired state, and handles API requests.
  • Worker Nodes: Machines that run application workloads. Each node runs pods (containers) and reports status to the master.

Core Architecture

┌──────────────────────────────────────────────────────────┐
│                    Kubernetes Cluster                     │
│                                                          │
│  ┌────────────────────────────────────────────────────┐  │
│  │  Master Node (Control Plane)                       │  │
│  │  (API Server, etcd, Scheduler, Controller Manager) │  │
│  └────────────────────────────────────────────────────┘  │
│                            │                             │
│  ┌────────────────────────────────────────────────────┐  │
│  │  Worker Node 1                                      │  │
│  │  (Kubelet, Kube-proxy, Pods)                        │  │
│  └────────────────────────────────────────────────────┘  │
│                            │                             │
│  ┌────────────────────────────────────────────────────┐  │
│  │  Worker Node 2                                      │  │
│  │  (Kubelet, Kube-proxy, Pods)                        │  │
│  └────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────┘
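
On self-managed clusters the control plane components themselves run as pods in the kube-system namespace, so they can be listed with kubectl (managed offerings such as EKS or GKE hide some of them):

kubectl get nodes -o wide          # control plane and worker nodes
kubectl get pods -n kube-system    # API server, scheduler, controller manager, etc.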

Pods

Basic Pod

pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    ports:
    - containerPort: 80

Create Pod:

kubectl create -f pod.yaml
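
To verify the pod started correctly (the name matches the manifest above):

kubectl get pods              # STATUS should become Running
kubectl describe pod my-pod   # events, scheduling, and container details
kubectl logs my-pod           # container logs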

Multi-Container Pod

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app
    image: nginx:latest
  - name: sidecar
    image: busybox:latest
    command: ['sh', '-c', 'while true; do sleep 3600; done']
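
With multiple containers in one pod, kubectl commands take a -c flag to pick the container (names from the manifest above):

kubectl logs multi-container-pod -c sidecar          # logs from the sidecar container
kubectl exec -it multi-container-pod -c app -- sh    # shell into the app container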

Deployments

Basic Deployment

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

Create Deployment:

kubectl apply -f deployment.yaml
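
To confirm that all replicas are up and the rollout finished:

kubectl rollout status deployment/nginx-deployment   # waits until the rollout completes
kubectl get deployments                               # desired vs. ready replica counts
kubectl get pods -l app=nginx                         # pods managed by this deployment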

Rolling Update

Update Image:

kubectl set image deployment/nginx-deployment nginx=nginx:1.22

Rolling Update Strategy:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
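
If a new image misbehaves, the rollout can be inspected and rolled back:

kubectl rollout history deployment/nginx-deployment   # list revisions
kubectl rollout undo deployment/nginx-deployment      # revert to the previous revision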

Services

ClusterIP Service

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
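
A ClusterIP service is only reachable from inside the cluster. Other pods can use the service's DNS name (assuming the default namespace and standard cluster DNS):

curl http://nginx-service                             # from a pod in the same namespace
curl http://nginx-service.default.svc.cluster.local   # fully qualified name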

LoadBalancer Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

NodePort Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
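
A NodePort service opens the same static port (30000-32767 by default) on every node, so it is reachable from outside the cluster; <node-ip> is a placeholder for any node's address:

curl http://<node-ip>:30080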

Scaling

Manual Scaling

Scale Deployment:

kubectl scale deployment nginx-deployment --replicas=5

Horizontal Pod Autoscaler

hpa.yaml:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Create HPA:

kubectl apply -f hpa.yaml
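
CPU-based autoscaling requires the metrics-server add-on and CPU requests on the target containers. The HPA's behavior can then be observed with:

kubectl get hpa nginx-hpa        # current vs. target utilization and replica count
kubectl describe hpa nginx-hpa   # scaling events and conditions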

ConfigMaps and Secrets

ConfigMap

configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://localhost:5432/mydb"
  log_level: "info"
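
Equivalently, the ConfigMap can be created from literals on the command line:

kubectl create configmap app-config \
  --from-literal=database_url=postgresql://localhost:5432/mydb \
  --from-literal=log_level=info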

Use in Pod:

spec:
  containers:
  - name: app
    image: myapp:latest
    envFrom:
    - configMapRef:
        name: app-config

Secret

secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=  # base64 encoded
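
The data value must be base64 encoded when written in the manifest; kubectl can also create the secret from a literal and handle the encoding itself:

echo -n 'password' | base64        # cGFzc3dvcmQ=
kubectl create secret generic app-secret --from-literal=password=password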

Use in Pod:

spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: password

Persistent Storage

PersistentVolume

pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /mnt/data

PersistentVolumeClaim

pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage

Use in Pod:

spec:
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: storage
      mountPath: /data
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: pvc-claim
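
After applying the PV and PVC, both should report a Bound status before the pod that mounts the claim is scheduled:

kubectl get pv,pvc    # STATUS should show Bound for both objects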

Ingress

Ingress Resource

ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
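
An Ingress only takes effect if an ingress controller (for example ingress-nginx) is installed in the cluster. TLS can be added by referencing a certificate secret; the secret name below is illustrative:

spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret   # hypothetical TLS secret holding the certificate and key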

Best Practices

1. Resource Management

  • Set resource requests and limits (see the example after this list)
  • Use namespaces for isolation
  • Monitor resource usage
  • Plan for capacity
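
A minimal example of per-container requests and limits (values are illustrative and should be tuned per workload):

resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"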

2. Security

  • Use secrets for sensitive data
  • Implement RBAC
  • Use network policies (example after this list)
  • Scan container images
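
For example, a default-deny policy that blocks all incoming pod traffic in a namespace (a sketch; enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico or Cilium):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress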

3. High Availability

  • Use multiple replicas
  • Distribute across nodes
  • Implement health checks
  • Use readiness and liveness probes (example after this list)
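
Probes are declared per container; a minimal sketch assuming the application exposes a /healthz endpoint on port 80 (the path is an assumption):

livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5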

4. Monitoring

  • Monitor pod health
  • Track resource usage
  • Set up alerts
  • Use monitoring tools

What Interviewers Look For

Container Orchestration Understanding

  1. Kubernetes Concepts
    • Understanding of pods, services, deployments
    • Scaling mechanisms
    • Service discovery
    • Red Flags: No K8s understanding, wrong concepts, no scaling
  2. Deployment Strategies
    • Rolling updates
    • Blue-green deployment
    • Canary deployment
    • Red Flags: No deployment strategy, poorly handled updates
  3. Resource Management
    • Resource requests and limits
    • Namespace isolation
    • Resource quotas
    • Red Flags: No resource management, poor isolation, no quotas

Problem-Solving Approach

  1. Application Deployment
    • Pod design
    • Service configuration
    • Scaling strategy
    • Red Flags: Poor pod design, wrong services, no scaling
  2. High Availability
    • Replica strategy
    • Health checks
    • Failure handling
    • Red Flags: No HA, poor health checks, no failure handling

System Design Skills

  1. Container Orchestration Architecture
    • Kubernetes cluster design
    • Service mesh integration
    • Resource allocation
    • Red Flags: No clear architecture, poor integration, no resource planning
  2. Scalability
    • Horizontal scaling
    • Auto-scaling configuration
    • Performance optimization
    • Red Flags: No scaling, poor auto-scaling, no optimization

Communication Skills

  1. Clear Explanation
    • Explains Kubernetes concepts
    • Discusses trade-offs
    • Justifies design decisions
    • Red Flags: Unclear explanations, no justification, confusing

Meta-Specific Focus

  1. Container Orchestration Expertise
    • Understanding of containerization
    • Kubernetes mastery
    • Deployment strategies
    • Key: Demonstrate orchestration expertise
  2. System Design Skills
    • Can design containerized systems
    • Understands orchestration challenges
    • Makes informed trade-offs
    • Key: Show practical orchestration design skills

Summary

Kubernetes Key Points:

  • Container Orchestration: Manages containerized applications
  • Auto-Scaling: Automatically scales applications
  • Self-Healing: Restarts failed containers
  • Service Discovery: Automatic service discovery
  • Load Balancing: Distributes traffic

Common Use Cases:

  • Microservices deployment
  • Container orchestration
  • Auto-scaling applications
  • Service mesh
  • CI/CD pipelines
  • Cloud-native applications

Best Practices:

  • Set resource requests and limits
  • Use namespaces for isolation
  • Implement health checks
  • Use ConfigMaps and Secrets
  • Plan for high availability
  • Monitor resource usage
  • Implement security best practices

Kubernetes is a powerful platform for orchestrating containerized applications with automatic scaling and self-healing capabilities.