Difficulty: Junior
Answer:
Kubernetes (K8s) is an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications.
Problems it Solves:
Key Concepts:
Real-world Context: Instead of manually managing 100 containers across 10 servers, Kubernetes automates deployment, scaling, and health checks.
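To make the declarative model concrete, a minimal sketch (the name and image are placeholders): you describe the desired state, and Kubernetes works continuously to keep the cluster in that state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # placeholder name
spec:
  replicas: 3             # desired state: keep 3 copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx      # placeholder image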
Follow-up: What’s the difference between Kubernetes and Docker Swarm? (K8s: more features, complex, industry standard. Swarm: simpler, Docker-native)
Difficulty: Mid
Answer:
Master Node (Control Plane):
Worker Node:
Communication Flow:
Real-world Context: Master node manages cluster state. Worker nodes run your applications. If master fails, cluster management stops (but apps keep running).
Follow-up: What happens if the master node fails? (Cluster management stops, but worker nodes continue running existing pods. Need HA setup with multiple masters)
Difficulty: Mid
Answer:
A Pod is the smallest deployable unit in Kubernetes - a group of one or more containers that share storage and network.
Pod Characteristics:
Why Pods, Not Containers?
Example:
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: web
    image: nginx
  - name: log-collector
    image: fluentd
Real-world Context: Web server pod with nginx container and fluentd sidecar for log collection. They share network and can communicate via localhost.
Follow-up: Can you run multiple containers in a pod? (Yes, but usually one main container + sidecars. Don’t put multiple apps in one pod)
Difficulty: Mid
Answer:
Namespaces provide logical separation and resource isolation within a cluster.
Default Namespaces:
default: User resources (if not specified)
kube-system: System components
kube-public: Publicly accessible resources
kube-node-lease: Node heartbeat
Use Cases:
Creating Namespace:
kubectl create namespace production
kubectl apply -f app.yaml -n production
Resource Quotas:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
Real-world Context: Separate dev and prod in same cluster. Dev team can’t accidentally affect prod resources.
Follow-up: Can pods in different namespaces communicate? (Yes, using service DNS: service-name.namespace.svc.cluster.local)
Difficulty: Mid
Answer:
Pod phases represent where a pod is in its lifecycle:
Phases:
Checking Phase:
kubectl get pods
kubectl describe pod <pod-name>
Container States (within Pod):
Real-world Context: Pod stuck in Pending → check node resources, image pull issues, node selectors. Pod in Failed → check container logs.
Follow-up: What causes a pod to be in Pending state? (No available nodes, image pull errors, resource constraints, node selectors/affinity)
Difficulty: Mid
Answer:
Requests: Guaranteed resources (the scheduler uses these for placement)
Limits: Maximum resources (the container cannot exceed these)
Example:
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
CPU Units:
1000m = 1 CPU core
500m = 0.5 cores
0.5 = 0.5 cores
Memory Units:
64Mi = 64 mebibytes
1Gi = 1 gibibyte
QoS Classes:
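The QoS class depends on how requests and limits are set: as a rough sketch, requests equal to limits for every container gives Guaranteed, requests lower than limits gives Burstable, and no requests or limits gives BestEffort. Illustrative values only:
# Guaranteed QoS: requests equal limits for every resource
resources:
  requests:
    memory: "128Mi"
    cpu: "500m"
  limits:
    memory: "128Mi"
    cpu: "500m"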
Scheduling:
Real-world Context: Web app requests 256Mi memory, limit 512Mi. Scheduler places on node with 256Mi free. If app uses 600Mi, it’s killed (OOMKilled).
Follow-up: What happens if a container exceeds its memory limit? (Container is killed with OOMKilled status, pod may be restarted)
Difficulty: Mid
Answer:
Init containers run before the main containers in a pod and must complete successfully before the main containers start.
Characteristics:
Use Cases:
Example:
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z db 5432; do sleep 2; done']
  containers:
  - name: app
    image: myapp
Real-world Context: App depends on database. Init container waits for DB to be ready, then main app container starts.
Follow-up: What’s the difference between init containers and sidecars? (Init: run before main, sequential. Sidecar: run alongside main, parallel)
Difficulty: Mid
Answer:
Liveness Probe:
Readiness Probe:
Probe Types:
Example:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
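The httpGet probes above are one of three probe mechanisms; for comparison, hedged sketches of tcpSocket and exec probes (the port and command are illustrative):
# tcpSocket probe: the kubelet opens a TCP connection to the port
livenessProbe:
  tcpSocket:
    port: 5432                        # illustrative port
  periodSeconds: 10
# exec probe: runs a command in the container; exit code 0 means healthy
readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]    # illustrative command
  periodSeconds: 5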
Real-world Context: App takes 20s to start. Readiness probe waits 20s before adding to load balancer. Liveness probe restarts if app hangs.
Follow-up: What happens if liveness probe fails? (The container is killed and restarted. Pod status may show CrashLoopBackOff)
Difficulty: Mid
Answer:
A Service provides stable network access to a set of pods, abstracting pod IPs which change.
Service Types:
ClusterIP (default):
NodePort:
LoadBalancer:
ExternalName:
Example:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
Real-world Context: Frontend pods change IPs. Service provides stable endpoint. Frontend Service (ClusterIP) → Backend Service (ClusterIP) → Database (ExternalName).
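The ExternalName type mentioned above maps a cluster-internal DNS name to an external hostname; a hedged sketch (the hostname is illustrative):
# Pods can reach the external database via the in-cluster name "database"
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ExternalName
  externalName: db.example.com   # illustrative external hostname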
Follow-up: How does Service select pods? (Using label selectors: selector: { app: web })
Difficulty: Mid
Answer:
Kubernetes has built-in DNS (CoreDNS) that provides service discovery.
DNS Names:
service-name.namespace.svc.cluster.local
pod-ip.namespace.pod.cluster.local
service-name (same namespace), service-name.namespace (different namespace)
How it Works:
Example:
# From pod, access service:
curl http://web-service.default.svc.cluster.local
# Or short form (same namespace):
curl http://web-service
Real-world Context: Frontend pod needs to call backend API. Use DNS name backend-service instead of hardcoding IPs.
Follow-up: What’s the difference between Service DNS and Pod DNS? (Service: stable, multiple pods. Pod: specific pod IP, changes)
Difficulty: Mid
Answer:
Ingress provides HTTP/HTTPS routing to services based on hostname/path, acting as a reverse proxy.
Ingress vs Service:
Ingress Controller:
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
Real-world Context: Multiple apps on same domain: /app → frontend service, /api → backend service. Ingress routes based on path.
Follow-up: Do you need an Ingress Controller? (Yes, Ingress is just a spec. Need controller like nginx-ingress or AWS ALB Ingress Controller)
Difficulty: Mid
Answer:
Deployment manages ReplicaSets, which manage Pods. Provides declarative updates and rollback.
Hierarchy:
Deployment Features:
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.20
Rolling Update:
maxSurge: Max pods over desired (default 25%)
maxUnavailable: Max pods unavailable (default 25%)
Real-world Context: Update app from v1 to v2. Deployment creates a new ReplicaSet and gradually replaces pods. If issues arise, roll back to v1.
Follow-up: How do you rollback a deployment? (kubectl rollout undo deployment/web-deployment)
Difficulty: Mid
Answer:
RollingUpdate (default):
Recreate:
Configuration:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
RollingUpdate Parameters:
maxSurge: Can exceed desired replicas during update
maxUnavailable: Can be below desired replicas
Real-world Context: Stateless web app → RollingUpdate (zero downtime). Database migration → Recreate (can’t run two versions).
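For the Recreate case described above, a minimal sketch: all old pods are terminated before new ones are created, so there is downtime but never two versions running at once.
spec:
  strategy:
    type: Recreate   # delete all old pods first, then create the new ones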
Follow-up: What’s the difference between maxSurge and maxUnavailable? (maxSurge: extra pods allowed, maxUnavailable: pods that can be down)
Difficulty: Mid
Answer:
ReplicaSet ensures a specified number of pod replicas are running.
ReplicaSet:
Deployment:
When to Use ReplicaSet Directly:
Example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    # pod template
Real-world Context: Always use Deployment. ReplicaSet is lower-level component. Deployment = ReplicaSet + update capabilities.
Follow-up: Can you update a ReplicaSet? (Yes, but no rollback. Use Deployment for updates)
Difficulty: Mid
Answer:
ConfigMaps:
Secrets:
Creating ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://db:5432/mydb"
  log_level: "info"
Using in Pod:
env:
- name: DB_URL
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: database_url
Creating Secret:
kubectl create secret generic db-secret \
--from-literal=username=admin \
--from-literal=password=secret123
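Consuming the Secret in a pod mirrors the ConfigMap pattern above; a hedged sketch using secretKeyRef (the environment variable name is illustrative):
env:
- name: DB_PASSWORD          # illustrative variable name
  valueFrom:
    secretKeyRef:
      name: db-secret
      key: password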
Real-world Context: App config (API endpoint, log level) → ConfigMap. Database password → Secret. Mount as env vars or files.
Follow-up: Are Secrets encrypted? (Base64 encoded by default, not encrypted. Use external secret management or enable encryption at rest)
Difficulty: Senior
Answer:
Problem: Changing ConfigMap/Secret doesn’t automatically update pods using them.
Solutions:
1. Restart Pods:
kubectl rollout restart deployment/web-deployment
2. Use Reloader (third-party):
3. Volume Mounts with Reload:
4. Use Init Containers:
5. External Config Management:
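For option 3 above, a minimal sketch of mounting a ConfigMap as a volume: the kubelet updates the mounted files when the ConfigMap changes (with some delay), so an app that re-reads its config files can pick up changes without a restart. Names reuse the earlier app-config example.
spec:
  containers:
  - name: app
    image: myapp
    volumeMounts:
    - name: config
      mountPath: /etc/config   # keys appear as files, e.g. /etc/config/log_level
  volumes:
  - name: config
    configMap:
      name: app-config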
Best Practice:
Real-world Context: Update database URL in ConfigMap. Need to restart pods for changes to take effect. Use kubectl rollout restart.
Follow-up: What’s the difference between env vars and volume mounts for ConfigMaps? (Env: set at startup, Volume: can be watched by app)
Difficulty: Mid
Answer:
Volumes:
PersistentVolume (PV):
PersistentVolumeClaim (PVC):
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Access Modes:
Real-world Context: Database pod needs persistent storage. Create PVC, pod mounts it. If pod deleted, data persists. New pod can mount same PVC.
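To illustrate the “pod mounts it” step, a hedged sketch of a pod using the db-pvc claim from the example above (the image and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: postgres            # illustrative image
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-pvc        # the PVC defined above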
Follow-up: What’s the difference between PV and PVC? (PV: storage resource, PVC: request for storage)
Difficulty: Mid
Answer:
Check Pod Status:
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
Common Issues:
1. Pending:
2. ImagePullBackOff:
3. CrashLoopBackOff:
4. Error:
Debugging Steps:
kubectl describe pod → Check events, status
kubectl logs → Check container logs
kubectl exec → Debug inside container
Real-world Context: Pod stuck in ImagePullBackOff. Check image name, verify registry access, check imagePullSecrets if private registry.
Follow-up: How do you debug a container that crashes immediately? (Check logs, describe pod for events, exec into container if possible, check resource limits)
Difficulty: Senior
Answer:
Common Networking Issues:
1. Pods can’t communicate:
kubectl get endpoints
2. Service not accessible:
3. DNS not working:
nslookup service-name
Debugging Commands:
# Check Service endpoints
kubectl get endpoints <service-name>
# Check DNS
kubectl run -it --rm debug --image=busybox -- nslookup <service-name>
# Test connectivity
kubectl exec <pod> -- curl <service-name>
# Check NetworkPolicies
kubectl get networkpolicies
# Check Service details
kubectl describe svc <service-name>
Real-world Context: Frontend can’t reach backend. Check Service selector, verify endpoints, test DNS, check NetworkPolicies.
Follow-up: What’s the difference between ClusterIP and NodePort? (ClusterIP: internal only, NodePort: exposed on node IP)
Difficulty: Senior
Answer:
RBAC (Role-Based Access Control) controls who can do what in Kubernetes.
Components:
Role/ClusterRole:
RoleBinding/ClusterRoleBinding:
Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Best Practices:
Real-world Context: Developer needs read-only access to pods in dev namespace. Create Role with get/list verbs, bind to user.
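For comparison with the namespaced Role above, a hedged sketch of a ClusterRole for cluster-scoped resources such as nodes (the name is illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader            # illustrative name
rules:
- apiGroups: [""]
  resources: ["nodes"]         # nodes are cluster-scoped, so a namespaced Role cannot grant this
  verbs: ["get", "list"]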
Follow-up: What’s the difference between Role and ClusterRole? (Role: namespace-scoped, ClusterRole: cluster-scoped)
Kubernetes is complex but powerful. Master these concepts: pods, services, deployments, ConfigMaps/Secrets, and troubleshooting. Practice with hands-on labs and understand the architecture.
Next Steps: