Difficulty: Junior
Answer:
CI (Continuous Integration): developers merge small changes frequently; every merge triggers an automated build and test run.
CD (Continuous Delivery/Deployment): every change that passes CI is automatically prepared for release (Delivery) or released to production (Deployment).
Benefits: fast feedback, fewer integration problems, smaller and safer releases, repeatable automated deployments.
CI/CD Pipeline Stages: commit → build → test → package → deploy.
Real-world Context: Instead of manual builds and deployments taking hours, CI/CD automates everything. Push code → tests run → deploy automatically.
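A minimal sketch of such a pipeline in GitLab CI syntax (job names and the deploy script are hypothetical):
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - npm install
    - npm test

deploy:
  stage: deploy
  script:
    - ./deploy.sh                  # hypothetical deploy script
  only:
    - main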
Follow-up: What’s the difference between Continuous Delivery and Continuous Deployment? (Delivery: auto to staging, manual to prod. Deployment: auto to prod)
Difficulty: Junior
Answer:
CI (Continuous Integration): automatically build and test every change merged into the shared branch.
CD - Continuous Delivery: every change that passes the pipeline is automatically deployed to staging and kept ready for production; a human triggers the production release.
CD - Continuous Deployment: every change that passes the pipeline is deployed to production automatically, with no manual gate.
Key Difference: whether the final production deployment requires manual approval.
Real-world Context: a Delivery team deploys to staging on every merge and releases to production on demand; a Deployment team ships every green build straight to production.
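An illustrative sketch of the difference in GitLab CI syntax (job names and the deploy script are hypothetical); the only structural change is a manual gate on the production job:
deploy_prod_delivery:
  stage: deploy
  script:
    - ./deploy.sh production       # hypothetical deploy script
  when: manual                     # Continuous Delivery: a human approves the production release

deploy_prod_deployment:
  stage: deploy
  script:
    - ./deploy.sh production
  # no manual gate: Continuous Deployment ships every green build to production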
Follow-up: When would you use Continuous Deployment vs Delivery? (Deployment: high test coverage, feature flags, canary deployments. Delivery: need manual approval, compliance)
Difficulty: Mid
Answer:
Typical Pipeline Stages:
Example Pipeline:
Commit → Build → Test → Package → Deploy Staging → Test Staging → Approve → Deploy Prod → Verify
Real-world Context: Web application pipeline: Checkout → npm install → npm test → build → Docker build → deploy to ECS staging → e2e tests → approve → deploy to ECS prod.
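A hedged sketch of that web application pipeline in GitLab CI syntax (job names are illustrative; the ECS deploy script is a placeholder):
stages:
  - build
  - test
  - package
  - deploy-staging
  - e2e
  - deploy-prod

build:
  stage: build
  script:
    - npm install
    - npm run build

test:
  stage: test
  script:
    - npm test

package:
  stage: package
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .

deploy_staging:
  stage: deploy-staging
  script:
    - ./deploy-ecs.sh staging      # placeholder for the ECS deployment step

e2e:
  stage: e2e
  script:
    - npm run test:e2e

deploy_prod:
  stage: deploy-prod
  script:
    - ./deploy-ecs.sh production   # placeholder for the ECS deployment step
  when: manual                     # the "approve" gate before production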
Follow-up: What happens if a stage fails? (Pipeline stops, notification sent, fix issue and retry)
Difficulty: Mid
Answer:
Jenkins Master:
Jenkins Agents (Nodes):
Communication:
Benefits of Agents:
Real-world Context: Master on single server. Agents on multiple servers (Linux for builds, Windows for tests, macOS for iOS builds).
Follow-up: What’s the difference between static and dynamic agents? (Static: always running, Dynamic: created on-demand, destroyed after)
Difficulty: Mid
Answer:
A Jenkinsfile is a text file that defines the pipeline as code and is stored in the repository.
Benefits: version-controlled, reviewed like any other code, reproducible on every branch.
Syntax: two flavors, Declarative and Scripted.
Declarative Example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl apply -f k8s/'
            }
        }
    }
}
Scripted Example:
node {
    stage('Build') {
        sh 'mvn clean package'
    }
    stage('Test') {
        sh 'mvn test'
    }
}
Real-world Context: Pipeline defined in Jenkinsfile in repo. Any branch can have different pipeline. Changes reviewed like code.
Follow-up: What’s the difference between declarative and scripted pipelines? (Declarative: simpler, structured. Scripted: more flexible, Groovy)
Difficulty: Mid
Answer:
Stages:
Steps:
Post Actions:
Example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                }
                failure {
                    mail to: 'team@example.com',
                         subject: "Build Failed",
                         body: "Build failed. Check ${env.BUILD_URL}"
                }
            }
        }
    }
    post {
        always {
            cleanWs() // Clean workspace
        }
        success {
            echo 'Pipeline succeeded!'
        }
    }
}
Real-world Context: After test stage, always publish test results. On failure, send email notification. After pipeline, always clean workspace.
Follow-up: What’s the difference between post in stage and post in pipeline? (Stage post: after that stage, Pipeline post: after all stages)
Difficulty: Mid
Answer:
Jenkins Credentials:
Using Credentials:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'aws-credentials',
                    usernameVariable: 'AWS_ACCESS_KEY',
                    passwordVariable: 'AWS_SECRET_KEY'
                )]) {
                    sh 'aws s3 cp file.txt s3://bucket/'
                }
            }
        }
    }
}
Best Practices:
Real-world Context: Pipeline needs AWS credentials. Store in Jenkins Credentials, reference by ID. Credentials injected as env vars, not logged.
Follow-up: How do you prevent secrets from appearing in logs? (Use withCredentials, don’t echo secrets, mask passwords in console)
Difficulty: Mid
Answer:
GitLab CI/CD is built into GitLab and provides CI/CD capabilities without a separate tool.
Components:
How it Works:
Pipelines are defined in a .gitlab-ci.yml file in the repository root. Example .gitlab-ci.yml:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - dist/

test:
  stage: test
  script:
    - npm test

deploy:
  stage: deploy
  script:
    - kubectl apply -f k8s/
  only:
    - main
Real-world Context: Push to GitLab → pipeline runs automatically → build → test → deploy to prod (if main branch).
Follow-up: What’s the difference between GitLab CI and Jenkins? (GitLab CI: integrated, YAML-based. Jenkins: separate tool, more plugins, Groovy)
Difficulty: Mid
Answer:
Stages: defined in the top-level stages: section.
Jobs:
Artifacts:
Example:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn clean package
  artifacts:
    paths:
      - target/*.jar
    expire_in: 1 week

test:
  stage: test
  script:
    - mvn test
  dependencies:
    - build # Get artifacts from build job

deploy:
  stage: deploy
  script:
    - scp target/*.jar server:/app/
  only:
    - main
Real-world Context: Build job creates JAR file. Test job uses JAR. Deploy job deploys JAR. Artifacts passed between jobs.
Follow-up: What’s the difference between artifacts and cache? (Artifacts: job outputs, passed to next jobs. Cache: dependencies, speed up builds)
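To make the distinction concrete, a minimal sketch for a Node.js project (cache key and paths are illustrative):
build:
  stage: build
  cache:
    key:
      files:
        - package-lock.json        # cache keyed on the lockfile; reused across pipelines to speed up installs
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/                      # artifact: a job output handed to later jobs in the same pipeline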
Difficulty: Mid
Answer:
Variables:
Using Variables:
deploy:
  script:
    - echo $CI_COMMIT_SHA
    - echo $DEPLOY_TOKEN
  variables:
    DEPLOY_ENV: "production"
Predefined Variables:
CI_COMMIT_SHA: commit hash
CI_COMMIT_REF_NAME: branch/tag name
CI_JOB_NAME: job name
CI_PIPELINE_ID: pipeline ID
Environments:
Example:
deploy_staging:
  stage: deploy
  script:
    - deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com

deploy_prod:
  stage: deploy
  script:
    - deploy.sh production
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - main
Real-world Context: Use variables for API keys, environment names. Use environments to track deployments, show URLs, enable manual deployments.
Follow-up: What’s the difference between project and group variables? (Project: specific to project, Group: shared across projects in group)
Difficulty: Mid
Answer:
GitHub Actions is a CI/CD platform integrated into GitHub that automates workflows.
Components:
Workflow File:
YAML files stored in .github/workflows/ in the repository.
Example:
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
Real-world Context: Push to GitHub → workflow runs → checkout code → setup environment → install deps → test → deploy.
Follow-up: What’s the difference between GitHub Actions and GitLab CI? (GitHub Actions: GitHub-native, marketplace. GitLab CI: GitLab-native, integrated)
Difficulty: Mid
Answer:
Workflows:
Jobs:
Run in parallel by default; order them with needs, make them conditional with if.
Steps:
Example:
name: Build and Test

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Build
        run: npm run build
      - name: Upload artifact
        uses: actions/upload-artifact@v2
        with:
          name: dist
          path: dist/

  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
Real-world Context: Build job compiles code, creates artifact. Test job runs tests. Test depends on build (needs: build).
Follow-up: How do jobs share data? (Artifacts: upload-artifact/download-artifact actions, or job outputs)
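For example, the test job above could download the uploaded artifact before running tests; a sketch, assuming the artifact name matches the upload step:
test:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - uses: actions/checkout@v2
    - name: Download build output
      uses: actions/download-artifact@v2
      with:
        name: dist                 # must match the name given to upload-artifact
        path: dist/
    - name: Install dependencies
      run: npm install
    - name: Run tests
      run: npm test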
Difficulty: Mid
Answer:
Secrets:
Referenced in workflows with the ${{ secrets.NAME }} expression syntax.
Using Secrets:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Deploy
      env:
        AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY }}
        AWS_SECRET_KEY: ${{ secrets.AWS_SECRET_KEY }}
      run: |
        aws s3 cp file.txt s3://bucket/
Environments:
Example:
deploy:
  runs-on: ubuntu-latest
  environment: production
  steps:
    - name: Deploy to production
      env:
        API_KEY: ${{ secrets.API_KEY }}
      run: deploy.sh
Protection Rules:
Real-world Context: Store AWS credentials as secrets. Use environment for production with required reviewers. Deploy requires manual approval.
Follow-up: What’s the difference between repository and environment secrets? (Repository: available to all workflows, Environment: specific to environment, can have reviewers)
Difficulty: Senior
Answer:
Challenges:
Pipeline Design:
1. Per-Service Pipelines:
2. Shared Pipeline Template (see the include sketch after the example structure):
3. Dependency Management:
4. Testing Strategy:
5. Deployment Strategy:
Example Structure:
service-a/
├── .gitlab-ci.yml
└── src/
service-b/
├── .gitlab-ci.yml
└── src/
shared/
└── pipeline-template.yml
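A sketch of how a service pipeline might consume the shared template via GitLab CI's include keyword (the project path and hidden job name are hypothetical):
# service-a/.gitlab-ci.yml
include:
  - project: 'myorg/shared'            # hypothetical repo that holds pipeline-template.yml
    file: 'pipeline-template.yml'

variables:
  SERVICE_NAME: service-a

deploy:
  extends: .deploy_template            # hypothetical hidden job defined in the shared template
  environment:
    name: production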
Real-world Context: 10 microservices. Each has own pipeline. Shared template for common steps. Services deploy independently. Integration tests verify compatibility.
Follow-up: How do you handle database migrations in microservices? (Separate migration pipeline, versioned migrations, backward compatible changes)
Difficulty: Senior
Answer:
Blue-Green Deployment:
Benefits:
CI/CD Integration:
deploy_green:
  script:
    - kubectl apply -f k8s/green/
    - kubectl rollout status deployment/green-app
    - ./smoke-tests.sh green

switch_traffic:
  script:
    - kubectl patch service app -p '{"spec":{"selector":{"version":"green"}}}'
Canary Deployment:
Benefits:
CI/CD Integration:
deploy_canary:
  script:
    - kubectl set image deployment/app app=myapp:v2
    - kubectl scale deployment/app --replicas=1   # 10% traffic
    - sleep 300                                   # Monitor
    - kubectl scale deployment/app --replicas=5   # 50% traffic
    - sleep 300
    - kubectl scale deployment/app --replicas=10  # 100%
Real-world Context: Blue-green: Switch entire traffic instantly, instant rollback. Canary: Gradual rollout, catch issues early, automatic rollback.
Follow-up: When would you use blue-green vs canary? (Blue-green: simple apps, instant switch. Canary: complex apps, gradual risk reduction)
Difficulty: Senior
Answer:
Infrastructure Testing:
Testing Types:
1. Syntax/Validation:
test_terraform:
  script:
    - terraform init
    - terraform validate
    - terraform fmt -check
2. Plan Review:
terraform_plan:
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan

review_plan:
  script:
    - terraform show tfplan
  when: manual
3. Security Scanning:
4. Cost Estimation:
5. Compliance Testing:
Example:
test_infrastructure:
  script:
    - terraform init
    - terraform validate
    - checkov -d .
    - infracost breakdown --path .
Real-world Context: Terraform changes → validate → security scan → cost estimate → plan review → apply. Fail pipeline on security issues.
Follow-up: How do you test infrastructure changes in staging before production? (Apply to staging first, run tests, then promote to prod)
Difficulty: Senior
Answer:
Pipeline Design:
Security:
Testing:
Deployment:
Monitoring:
Code Quality:
Real-world Context: Fast pipelines (< 10 min), security scanning, parallel tests, canary deployments, monitoring, rollback capability.
Follow-up: How do you optimize slow pipelines? (Cache dependencies, parallelize jobs, run only relevant tests, use faster runners)
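A hedged sketch of two of those optimizations, dependency caching and parallel test jobs, in GitLab CI syntax (values are illustrative; the sharding flag depends on the test runner):
test:
  stage: test
  parallel: 4                          # split the suite across 4 concurrent jobs
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/                  # reuse dependencies between pipeline runs
  script:
    - npm ci
    - npm test -- --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL   # hypothetical flag; depends on the test runner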
Difficulty: Senior
Answer:
Challenges:
Strategies:
1. Separate Migration Pipeline:
2. Versioned Migrations:
3. Backward Compatible Changes:
4. Blue-Green with Database:
5. Feature Flags:
Example:
migrate_db:
  script:
    - flyway migrate
  environment: staging
  when: manual

deploy_app:
  script:
    - kubectl apply -f k8s/
  needs:
    - migrate_db
Real-world Context: Schema change: Add nullable column → Deploy app that handles both → Make required → Remove old code. Each step in separate deployment.
Follow-up: How do you rollback a database migration? (Create rollback migration, or restore from backup. Prefer backward compatible changes)
Difficulty: Mid
Answer:
Pipeline Stages:
1. Build:
2. Build Docker Image:
3. Push to Registry:
4. Deploy:
Example:
build:
  script:
    - npm install
    - npm test
    - npm run build

build_image:
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker tag myapp:$CI_COMMIT_SHA myapp:latest
    - docker push myapp:$CI_COMMIT_SHA
    - docker push myapp:latest

deploy:
  script:
    - kubectl set image deployment/app app=myapp:$CI_COMMIT_SHA
    - kubectl rollout status deployment/app
Best Practices:
Real-world Context: Build app → test → build Docker image → push to ECR → update K8s deployment → verify health.
Follow-up: How do you optimize Docker builds in CI/CD? (Multi-stage builds, layer caching, build args, use BuildKit)
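A sketch of layer caching with BuildKit added to the build_image job (image names are illustrative):
build_image:
  variables:
    DOCKER_BUILDKIT: "1"               # enable BuildKit
  script:
    - docker pull myapp:latest || true # warm the layer cache; ignore failure on the first build
    - docker build --cache-from myapp:latest --build-arg BUILDKIT_INLINE_CACHE=1 -t myapp:$CI_COMMIT_SHA .
    - docker push myapp:$CI_COMMIT_SHA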
Difficulty: Senior
Answer:
Security Scanning Types:
1. SAST (Static Application Security Testing):
2. Dependency Scanning:
3. Container Scanning:
4. Infrastructure Scanning:
5. Secrets Scanning:
Example Pipeline:
sast:
  script:
    - sonar-scanner

dependency_scan:
  script:
    - snyk test

container_scan:
  script:
    - trivy image myapp:$CI_COMMIT_SHA

infrastructure_scan:
  script:
    - checkov -d terraform/

secrets_scan:
  script:
    - git-secrets --scan
Best Practices:
Real-world Context: Pipeline runs: SAST → Dependency scan → Build → Container scan → Deploy. Fail on high-severity vulnerabilities.
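For instance, the container scan can be made to fail only on high-severity findings; a sketch using Trivy's severity and exit-code flags (thresholds are a policy choice):
container_scan:
  script:
    # fail the job only when HIGH or CRITICAL findings exist;
    # known false positives can be suppressed in a .trivyignore file
    - trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:$CI_COMMIT_SHA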
Follow-up: How do you handle false positives in security scans? (Tune tool rules, suppress known false positives, review with security team)
CI/CD is essential for modern software delivery. Master pipeline design, security, testing, and deployment strategies. Practice with different tools and understand when to use each.
Next Steps: