docker


Here are comprehensive Docker interview questions organized by level:


🟢 Beginner Level

Concepts

Q1: What is Docker and why is it used?

Docker is an open-source containerization platform that packages applications and their dependencies into lightweight, portable containers — ensuring they run consistently across any environment (dev, staging, production).

Q2: What is the difference between a container and a virtual machine?

              Container               Virtual Machine
OS            Shares host OS kernel   Has its own OS
Size          Lightweight (MBs)       Heavy (GBs)
Startup       Seconds                 Minutes
Isolation     Process-level           Full hardware-level
Performance   Near-native             Overhead

Q3: What is a Docker image vs a Docker container?

  • Image — A read-only blueprint/template used to create containers
  • Container — A running instance of an image

Q4: What is a Dockerfile?

A text file containing step-by-step instructions to build a Docker image automatically.

Q5: What is Docker Hub?

A public cloud-based registry where Docker images are stored, shared, and distributed.


Basic Commands

Q6: What are the most common Docker commands?

docker build -t myapp .          # Build image
docker run -d -p 8080:80 myapp   # Run container
docker ps                        # List running containers
docker ps -a                     # List all containers
docker stop <container_id>       # Stop container
docker rm <container_id>         # Remove container
docker images                    # List images
docker rmi <image_id>            # Remove image
docker logs <container_id>       # View logs
docker exec -it <id> /bin/bash   # Enter container shell

Q7: What is the difference between CMD and ENTRYPOINT?

           CMD                                  ENTRYPOINT
Purpose    Default command, easily overridden   Fixed command, always executes
Override   Yes, at runtime                      Only with the --entrypoint flag
Use case   Flexible defaults                    Enforced commands

ENTRYPOINT ["python"]   # always runs python
CMD ["app.py"]          # default argument, can be overridden
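The merge rule can be sketched without Docker: the container's effective argv is ENTRYPOINT plus either the runtime arguments (if any were passed to `docker run`) or CMD. The values below are illustrative.

```shell
# Docker builds the container's argv as ENTRYPOINT + (runtime args or CMD).
# A local sketch of that merge rule; no Docker needed.
entrypoint="python"
cmd="app.py"        # image default
runtime_args=""     # what you'd pass after the image name in `docker run`

argv="${entrypoint} ${runtime_args:-$cmd}"   # runtime args replace CMD only
echo "$argv"   # python app.py
```

Setting `runtime_args="other.py"` models `docker run myimage other.py`: CMD is replaced, ENTRYPOINT is not.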

Q8: What is the difference between COPY and ADD?

  • COPY — Simply copies files from host to container (preferred)
  • ADD — Same as COPY but also supports URLs and auto-extracts tar files
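ADD's tar handling is effectively a build-time `tar -x`. A local sketch of the equivalence (file names are made up; no Docker needed):

```shell
# Pack a file the way it would sit in the build context, then extract it
# the way `ADD archive.tar /dest/` would during the build.
cd "$(mktemp -d)"
mkdir src dest
echo "hello" > src/config.txt
tar -cf archive.tar -C src config.txt   # file in the build context
tar -xf archive.tar -C dest             # what ADD does implicitly
cat dest/config.txt   # hello
```

COPY, by contrast, would place `archive.tar` into the image as-is.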

🟡 Intermediate Level

Networking

Q9: What are Docker network types?

Network   Description                 Use Case
bridge    Default, isolated network   Single-host containers
host      Shares host network stack   High-performance needs
none      No networking               Fully isolated containers
overlay   Multi-host networking       Docker Swarm / distributed apps

docker network create my-network
docker run --network my-network myapp

Q10: How do containers communicate with each other?

Containers on the same custom bridge network can communicate using their container name as hostname.

# Both containers on the same network can reach each other by name
docker run --network my-net --name db postgres
docker run --network my-net --name app myapp   # app can reach "db"


Volumes & Storage

Q11: What is the difference between volumes, bind mounts, and tmpfs?

Type         Description                        Use Case
Volume       Managed by Docker                  Persistent data (databases)
Bind Mount   Maps host directory to container   Development, live code reload
tmpfs        Stored in memory only              Sensitive/temporary data

docker run -v myvolume:/data myapp          # volume
docker run -v /host/path:/container myapp   # bind mount

Q12: How do you persist data in Docker?

Use named volumes — data persists even after the container is removed.

docker volume create mydata
docker run -v mydata:/app/data myapp


Docker Compose

Q13: What is Docker Compose and when do you use it?

Docker Compose defines and runs multi-container applications using a single docker-compose.yml file.

version: "3.8"

services:
  app:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - db
    environment:
      - DB_HOST=db

  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=secret

volumes:
  pgdata:

docker-compose up -d      # Start all services
docker-compose down       # Stop and remove
docker-compose logs -f    # Follow logs

Q14: What is the difference between docker-compose up and docker-compose start?

  • up — Creates and starts containers (builds if needed)
  • start — Starts existing stopped containers only

Images & Optimization

Q15: How do you reduce Docker image size?

  • Use minimal base images like alpine
  • Use multi-stage builds
  • Combine RUN commands to reduce layers
  • Use .dockerignore to exclude unnecessary files

# Multi-stage build example
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci                 # install dependencies before building
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html

Q16: What is a .dockerignore file?

Similar to .gitignore — tells Docker which files to exclude from the build context.

node_modules
.git
*.log
.env
dist


🔴 Advanced Level

Security

Q17: How do you secure Docker containers?

  • Run containers as non-root user
  • Use read-only filesystems where possible
  • Scan images for vulnerabilities (docker scout)
  • Limit container capabilities with --cap-drop
  • Never store secrets in Dockerfiles — use Docker Secrets or environment variables

# Run as non-root
RUN adduser --disabled-password appuser
USER appuser

Q18: What is the difference between docker save and docker export?

           docker save              docker export
Target     Image                    Container
Includes   All layers & history     Flattened filesystem only
Use case   Backup/transfer images   Snapshot a running container

Performance & Production

Q19: How do you limit container resources?

docker run \
  --memory="512m" \
  --cpus="1.0" \
  --memory-swap="1g" \
  myapp

Q20: What is the difference between Docker Swarm and Kubernetes?

             Docker Swarm             Kubernetes
Complexity   Simple                   Complex but powerful
Setup        Easy                     Steeper learning curve
Scaling      Basic auto-scaling       Advanced auto-scaling
Community    Smaller                  Very large
Best for     Small–medium workloads   Large enterprise workloads

Q21: What happens when a Docker container crashes?

Use restart policies to handle crashes automatically:

docker run --restart=always myapp           # Always restart
docker run --restart=on-failure:3 myapp     # Restart up to 3 times on failure
docker run --restart=unless-stopped myapp   # Restart unless manually stopped


Dockerfile Best Practices

Q22: What are Dockerfile best practices?

# ✅ Good Dockerfile
FROM node:18-alpine                   # Use minimal base image
WORKDIR /app
COPY package*.json ./                 # Copy dependency files first
RUN npm ci --only=production          # Install dependencies
COPY . .                              # Copy source code
RUN adduser --disabled-password app   # Create non-root user
USER app                              # Switch to non-root
EXPOSE 3000
CMD ["node", "server.js"]


⚡ Quick-Fire Questions

Question                           Answer
Default Docker network?            bridge
Docker daemon config file?         /etc/docker/daemon.json
See container resource usage?      docker stats
Copy files into a container?       docker cp file.txt container:/path
stop vs kill?                      stop = graceful (SIGTERM), kill = forceful (SIGKILL)
What is a dangling image?          An untagged image, typically left behind by rebuilds
Clean up unused resources?         docker system prune

kong – client certs

Short answer: Yes—use the full chain (leaf + intermediates), not just the leaf. Don’t include the root CA in the chain you send.

Here’s how it applies in the three common Kong TLS cases:

  1. Clients → Kong (mTLS client-auth)
  • The client must present its leaf cert + intermediate(s) during the handshake.
  • Kong must trust the issuing CA (configure trusted CA(s) for client verification).
  • If you only send the leaf, you’ll hit errors like “unable to get local issuer certificate.”

Example (client side):

# build a fullchain for the client cert (no root)
cat client.crt intermediate.crt > client-fullchain.crt

# test against Kong (mTLS)
curl --cert client-fullchain.crt --key client.key https://kong.example.com/secure

  2. Kong → Upstream (mTLS to your backend)
  • In Kong, create a Certificate whose cert field is full chain (leaf + intermediates) and key is the private key.
  • Attach it to the service via client_certificate.
  • Ensure the upstream trusts the issuing CA.

Kong (DB mode, gist):

# upload cert+key (cert must be full chain)
POST /certificates
{ "cert": "<PEM fullchain>", "key": "<PEM key>" }

# bind to service
PATCH /services/{id}
{ "client_certificate": "<certificate_id>" }

  3. Kong’s server cert (TLS termination at Kong)
  • Serve a full chain so browsers/clients validate without needing to have the intermediate locally.
  • If using Kong Ingress, put the full chain in tls.crt of the Kubernetes secret.

Quick checks & common pitfalls

  • Do not include the root CA in the chain you send.
  • Order matters: leaf first, then each intermediate in order up to (but excluding) the root.
  • If you see “No required SSL certificate was sent” → the client didn’t present a cert at all.
  • If you see “certificate verify failed” / “unable to get local issuer certificate” → chain or trust store problem (usually missing intermediate).
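These rules can be exercised locally with a throwaway PKI (all CNs and file names below are made up): the verifier trusts only the root, and validation succeeds only because the intermediate travels with the leaf while the root is never sent.

```shell
# Build root -> intermediate -> leaf, then verify the way a server would:
# trust store holds ONLY the root; the intermediate comes from the chain.
cd "$(mktemp -d)"

# 1) root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Demo Root" -keyout root.key -out root.crt 2>/dev/null

# 2) intermediate CA, signed by the root
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=Demo Intermediate" -keyout int.key -out int.csr 2>/dev/null
printf 'basicConstraints=CA:TRUE\n' > ca.ext
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -extfile ca.ext -days 1 -out int.crt 2>/dev/null

# 3) leaf (client) cert, signed by the intermediate
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=demo-client" -keyout leaf.key -out leaf.csr 2>/dev/null
openssl x509 -req -in leaf.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -days 1 -out leaf.crt 2>/dev/null

# 4) the chain to present: leaf first, then intermediate, NO root
cat leaf.crt int.crt > client-fullchain.crt

# 5) verify: trust only the root; intermediate supplied as untrusted
openssl verify -CAfile root.crt -untrusted int.crt leaf.crt   # leaf.crt: OK
```

Drop the `-untrusted int.crt` and the same verify fails with "unable to get local issuer certificate", which is exactly the leaf-only failure mode described above.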

If you tell me which direction you’re doing mTLS (client→Kong or Kong→upstream) and how you deploy (Docker / KIC / bare-metal), I’ll give you the exact Kong config block to drop in.

latency

That message comes from Nginx (which Kong runs on). It means the upstream response didn’t fit in memory buffers, so Nginx spooled it to disk under /usr/local/kong/proxy_temp. It’s not fatal, but it adds I/O and latency.

Here are practical fixes—pick what matches your API pattern.

What you can do

1) Quick stopgap: don’t write to disk

Prevents temp-file writes; response still buffered in memory.

proxy_buffering on;
proxy_max_temp_file_size 0;

2) Stream responses (no buffering at all)

Great for large downloads/streaming; reduces latency & disk I/O. (Backpressure goes directly to upstream.)

proxy_buffering off;

3) Increase memory buffers (keep buffering, avoid disk)

Size these to your typical response size and concurrency.

proxy_buffering on;
proxy_buffer_size 64k;        # header/first buffer
proxy_buffers 32 64k;         # total ~2 MB per connection here
proxy_busy_buffers_size 256k; # busy threshold before spooling

4) If you must spool, make it fast

Put Kong’s proxy temp on tmpfs or faster disk:

  • Mount /usr/local/kong/proxy_temp on tmpfs (container/VM)
  • Or relocate it to faster storage with proxy_temp_path, e.g. proxy_temp_path /mnt/fast/proxy_temp 1 2; (path is illustrative)

5) Tame slow clients (common cause of spooling)

Slow downloads force Nginx to hold data. Tighten/adjust:

send_timeout 30s;
tcp_nodelay on;
keepalive_timeout 65s;

(Shorter timeouts reduce long-lived slow sends.)

How to set these in Kong

Pick one method you already use:

A) With kong.conf

Inject proxy-level directives (no need to rebuild Kong) using the nginx_proxy_ prefix:

nginx_proxy_proxy_buffering = on
nginx_proxy_proxy_max_temp_file_size = 0
nginx_proxy_proxy_buffer_size = 64k
nginx_proxy_proxy_buffers = 32 64k
nginx_proxy_proxy_busy_buffers_size = 256k

(One nginx_proxy_<directive> entry per directive; restart/reload Kong.)

B) With environment variables (containers)

KONG_NGINX_PROXY_PROXY_BUFFERING=on
KONG_NGINX_PROXY_PROXY_MAX_TEMP_FILE_SIZE=0
KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=64k
KONG_NGINX_PROXY_PROXY_BUFFERS="32 64k"
KONG_NGINX_PROXY_PROXY_BUSY_BUFFERS_SIZE=256k

C) With a custom Nginx template

If you already manage an nginx-kong.conf template, put the directives in the proxy context and redeploy.

D) Helm (Kubernetes)

If using the Kong Helm chart, set the proxy directives block (values key often named like proxy.nginx.* or nginxProxyDirectives) to include the same directives. (Exact key names vary by chart version—mirror the options above.)

What to choose

  • APIs returning big files / streaming → proxy_buffering off;
  • APIs with occasional large JSON → keep buffering on and increase proxy_buffer_* sizes; also set proxy_max_temp_file_size 0;
  • Mobile or very slow clients → consider buffering on + larger buffers, or keep temp on tmpfs

Validate

  1. Hit the route and watch /usr/local/kong/proxy_temp usage.
  2. Check error log: the “buffered to a temporary file” line should stop.
  3. Watch p95/p99 request latency; it usually drops when disk spooling stops.

If you share your response size range and traffic pattern (download/JSON/streaming), I’ll tune exact buffer counts/sizes so you don’t overspend RAM.

500 errors

Best practice for 500 errors

  • Enable error.log (debug) in Kong.
  • Use file-log or http-log plugin to capture structured traffic.
  • Use correlation-id plugin to align Kong + upstream logs.
  • Always check upstream service logs — a 500 originates there, not in Kong.

Kong automatically adds X-Kong-Request-ID (if enabled).

  • Add plugin: correlation-id
  • Configure a header (e.g. X-Request-ID) so you can trace across Kong logs, upstream app logs, and client logs.

API – response time

Here are fast, reliable ways to measure client-side API response time (and break it down) — from your laptop or from an EKS pod.

1) One-shot timing (curl)

This prints DNS, TCP, TLS, TTFB, and Total in one go:

curl -s -o /dev/null -w '
{ "http_code":%{http_code},
  "remote_ip":"%{remote_ip}",
  "dns":%{time_namelookup},
  "tcp":%{time_connect},
  "tls":%{time_appconnect},
  "ttfb":%{time_starttransfer},
  "total":%{time_total},
  "size":%{size_download},
  "speed":%{speed_download}
}
' https://api.example.com/path

Fields

  • dns: DNS lookup
  • tcp: TCP connect
  • tls: TLS handshake (0 if HTTP)
  • ttfb: time to first byte (request→first response byte)
  • total: full download time
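Since these `-w` variables are cumulative from the start of the request, per-phase durations are differences between adjacent values. A sketch with made-up sample numbers:

```shell
# Derive per-phase durations from cumulative curl timings (sample values
# are fabricated; plug in real -w output).
dns=0.012; tcp=0.034; tls=0.110; ttfb=0.180; total=0.210

phases=$(awk -v dns="$dns" -v tcp="$tcp" -v tls="$tls" \
             -v ttfb="$ttfb" -v total="$total" \
  'BEGIN { printf "connect=%.3f handshake=%.3f server=%.3f download=%.3f",
           tcp - dns, tls - tcp, ttfb - tls, total - ttfb }')
echo "$phases"   # connect=0.022 handshake=0.076 server=0.070 download=0.030
```

Here "server" (ttfb − tls) is the upstream think time, and "download" (total − ttfb) is payload transfer.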

2) From EKS (ephemeral pod)

Run N samples and capture a CSV:

kubectl run curl --rm -i --restart=Never --image=curlimages/curl:8.8.0 -- \
  sh -c 'for i in $(seq 1 50); do
    curl -s -o /dev/null -w "%{time_namelookup},%{time_connect},%{time_appconnect},%{time_starttransfer},%{time_total}\n" \
      https://api.example.com/health
  done' > timings.csv

(Use -i without -t; a TTY would inject carriage returns into the redirected CSV.)

Open timings.csv and look at columns: dns,tcp,tls,ttfb,total. Large ttfb means slow upstream/app; big tls means handshake issues; big gap total - ttfb means payload/download time.
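A percentile can be pulled from that CSV without extra tooling; the synthetic sample.csv below stands in for your real timings.csv:

```shell
# Generate 20 fake samples in the same dns,tcp,tls,ttfb,total format.
for i in $(seq 1 20); do
  printf '0.01,0.02,0.05,0.1%02d,0.2%02d\n' "$i" "$i"
done > sample.csv

# p95 of the "total" column (field 5): sort numerically, take the value
# at the ceiling of the 95th-percentile rank (integer math avoids
# floating-point rank surprises).
p95=$(cut -d, -f5 sample.csv | sort -g | \
  awk '{ v[NR] = $1 } END { idx = int((NR * 95 + 99) / 100); print v[idx] }')
echo "p95_total=$p95"   # p95_total=0.219
```

Swap field 5 for field 4 (ttfb) to see whether the tail is upstream latency or download time.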

3) Separate proxy vs upstream (Kong in the path)

Kong adds latency headers you can read on the client:

curl -s -o /dev/null -D - https://api.example.com/path | grep -i '^x-kong'
# x-kong-proxy-latency: <ms>     (time spent in Kong before the upstream)
# x-kong-upstream-latency: <ms>  (upstream processing time)

These help you see if delay is at the gateway or in the service.

4) Quick load/percentiles (pick one)

  • hey: hey -z 30s -c 20 https://api.example.com/path
  • vegeta: echo "GET https://api.example.com/path" | vegeta attack -rate=20 -duration=30s | vegeta report
  • k6 (scriptable):

// save as test.js
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 20,
  duration: '30s',
  thresholds: { http_req_duration: ['p(95)<300'] },
};

export default () => {
  const r = http.get('https://api.example.com/path');
  check(r, { 'status 200': (res) => res.status === 200 });
};

Run: k6 run test.js

5) App-level timers (optional)

Add a Server-Timing header from the API to expose your own phase timings (DB, cache, etc.). Then the client can read those headers to correlate.
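Parsing such a header on the client is a one-liner; the phase names and durations below are hypothetical:

```shell
# A made-up Server-Timing value as the API might emit it; split it into
# per-phase durations.
hdr='db;dur=53.2, cache;dur=1.4, app;dur=12.0'

parsed=$(printf '%s\n' "$hdr" | tr ',' '\n' \
  | sed -n 's/^ *\([a-z]*\);dur=\(.*\)$/\1 \2/p')
echo "$parsed"
```

This turns the header into one "phase duration" pair per line, ready to correlate against the curl-side ttfb.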

6) Common gotchas

  • Proxies can add latency; test both with and without proxy (NO_PROXY / --proxy).
  • Auth: measure with real headers/tokens; 401/403 will skew.
  • SNI/Host: if hitting by IP, use --resolve host:443:IP -H "Host: host" so cert/routing is correct.
  • Warmup: discard first few samples (JIT, caches, TLS session reuse).

If you want, share a few curl -w outputs from local vs EKS and I’ll pinpoint where the time is going (DNS/TLS/TTFB/payload).

tcpdump

Short answer: tcpdump can capture payload, but only if the traffic is plaintext.
If it’s HTTPS/TLS (which most APIs are), tcpdump will not show the body unless you decrypt it. It also won’t write into your “app logs”—it just captures packets.

Here are your practical options:

403

Short version: the cipher suite isn’t why you’re getting 403. A 403 is an authorization/ACL/WAF decision at L7. The TLS differences you saw (TLS 1.3 locally vs TLS 1.2 on EKS) are fine and expected depending on your load balancer policy.

What to do first (to find the real 403 source)

  1. Hit the EKS URL with verbose curl and look at headers:
curl -i -v https://your.domain/path

  • If the response headers look AWS-y and you see it immediately, the 403 likely comes from ALB/WAF.
  • If it reaches your pod/ingress and then 403s, it’s from Ingress/Kong/app (e.g., missing Authorization, IP allowlist, route rule, JWT/OIDC plugin, etc.).
  2. Check ALB access logs for the request:
  • If elb_status_code = 403, ALB/WAF blocked it (WAF rules often show up as 403).
  • If elb_status_code = 200 and target_status_code = 403, your target/app returned 403. (AWS Documentation)

Recommended TLS settings (good practice, not the 403 fix)

If you terminate TLS on an AWS ALB (via AWS Load Balancer Controller)

Use a modern security policy that enables TLS 1.3 and falls back to TLS 1.2:

metadata:
  annotations:
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06

That policy offers TLS 1.3 and 1.2 with strong ciphers. (If you truly want TLS 1.3 only: ELBSecurityPolicy-TLS13-1-3-2021-06.) (AWS Documentation, Kubernetes SIGs)

Tip: If you need maximum client compatibility, attach both an RSA and an ECDSA ACM cert (ALB supports multiple certs):
alb.ingress.kubernetes.io/certificate-arn: arn:...rsa,arn:...ecdsa. (Kubernetes SIGs)

If you terminate TLS on an NLB (TLS listener)

Pick a TLS 1.3 policy on the listener; NLB supports it and will use appropriate backend defaults. (AWS Documentation)

If you terminate TLS on ingress-nginx inside the cluster

Use TLS 1.2 + 1.3 and a modern cipher list:

# ConfigMap for ingress-nginx controller
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305"
  ssl-prefer-server-ciphers: "off"

(Those are the ingress-nginx docs’ modern defaults; TLS 1.3 ciphers are implicit.) (kubernetes.github.io)


Why your TLS info looks different

  • Local: likely direct to your app or local reverse proxy → negotiates TLS 1.3 (TLS_AES_128_GCM_SHA256 is one of the three TLS 1.3 ciphers).
  • EKS: you’re probably terminating at ALB (or an older policy) → negotiated TLS 1.2 (TLS_ECDHE_ECDSA_WITH_AES_128_…). That’s still strong if you use a modern ALB policy. (AWS Documentation)

Quick 403 triage checklist

  • AWS WAF attached? Check WAF logs for the rule ID; 403s here are common and configurable. (Repost, Amazon Web Services, Inc.)
  • Auth headers missing? Ingress/controller or a proxy hop might be stripping/renaming Authorization. Confirm with curl -v -H "Authorization: Bearer …".
  • Host/Path mismatch? Ingress host: and path rules must match exactly; wrong Host yields 4xx at ingress.
  • IP allowlist / Geo rules? Compare client IP vs policy.
  • Kong/OIDC/JWT plugin? Kong often 403s on invalid/absent token or unauthorized consumer; check Kong logs for the route and plugin decision.
  • CORS preflight? Browsers can show 403 on failed OPTIONS—ensure ingress routes OPTIONS.

If you share:

  • one ALB access log line for a blocked request, and/or
  • the response headers from curl -i -v on EKS,

I’ll tell you precisely whether the 403 is from ALB/WAF or from your Ingress/Kong/app—and the exact knob to turn.


Situation: locally you can call the on-prem API just fine, but from EKS you get 403.
Handshake is OK (so network + TLS basics are fine). This is almost never a cipher issue—403 is an L7 decision (WAF/edge/Kong/app/proxy policy).

Here’s a tight plan to pinpoint and fix it.

1) Identify who is returning the 403 (one command from EKS)

Run from an EKS pod (ephemeral curl pod shown below) and inspect headers:

kubectl run curl --rm -it --image=curlimages/curl:8.8.0 -- \
  sh -c 'curl -i -v https://your.onprem.host/path -H "Authorization: Bearer $TOKEN" --max-time 20'

Look for these telltales in the 403 response:

  • Cloudflare: server: cloudflare, cf-ray, cf-cache-status
  • F5: server: BigIP or ASM headers like X-WA-Info
  • Kong: server: kong, x-kong-*
  • Proxy (corp/Zscaler/etc.): Via, X-BlueCoat-Via, custom proxy headers
  • Your app/ingress: app-specific headers, or none
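Those telltales can be checked mechanically against saved response headers; a sketch using a fabricated Kong-issued 403 (header values are made up):

```shell
# Classify a 403's likely origin from headers saved via `curl -sD headers.txt ...`.
cat > headers.txt <<'EOF'
HTTP/2 403
server: kong/3.4.0
x-kong-response-latency: 1
EOF

if   grep -qi  '^server: cloudflare' headers.txt;         then who="edge: cloudflare"
elif grep -qiE 'BigIP|X-WA-Info' headers.txt;             then who="edge: f5"
elif grep -qiE '^(server: kong|x-kong-)' headers.txt;     then who="gateway: kong"
elif grep -qi  '^via:' headers.txt;                       then who="proxy hop"
else                                                           who="app/ingress (no telltale headers)"
fi
echo "$who"   # gateway: kong
```

The branch order matters: edge/WAF products sit in front of the gateway, so their headers should win when both appear.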

If your pod must use an egress proxy, also try explicitly through it:

kubectl run curlp --rm -it --image=curlimages/curl:8.8.0 -- \
 sh -c 'curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT --max-time 20'

2) Fix by source

A) WAF/Edge (Cloudflare/F5) is denying EKS

  • Likely IP allowlist/geo/ASN or bot rule.
    Fix: Allowlist your EKS NAT/Egress IP(s) or relax/adjust the specific WAF rule; confirm in WAF/Firewall logs.

B) Corporate egress proxy is denying

  • From EKS, traffic goes via a corp proxy that enforces access policy or strips Authorization.
    Fix:
    • Whitelist the destination domain on the proxy, or
    • Set NO_PROXY=your.onprem.host on the workload (if policy allows), or
    • Ensure the proxy allows the path and forwards all headers (esp. Authorization).
    • If the proxy does TLS inspection + your server expects client mTLS, either bypass inspection for this domain or use a two-hop mTLS model.

C) Kong/Ingress/app is denying

  • Common causes: missing/invalid token, ACL/IP allowlist plugin, wrong Host header/SNI, or CORS preflight.
    Fix:
    • Verify the request from EKS includes the same auth headers as local (env var or secret may be missing).
    • If calling by IP, set the correct Host header:
      curl -H "Host: your.onprem.host" --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/...
    • Check Kong logs for route/plugin decision (look for consumer, route, service, plugin names).

3) Quick differentials to run from EKS

  • Compare with and without proxy env:
# Without proxy (if allowed)
curl -i -v https://your.onprem.host/path

# Via proxy
curl -i -v https://your.onprem.host/path --proxy http://PROXY_HOST:PROXY_PORT

  • Preserve Host/SNI if you must target an IP:
curl -i -v --resolve your.onprem.host:443:W.X.Y.Z https://your.onprem.host/path

  • Include your real auth:
curl -i -v -H "Authorization: Bearer $TOKEN" https://your.onprem.host/path

  • See who really answered:
    Check Server, Via, cf-ray, x-kong*, X-Envoy-*, etc., in the 403 response.

4) About ciphers/TLS

Use modern policies that allow TLS 1.3 + 1.2 with ECDHE + AES-GCM/CHACHA. The exact cipher (e.g., TLSv1.3 TLS_AES_128_GCM_SHA256 locally vs TLSv1.2 ECDHE_*_AES_128_GCM_SHA256 from EKS) will not cause a 403. It’s fine to keep both TLS 1.2 and 1.3 enabled; the fix lies in the access/authorization layer, not the cipher.


If you paste the 403 response headers from the EKS curl (no body needed), I can tell you in one shot whether it’s WAF/edge, proxy, Kong, or your app, and give the exact knob to turn (allowlist, header forward, plugin, or bypass).