Core Concepts
Docker packages applications and their dependencies into containers — portable, isolated units that run identically everywhere. Understanding a few key primitives unlocks everything else.
Image
Read-only blueprint of your app — OS layers, runtime, code, and config. Built from a Dockerfile.
Container
A running instance of an image. Isolated process with its own filesystem, network, and PID namespace.
Dockerfile
Recipe that defines how an image is built — base image, commands, files copied, ports exposed.
Volume
Persistent storage that survives container restarts. Lives outside the container's layered filesystem.
Network
Virtual network allowing containers to communicate. Multiple drivers: bridge, host, overlay.
Registry
Remote store for images. Docker Hub is the default. Private registries: ECR, GCR, GHCR, etc.
How It All Fits Together
A Dockerfile builds an Image; an Image runs as a Container; Containers persist data in Volumes and talk to each other over Networks; Images are shared between hosts via a Registry.
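The whole lifecycle takes four commands; a sketch using a hypothetical my-app image (all names here are placeholders):

```shell
# Dockerfile -> image: build and tag from the current directory
docker build -t my-app:1.0 .

# image -> container: run it, publishing a port and attaching a named volume
docker run -d --name my-app -p 3000:3000 -v appdata:/data my-app:1.0

# image -> registry: re-tag into your namespace and push so any host can pull it
docker tag my-app:1.0 myuser/my-app:1.0
docker push myuser/my-app:1.0
```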
Installation & Setup
Install Docker Desktop (macOS / Windows) or Docker Engine (Linux). Desktop includes the daemon, CLI, Compose, BuildKit, and a GUI.
docker --version
docker info
docker run hello-world # pulls & runs a test container
Docker Context — Switch Between Environments
Contexts let you target different Docker daemons (local, remote SSH host, Kubernetes) from the same CLI.
docker context ls # list contexts
docker context create remote \
--docker "host=ssh://user@remote-host" # create SSH context
docker context use remote # switch to remote
docker context use default # back to local
Images
Images are immutable, layered filesystems. Each Dockerfile instruction adds a new layer. Layers are cached and shared across images — making builds fast and pulls efficient.
docker image ls # list local images
docker image ls -a # include intermediate layers
docker pull node:20-alpine # pull a specific tag
docker image inspect node:20-alpine # full metadata as JSON
docker image rm my-app:1.0 # remove image
docker image prune # remove dangling (untagged) images
docker image prune -a # remove ALL unused images
docker history my-app # show image layer history
docker tag my-app:latest my-app:1.0.2 # re-tag an image
docker save -o app.tar my-app # export image to tar
docker load -i app.tar # import image from tar
Image naming: [registry/][namespace/]name[:tag][@digest]
Examples: nginx:1.25-alpine, ghcr.io/myorg/api:sha-a3b4c5d
Containers
Containers are running (or stopped) instances of images. They are ephemeral by design — data written inside is lost when removed unless persisted via volumes.
Running Containers
docker run nginx # run in foreground (blocking)
docker run -d nginx # detached (background)
docker run -d --name web nginx # named container
docker run -d -p 8080:80 nginx # map host:container port
docker run -d -p 8080:80 -v ./html:/usr/share/nginx/html nginx # bind mount
docker run -e NODE_ENV=production node-app # pass env var
docker run --env-file .env node-app # env vars from file
docker run --rm node-app # auto-remove when stopped
docker run -it ubuntu bash # interactive terminal
docker run --cpus 0.5 --memory 256m app # limit resources
docker run --network mynet app # join a named network
docker run --restart unless-stopped app # auto-restart policy
Lifecycle Management
docker ps # running containers
docker ps -a # all containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
docker stop web # graceful stop (SIGTERM, then SIGKILL after a 10s grace period)
docker start web # restart a stopped container
docker restart web # stop + start
docker kill web # immediate SIGKILL
docker rm web # remove stopped container
docker rm -f web # force remove (even if running)
docker container prune # remove all stopped containers
Interacting with Running Containers
docker exec -it web bash # open shell inside container
docker exec web cat /etc/nginx/nginx.conf # run one-off command
docker logs web # stdout / stderr
docker logs -f web # follow (tail -f)
docker logs --tail 50 -f web # last 50 lines + follow
docker logs --since "10m" web # last 10 minutes
docker inspect web # full JSON metadata
docker inspect --format '{{.NetworkSettings.IPAddress}}' web
docker stats # live CPU / memory usage
docker top web # processes inside container
docker diff web # filesystem changes
docker cp web:/app/output.log ./ # copy file out
docker cp ./config.json web:/app/ # copy file in
Dockerfile Deep-Dive
The Dockerfile defines every layer of your image. Every instruction that changes the filesystem creates a new cached layer. Order matters — put infrequently-changing instructions first.
All Core Instructions
# syntax=docker/dockerfile:1
FROM node:20-alpine # base image (always first)
FROM node:20-alpine AS builder # named stage for multi-stage
ARG NODE_ENV=production # build-time variable
ENV PORT=3000 # runtime environment variable
ENV PORT=3000 HOST=0.0.0.0 # multiple at once
WORKDIR /app # set working dir (creates if missing)
COPY package*.json ./ # copy specific files first (cache trick)
COPY . . # copy everything
COPY --from=builder /app/dist ./dist # copy from another stage
COPY --chown=node:node . . # set file ownership
ADD app.tar.gz /app # like COPY but auto-extracts archives
ADD https://example.com/file ./ # can fetch URLs (prefer COPY otherwise)
RUN npm ci --omit=dev # run command during build (--omit=dev replaces deprecated --only=production)
RUN apt-get update \
&& apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/* # chain to reduce layers
EXPOSE 3000 # document which port the app uses
VOLUME ["/data"] # declare mount point
USER node # switch to non-root user
HEALTHCHECK --interval=30s --timeout=5s \
CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "server.js"] # default command (exec form)
ENTRYPOINT ["node", "server.js"] # always runs; CMD provides default args
LABEL maintainer="[email protected]" # metadata labels
ONBUILD COPY . . # trigger for child images
STOPSIGNAL SIGTERM # signal used to stop container
SHELL ["/bin/bash", "-c"] # change default shell
CMD vs ENTRYPOINT
| Scenario | Use |
|---|---|
| Default command, overridable by user | CMD ["node", "server.js"] |
| Always run this binary, allow args | ENTRYPOINT ["node"] + CMD ["server.js"] |
| Fixed wrapper script | ENTRYPOINT ["./entrypoint.sh"] |
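At run time the two combine like this; a sketch assuming a hypothetical image my-app built with ENTRYPOINT ["node"] and CMD ["server.js"]:

```shell
docker run my-app                     # runs: node server.js  (ENTRYPOINT + default CMD)
docker run my-app repl.js             # runs: node repl.js    (trailing args replace CMD)
docker run --entrypoint sh -it my-app # ENTRYPOINT itself requires an explicit flag to override
```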
.dockerignore
Always add a .dockerignore to exclude files from the build context — keeps builds fast and images lean.
node_modules
.git
.gitignore
*.log
dist
coverage
.env
.env.local
**/*.test.ts
README.md
.DS_Store
Build Optimization
Layer Caching — Order Your Instructions Wisely
# ❌ Bad — copies ALL files first, busts cache on every code change
COPY . .
RUN npm ci
# ✅ Good — installs deps only when package.json changes
COPY package*.json ./
RUN npm ci
COPY . .
BuildKit — Enable for Faster Builds
# Enable for single build
DOCKER_BUILDKIT=1 docker build -t my-app .
# Enable permanently (in /etc/docker/daemon.json; BuildKit is the default builder since Docker 23.0)
{ "features": { "buildkit": true } }
# Mount cache for package managers (Dockerfile)
# syntax=docker/dockerfile:1
RUN --mount=type=cache,target=/root/.npm \
npm ci --prefer-offline
# Secret injection at build time (never stored in layer)
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
npm ci
# Build with: docker build --secret id=npmrc,src=.npmrc .
Multi-Stage Builds
Multi-stage builds let you use heavyweight build tools (compilers, bundlers) without including them in the final image. Production images stay small and secure.
Node.js Example
# syntax=docker/dockerfile:1
# ── Stage 1: build frontend ──────────────────────────────
FROM node:20-alpine AS frontend-builder
WORKDIR /app
COPY packages/frontend/package*.json ./packages/frontend/
RUN npm ci
COPY packages/frontend ./packages/frontend
RUN npm run build
# ── Stage 2: build backend ───────────────────────────────
FROM node:20-alpine AS backend-builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY src ./src
# ── Stage 3: final runtime image ─────────────────────────
FROM node:20-alpine AS production
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=backend-builder /app/node_modules ./node_modules
COPY --from=backend-builder /app/src ./src
COPY --from=frontend-builder /app/packages/frontend/dist ./public
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "src/server.js"]
docker build --target backend-builder -t my-app:dev . # stop at a stage
docker build --target production -t my-app:prod . # final prod image
Volumes & Storage
Containers are ephemeral — their writable layer is destroyed on docker rm. Use volumes or bind mounts for persistence.
| Type | Use Case | Host Path |
|---|---|---|
| Named Volume | Databases, persistent app data | Managed by Docker (/var/lib/docker/volumes/…) |
| Bind Mount | Local dev, source code hot-reload | Any host path you specify |
| tmpfs | Temporary in-memory data, secrets | RAM only, never written to disk |
# ── Named Volumes ────────────────────────────────────────
docker volume create pgdata
docker volume ls
docker volume inspect pgdata
docker volume rm pgdata
docker volume prune # remove all unused volumes
# Attach named volume to container
docker run -d -v pgdata:/var/lib/postgresql/data postgres:16
# ── Bind Mounts ──────────────────────────────────────────
docker run -v ./src:/app/src:ro node-app # read-only bind
docker run --mount type=bind,source=$(pwd)/src,target=/app/src node-app
# ── tmpfs ────────────────────────────────────────────────
docker run --tmpfs /tmp:rw,size=64m app
docker run --mount type=tmpfs,destination=/tmp app
--mount syntax is verbose but explicit and error-safe. -v ./data:/data will silently create an empty directory if the source doesn't exist; --mount will error.
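To see the difference, try both forms against a source directory that does not exist (./missing and ./missing2 here are deliberately absent):

```shell
# -v: Docker silently creates an empty, root-owned ./missing on the host
docker run --rm -v ./missing:/data alpine ls /data

# --mount: the run fails fast with "bind source path does not exist"
docker run --rm --mount type=bind,source=$(pwd)/missing2,target=/data alpine ls /data
```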
Networking
Docker containers communicate over virtual networks. By default each container gets a virtual NIC on the bridge network. Containers on the same user-defined network can reach each other by container name (DNS auto-discovery).
| Driver | When to Use |
|---|---|
| bridge | Default. Isolated network on a single host. Containers talk by name. |
| host | Container shares host network stack. Max performance, no isolation. |
| overlay | Multi-host networking. Required for Docker Swarm. |
| none | No networking at all. |
| macvlan | Assign a MAC address — appears as physical device on network. |
docker network ls
docker network create appnet
docker network create --driver overlay swarmnet # for Swarm
docker network inspect appnet
docker network rm appnet
docker network prune
# Connect containers to a network
docker run -d --name db --network appnet postgres
docker run -d --name api --network appnet my-api
# api can now reach db via hostname "db"
# e.g. DATABASE_URL=postgres://db:5432/mydb
# Connect/disconnect running container
docker network connect appnet web
docker network disconnect appnet web
# Port publishing
docker run -p 3000:3000 app # bind all interfaces
docker run -p 127.0.0.1:3000:3000 app # localhost only (more secure)
docker run -P app # auto-assign host ports
Environment & Config
# Single var at runtime
docker run -e DATABASE_URL=postgres://... app
# From .env file
docker run --env-file .env.production app
# Inspect current env inside container
docker exec myapp env
# Dockerfile: build-time ARG vs runtime ENV
ARG COMMIT_SHA # available during build only
ENV COMMIT_SHA=$COMMIT_SHA # expose ARG as runtime ENV
ENV NODE_ENV=production # always set at runtime
# Build: docker build --build-arg COMMIT_SHA=abc123 .
⚠️ Anything set with ENV, or interpolated into a RUN command, is visible in docker history. Never bake secrets into layers; inject them with BuildKit's --mount=type=secret instead.
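You can verify this leak yourself; a sketch assuming a hypothetical my-app image that baked in an API_KEY via ENV:

```shell
# Both of these expose build-time values to anyone who can pull the image
docker history --no-trunc my-app | grep API_KEY
docker image inspect --format '{{.Config.Env}}' my-app
```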
Docker Compose
Compose lets you define and run multi-container applications from a single YAML file. One command starts your entire stack.
name: myapp
services:
api:
build:
context: .
dockerfile: Dockerfile
target: production
args:
NODE_ENV: production
image: myapp/api:latest
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgres://user:pass@db:5432/mydb
- REDIS_URL=redis://cache:6379
env_file:
- .env.local
depends_on:
db:
condition: service_healthy
cache:
condition: service_started
volumes:
- ./src:/app/src # hot-reload in dev
networks:
- backend
restart: unless-stopped
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
interval: 30s
timeout: 5s
retries: 3
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: mydb
volumes:
- pgdata:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- backend
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
interval: 10s
retries: 5
cache:
image: redis:7-alpine
command: redis-server --appendonly yes
volumes:
- redisdata:/data
networks:
- backend
nginx:
image: nginx:1.25-alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./certs:/etc/ssl/certs:ro
depends_on:
- api
networks:
- backend
volumes:
pgdata:
redisdata:
networks:
backend:
driver: bridge
Essential Compose Commands
docker compose up # start all services (foreground)
docker compose up -d # detached
docker compose up --build # rebuild images first
docker compose up -d --scale api=3 # run 3 replicas of api
docker compose down # stop & remove containers + networks
docker compose down -v # also remove volumes
docker compose stop # stop without removing
docker compose start # start stopped services
docker compose restart api # restart one service
docker compose ps # status of all services
docker compose logs -f api # follow logs of one service
docker compose logs -f --tail 100 # all services, last 100 lines
docker compose exec api sh # shell into running service
docker compose run --rm api npm test # one-off command
docker compose build --no-cache # full rebuild
docker compose config # validate & view merged YAML
docker compose pull # pull latest base images
docker compose top # processes in each service
Compose Patterns
Override Files (dev vs prod)
Use docker-compose.override.yml for dev-specific settings — it's merged automatically.
services:
api:
build:
target: development # use dev stage
volumes:
- .:/app # hot-reload
- /app/node_modules # exclude node_modules from bind
command: npm run dev
environment:
- NODE_ENV=development
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Profiles — Conditional Services
services:
adminer:
image: adminer
profiles: ["tools"] # only starts with --profile tools
# Start only tool-flagged services
docker compose --profile tools up -d
Registries & Pushing Images
docker login # Docker Hub
echo $GHCR_TOKEN | docker login ghcr.io -u USERNAME --password-stdin # GHCR (pipe a personal access token)
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com # ECR
docker build -t myuser/myapp:1.0.0 .
docker push myuser/myapp:1.0.0
docker push myuser/myapp:latest
# Multi-platform build & push in one step (buildx)
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myuser/myapp:1.0.0 \
--push .
CI/CD Integration
GitHub Actions — Build, Test & Push
name: Build & Push
on:
push:
branches: [main]
tags: ['v*']
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels)
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha # GitHub Actions cache
cache-to: type=gha,mode=max
CI Commands Reference
# Run tests inside a container (no local deps needed)
docker run --rm -v $(pwd):/app -w /app node:20 npm test
# Integration tests with Compose
docker compose -f docker-compose.test.yml up --abort-on-container-exit
docker compose -f docker-compose.test.yml down -v
# Scan image for vulnerabilities (Docker Scout)
docker scout cves myapp:latest
docker scout quickview myapp:latest
Deployment
Single Host (SSH + Compose)
# Create SSH context pointing to prod server
docker context create prod \
--docker "host=ssh://[email protected]"
# Run any docker/compose command against prod
docker --context prod compose up -d
docker --context prod ps
# Or set context for the whole session
export DOCKER_CONTEXT=prod
docker compose pull && docker compose up -d
Docker Swarm (Multi-Host)
# Initialize a Swarm
docker swarm init --advertise-addr <MANAGER-IP>
docker swarm join --token <TOKEN> <MANAGER-IP>:2377 # on worker nodes
docker node ls # list nodes
# Deploy a stack (compose file format)
docker stack deploy -c docker-compose.yml myapp
docker stack ls
docker stack ps myapp # tasks / containers
docker stack rm myapp # teardown
# Scale a service
docker service scale myapp_api=5
docker service ls
docker service update --image myapp/api:2.0 myapp_api # rolling update
Security
# 1. Run as non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# 2. Read-only root filesystem
docker run --read-only --tmpfs /tmp app
# 3. Drop capabilities
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE app
# 4. Prevent privilege escalation
docker run --security-opt no-new-privileges app
# 5. Use Docker secrets (Swarm)
echo "mysecretpassword" | docker secret create db_password -
# In compose: secrets: [db_password], then mount at /run/secrets/db_password
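Spelled out as a Compose fragment (service and secret names are illustrative; the official postgres image reads *_FILE variables from mounted secret files):

```yaml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password  # read from the secret mount
    secrets:
      - db_password

secrets:
  db_password:
    external: true  # created beforehand with: docker secret create db_password -
```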
# 6. Scan for CVEs
docker scout cves myapp:latest
# 7. Minimal base images
# Use alpine, distroless, or scratch where possible
FROM gcr.io/distroless/nodejs20-debian12
Debugging
# Shell into a running container
docker exec -it myapp sh
# Shell into an image (even if it has no shell installed)
docker debug myapp:latest # requires Docker Desktop
# Inspect why a container exited
docker inspect --format '{{json .State}}' myapp # exit code, OOM flag, error
docker logs --tail 50 myapp
# Live resource stats
docker stats
docker stats myapp --no-stream # one snapshot
# Inspect filesystem diffs
docker diff myapp
# Check what a container sees on the network
docker exec myapp nslookup db # DNS resolution
docker exec myapp wget -qO- http://db:5432 # crude connectivity test ("connection refused" means the port is closed)
# Validate Compose config
docker compose config
# System-wide cleanup
docker system df # disk usage
docker system prune # remove stopped containers, unused images, networks
docker system prune -a --volumes # nuclear option
Quick Reference Cheatsheet
Images
| Command | Description |
|---|---|
| docker build -t name:tag . | Build image from Dockerfile |
| docker pull image:tag | Pull image from registry |
| docker push image:tag | Push image to registry |
| docker image ls | List local images |
| docker image rm name | Remove image |
| docker image prune -a | Remove all unused images |
| docker history image | Show layer history |
| docker tag src dst | Tag/alias an image |
Containers
| Command | Description |
|---|---|
| docker run -d -p 8080:80 --name web nginx | Run detached with port and name |
| docker run -it ubuntu bash | Interactive shell |
| docker run --rm image | Auto-remove on exit |
| docker ps / docker ps -a | Running / all containers |
| docker stop / start / restart name | Lifecycle control |
| docker rm -f name | Force remove container |
| docker exec -it name sh | Shell into running container |
| docker logs -f name | Follow logs |
| docker stats | Live resource usage |
| docker inspect name | Full metadata JSON |
| docker cp src container:/dst | Copy files in/out |
| docker container prune | Remove all stopped containers |
Volumes & Networks
| Command | Description |
|---|---|
| docker volume create name | Create named volume |
| docker volume ls / rm / prune | Manage volumes |
| docker network create name | Create network |
| docker network connect net container | Attach container to network |
| docker network ls / rm / prune | Manage networks |
Compose
| Command | Description |
|---|---|
| docker compose up -d --build | Build & start in background |
| docker compose down -v | Stop, remove containers + volumes |
| docker compose logs -f service | Follow service logs |
| docker compose exec svc sh | Shell into service |
| docker compose run --rm svc cmd | One-off command |
| docker compose ps | Service status |
| docker compose config | Validate merged YAML |
System
| Command | Description |
|---|---|
| docker system df | Disk usage breakdown |
| docker system prune | Clean unused resources |
| docker system prune -a --volumes | Full nuclear cleanup |
| docker info | Docker daemon info |
| docker version | CLI + daemon versions |
| docker context ls / use name | Switch environment |